Sample records for engineering risk analysis

  1. Expert systems in civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostem, C.N.; Maher, M.L.

    1986-01-01

    This book presents the papers given at a symposium on expert systems in civil engineering. Topics considered at the symposium included problem solving using expert system techniques, construction schedule analysis, decision making and risk analysis, seismic risk analysis systems, an expert system for inactive hazardous waste site characterization, an expert system for site selection, knowledge engineering, and knowledge-based expert systems in seismic analysis.

  2. Teaching Risk Analysis in an Aircraft Gas Turbine Engine Design Capstone Course

    DTIC Science & Technology

    2016-01-01

    American Institute of Aeronautics and Astronautics 1 Teaching Risk Analysis in an Aircraft Gas Turbine Engine Design Capstone Course...development costs, engine production costs, and scheduling (Byerley A. R., 2013) as well as the linkage between turbine inlet temperature, blade cooling...analysis SE majors have studied and how this is linked to the specific issues they must face in aircraft gas turbine engine design. Aeronautical and

  3. The JPL Cost Risk Analysis Approach that Incorporates Engineering Realism

    NASA Technical Reports Server (NTRS)

    Harmon, Corey C.; Warfield, Keith R.; Rosenberg, Leigh S.

    2006-01-01

    This paper discusses the JPL Cost Engineering Group (CEG) cost risk analysis approach that accounts for all three types of cost risk. It will also describe the evaluation of historical cost data upon which this method is based. This investigation is essential in developing a method that is rooted in engineering realism and produces credible, dependable results to aid decision makers.

  4. How Engineers Really Think About Risk: A Study of JPL Engineers

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Chattopadhyay, Deb; Valerdi, Ricardo

    2011-01-01

The objective of this work is to improve the risk assessment practices used by JPL's concurrent engineering teams during the mission design process by (1) developing effective ways to identify and assess mission risks, (2) providing a process for more effective dialog between stakeholders about the existence and severity of mission risks, and (3) enabling the analysis of interactions of risks across concurrent engineering roles.

  5. Orbit Transfer Vehicle (OTV) engine, phase A study. Volume 2: Study

    NASA Technical Reports Server (NTRS)

    Mellish, J. A.

    1979-01-01

The hydrogen-oxygen engine used in the orbit transfer vehicle is described. The engine design is analyzed, and minimum engine performance and man-rating requirements are discussed. Reliability and safety analysis test results are presented, and payload, risk and cost, and engine installation parameters are defined. Engine tests were performed, including performance analysis, structural analysis, thermal analysis, turbomachinery analysis, controls analysis, and cycle analysis.

  6. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
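
    As a concrete illustration of the pattern described above (not the PFA software itself), the following minimal Python sketch drives a hypothetical stress-versus-strength failure model with uncertain parameters to estimate a failure probability, then tempers that estimate with assumed test experience through a generic Beta-Binomial update. All distributions, parameter values, and test counts are invented for illustration.

```python
# Minimal sketch: an engineering analysis model (hypothetical stress-vs-strength
# margin) is driven by uncertain parameters to estimate a failure probability,
# which is then tempered by assumed test experience via a simple Bayesian update.
# All values below are illustrative assumptions, not values from the cited work.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Uncertain inputs to a hypothetical failure model.
applied_stress = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n_samples)  # MPa
material_strength = rng.normal(loc=420.0, scale=35.0, size=n_samples)           # MPa
model_error = rng.normal(loc=1.0, scale=0.05, size=n_samples)  # modeling-accuracy uncertainty

# Engineering analysis model: failure when predicted stress exceeds strength.
failures = model_error * applied_stress > material_strength
p_fail_analysis = failures.mean()
print(f"analysis-only failure probability ~ {p_fail_analysis:.4f}")

# Fold in test experience with a conjugate Beta-Binomial update:
# treat the analysis estimate as a weak Beta prior, then update with
# (hypothetical) 20 successful hot-fire tests and 0 failures.
prior_weight = 10.0                      # assumed "equivalent trials" carried by the analysis
alpha0 = p_fail_analysis * prior_weight
beta0 = (1.0 - p_fail_analysis) * prior_weight
tests, test_failures = 20, 0
alpha_post = alpha0 + test_failures
beta_post = beta0 + (tests - test_failures)
p_fail_posterior = alpha_post / (alpha_post + beta_post)
print(f"posterior failure probability after tests ~ {p_fail_posterior:.4f}")
```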

  7. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  8. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  9. National Research Council Dialogue to Assess Progress on NASA's Systems Engineering Cost/Risk Analysis Capability Roadmap Development: General Background and Introduction

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria

    2005-01-01

Contents include the following: General Background and Introduction of Capability Roadmaps for Systems Engineering Cost/Risk Analysis. Agency Objectives. Strategic Planning Transformation. Review Capability Roadmaps and Schedule. Review Purpose of NRC Review. Capability Roadmap Development (Progress to Date).

  10. Application of Statistics in Engineering Technology Programs

    ERIC Educational Resources Information Center

    Zhan, Wei; Fink, Rainer; Fang, Alex

    2010-01-01

    Statistics is a critical tool for robustness analysis, measurement system error analysis, test data analysis, probabilistic risk assessment, and many other fields in the engineering world. Traditionally, however, statistics is not extensively used in undergraduate engineering technology (ET) programs, resulting in a major disconnect from industry…

  11. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of External Tank (ET) foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  12. Bridging the Engineering and Medicine Gap

    NASA Technical Reports Server (NTRS)

    Walton, M.; Antonsen, E.

    2018-01-01

    A primary challenge NASA faces is communication between the disparate entities of engineers and human system experts in life sciences. Clear communication is critical for exploration mission success from the perspective of both risk analysis and data handling. The engineering community uses probabilistic risk assessment (PRA) models to inform their own risk analysis and has extensive experience managing mission data, but does not always fully consider human systems integration (HSI). The medical community, as a part of HSI, has been working 1) to develop a suite of tools to express medical risk in quantitative terms that are relatable to the engineering approaches commonly in use, and 2) to manage and integrate HSI data with engineering data. This talk will review the development of the Integrated Medical Model as an early attempt to bridge the communication gap between the medical and engineering communities in the language of PRA. This will also address data communication between the two entities in the context of data management considerations of the Medical Data Architecture. Lessons learned from these processes will help identify important elements to consider in future communication and integration of these two groups.

  13. Novel Risk Engine for Diabetes Progression and Mortality in USA: Building, Relating, Assessing, and Validating Outcomes (BRAVO).

    PubMed

    Shao, Hui; Fonseca, Vivian; Stoecker, Charles; Liu, Shuqian; Shi, Lizheng

    2018-05-03

There is an urgent need to update diabetes prediction models, which have relied on the United Kingdom Prospective Diabetes Study (UKPDS) that dates back to 1970s European populations. The objective of this study was to develop a risk engine with multiple risk equations using a recent patient cohort with type 2 diabetes mellitus reflective of the US population. A total of 17 risk equations for predicting diabetes-related microvascular and macrovascular events, hypoglycemia, mortality, and progression of diabetes risk factors were estimated using data from the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial (n = 10,251). Internal and external validation processes were used to assess performance of the Building, Relating, Assessing, and Validating Outcomes (BRAVO) risk engine. One-way sensitivity analysis was conducted to examine the impact of risk factors on mortality at the population level. The BRAVO risk engine added several risk factors, including severe hypoglycemia and common US racial/ethnic categories, compared with the UKPDS risk engine. The BRAVO risk engine also modeled the mortality escalation associated with intensive glycemic control (i.e., glycosylated hemoglobin < 6.5%). External validation showed good predictive power on 28 endpoints observed in other clinical trials (slope = 1.071, R² = 0.86). The BRAVO risk engine for the US diabetes cohort provides an alternative to the UKPDS risk engine. It can be applied to assist clinical and policy decision making, such as cost-effective resource allocation in the USA.
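
    The external-validation summary above (slope and R²) can be illustrated with a short calibration check of predicted versus observed event rates. The sketch below uses made-up placeholder rates rather than ACCORD or BRAVO data, and fits a line through the origin as one plausible way such a slope could be computed.

```python
# Minimal sketch of external validation of a risk engine: predicted event rates
# are compared against observed rates from independent trials by fitting a line
# through the origin and computing R^2. The values are invented placeholders.
import numpy as np

predicted = np.array([0.021, 0.034, 0.050, 0.012, 0.080, 0.045, 0.027, 0.063])
observed  = np.array([0.024, 0.031, 0.055, 0.013, 0.086, 0.049, 0.025, 0.069])

# Slope of observed vs. predicted through the origin (ideal calibration: slope ~ 1).
slope = np.sum(predicted * observed) / np.sum(predicted ** 2)

# Coefficient of determination of the calibration fit.
residuals = observed - slope * predicted
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"calibration slope = {slope:.3f}, R^2 = {r_squared:.2f}")
```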

  14. Risk evaluation of highway engineering project based on the fuzzy-AHP

    NASA Astrophysics Data System (ADS)

    Yang, Qian; Wei, Yajun

    2011-10-01

Engineering projects are social activities that integrate technology, economics, management, and organization. There are uncertainties in every aspect of an engineering project, and risk management urgently needs to be strengthened. Based on an analysis of the characteristics of highway engineering and a study of the basic theory of risk evaluation, the paper built an index system for highway project risk evaluation. In addition, based on fuzzy mathematics principles, the analytic hierarchy process (AHP) was applied, and a comprehensive fuzzy-AHP appraisal model was set up for the risk evaluation of expressway concession projects. The validity and practicability of the risk evaluation were verified by applying the model to an actual project.
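
    For readers unfamiliar with the fuzzy-AHP combination described above, the following sketch shows one common formulation: AHP weights taken from the principal eigenvector of a pairwise comparison matrix, combined with a fuzzy membership matrix over risk grades. The three criteria, comparison judgments, and membership values are hypothetical, not taken from the paper.

```python
# Minimal sketch of a fuzzy-AHP style evaluation for a hypothetical
# three-criterion highway-project risk index. Numbers are illustrative only.
import numpy as np

# Pairwise comparison of criteria (e.g., technical, financial, organizational risk).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
lambda_max = np.max(np.real(eigvals))
ci = (lambda_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
assert cr < 0.10, "pairwise comparisons are inconsistent"

# Fuzzy membership of each criterion in risk grades (low, medium, high),
# e.g. from expert scoring; each row sums to 1.
R = np.array([
    [0.2, 0.5, 0.3],
    [0.5, 0.4, 0.1],
    [0.6, 0.3, 0.1],
])

# Comprehensive evaluation vector: weighted combination of memberships.
B = weights @ R
grades = ["low", "medium", "high"]
print(dict(zip(grades, np.round(B, 3))))
print("overall risk grade:", grades[int(np.argmax(B))])
```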

  15. Compendium of Abstracts on Statistical Applications in Geotechnical Engineering.

    DTIC Science & Technology

    1983-09-01

    research in the application of probabilistic and statistical methods to soil mechanics, rock mechanics, and engineering geology problems have grown markedly...probability, statistics, soil mechanics, rock mechanics, and engineering geology. 2. The purpose of this report is to make available to the U. S...Deformation Dynamic Response Analysis Seepage, Soil Permeability and Piping Earthquake Engineering, Seismology, Settlement and Heave Seismic Risk Analysis

  16. Automotive Stirling Engine Mod 1 Design Review, Volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

Risk assessment, safety analysis of the automotive Stirling engine (ASE) Mod I, design criteria and materials properties for the ASE Mod I and reference engines, combustion air blower development, and the Mod I engine starter motor are discussed. The Stirling engine system, external heat system, hot engine system, cold engine system, and engine drive system are also discussed.

  17. Quantifying the Metrics That Characterize Safety Culture of Three Engineered Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, Julie; Ernesti, Mary; Tokuhiro, Akira

    2002-07-01

With potential energy shortages and increasing electricity demand, the nuclear energy option is being reconsidered in the United States. Public opinion will have a considerable voice in policy decisions that will 'road-map' the future of nuclear energy in this country. This report is an extension of the last author's work on the 'safety culture' associated with three engineered systems (automobiles, commercial airplanes, and nuclear power plants) in Japan and the United States. Safety culture, in brief, is defined as a specifically developed culture based on societal and individual interpretations of the balance of real, perceived, and imagined risks versus the benefits drawn from utilizing a given engineered system. The method of analysis is a modified scale analysis, with two fundamental Eigen-metrics, time- (t) and number-scales (N), that describe both engineered systems and human factors. The scale analysis approach is appropriate because human perception of risk, perception of benefit, and level of (technological) acceptance are inherently subjective, therefore 'fuzzy' and rarely quantifiable in exact magnitude. Perception of risk, expressed in terms of the psychometric factors 'dread risk' and 'unknown risk', contains both time- and number-scale elements. Various engineered-system accidents with fatalities, as reported by the mass media, are characterized by t and N and are presented in this work using the scale analysis method. We contend that the level of acceptance implies a perception of benefit at least two orders of magnitude larger than the perception of risk. The 'amplification' influence of mass media is also deduced to be 100- to 1000-fold the actual number of fatalities/serious injuries in a nuclear-related accident.

  18. Risk Analysis of Earth-Rock Dam Failures Based on Fuzzy Event Tree Method

    PubMed Central

    Fu, Xiao; Gu, Chong-Shi; Su, Huai-Zhi; Qin, Xiang-Nan

    2018-01-01

    Earth-rock dams make up a large proportion of the dams in China, and their failures can induce great risks. In this paper, the risks associated with earth-rock dam failure are analyzed from two aspects: the probability of a dam failure and the resulting life loss. An event tree analysis method based on fuzzy set theory is proposed to calculate the dam failure probability. The life loss associated with dam failure is summarized and refined to be suitable for Chinese dams from previous studies. The proposed method and model are applied to one reservoir dam in Jiangxi province. Both engineering and non-engineering measures are proposed to reduce the risk. The risk analysis of the dam failure has essential significance for reducing dam failure probability and improving dam risk management level. PMID:29710824
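
    A minimal sketch of the kind of fuzzy event-tree arithmetic the abstract refers to is given below: branch probabilities are treated as triangular fuzzy numbers, multiplied along a failure path with a standard triangular approximation, and defuzzified by the centroid. The initiating event and branch values are hypothetical, not taken from the cited dam study.

```python
# Minimal sketch of an event-tree calculation with fuzzy branch probabilities.
# Each branch probability is a triangular fuzzy number (lower, modal, upper);
# path probabilities are combined by component-wise multiplication (a common
# triangular approximation) and defuzzified by the centroid.
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    lo: float
    mode: float
    hi: float

    def times(self, other: "TriFuzzy") -> "TriFuzzy":
        # Approximate product of two triangular fuzzy numbers.
        return TriFuzzy(self.lo * other.lo, self.mode * other.mode, self.hi * other.hi)

    def centroid(self) -> float:
        # Defuzzified (crisp) value of a triangular fuzzy number.
        return (self.lo + self.mode + self.hi) / 3.0

# Hypothetical event-tree path: flood initiating event -> overtopping -> breach.
p_flood       = TriFuzzy(1e-3, 2e-3, 4e-3)   # annual probability of design flood
p_overtopping = TriFuzzy(0.05, 0.10, 0.20)   # overtopping given the flood
p_breach      = TriFuzzy(0.20, 0.35, 0.50)   # breach given overtopping

p_failure_path = p_flood.times(p_overtopping).times(p_breach)
print("fuzzy annual failure probability:",
      (p_failure_path.lo, p_failure_path.mode, p_failure_path.hi))
print(f"defuzzified estimate ~ {p_failure_path.centroid():.2e} per year")
```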

  19. Understanding safety and production risks in rail engineering planning and protection.

    PubMed

    Wilson, John R; Ryan, Brendan; Schock, Alex; Ferreira, Pedro; Smith, Stuart; Pitsopoulos, Julia

    2009-07-01

Much of the published human factors work on risk is concerned with safety and, within this, with the prediction and analysis of human error and with human reliability assessment. Less has been published on human factors contributions to understanding and managing project, business, engineering and other forms of risk, and still less on jointly assessing risks related to broad issues of 'safety' and broad issues of 'production' or 'performance'. This paper contains a general commentary on human factors and the assessment of risk of various kinds, in the context of the aims of ergonomics and concerns about being too risk averse. The paper then describes a specific project, in rail engineering, where the notion of a human factors case has been employed to analyse engineering functions and related human factors issues. A human factors issues register for potential system disturbances has been developed, prior to a human factors risk assessment, which jointly covers safety and production (engineering delivery) concerns. The paper concludes with a commentary on the potential relevance of a resilience engineering perspective to understanding rail engineering systems risk. Design, planning and management of complex systems will increasingly have to address the issue of making trade-offs between safety and production, and ergonomics should be central to this. The paper addresses the relevant issues and does so in an under-published domain: rail systems engineering work.

  20. Optical and system engineering in the development of a high-quality student telescope kit

    NASA Astrophysics Data System (ADS)

    Pompea, Stephen M.; Pfisterer, Richard N.; Ellis, Scott; Arion, Douglas N.; Fienberg, Richard Tresch; Smith, Thomas C.

    2010-07-01

The Galileoscope student telescope kit was developed by a volunteer team of astronomers, science education experts, and optical engineers in conjunction with the International Year of Astronomy 2009. This refracting telescope is in production, with over 180,000 units produced and distributed and another 25,000 units in production. The telescope was designed to resolve the rings of Saturn and to be usable in urban areas. The telescope system requirements, performance metrics, and architecture were established after an analysis of current inexpensive telescopes and student telescope kits. The optical design approaches used in the various prototypes and the optical system engineering tradeoffs will be described. Risk analysis, risk management, and change management were critical, as was cost management, since the final product was to cost around $15 (but had to perform as well as $100 telescopes). In the system engineering of the Galileoscope, a variety of analysis and testing approaches were used, including stray light design and analysis using the powerful optical analysis program FRED.

  1. AN IMPROVEMENT TO THE MOUSE COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM

    EPA Science Inventory

The original MOUSE (Modular Oriented Uncertainty System) system was designed to deal with the problem of uncertainties in environmental engineering calculations, such as a set of engineering cost or risk analysis equations. It was especially intended for use by individuals with l...

  2. Cyber-Informed Engineering: The Need for a New Risk Informed and Design Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Joseph Daniel; Anderson, Robert Stephen

Current engineering and risk management methodologies do not contain the foundational assumptions required to address the intelligent adversary’s capabilities in malevolent cyber attacks. Current methodologies focus on equipment failures or human error as initiating events for a hazard, while cyber attacks use the functionality of a trusted system to perform operations outside of the intended design and without the operator’s knowledge. These threats can by-pass or manipulate traditionally engineered safety barriers and present false information, invalidating the fundamental basis of a safety analysis. Cyber threats must be fundamentally analyzed from a completely new perspective where neither equipment nor human operation can be fully trusted. A new risk analysis and design methodology needs to be developed to address this rapidly evolving threatscape.

  3. The reliability-quality relationship for quality systems and quality risk management.

    PubMed

    Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M

    2012-01-01

    Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA) are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. This essay discusses the meaning of reliability and its linkages with quality systems and quality risk management.

  4. Extended Editorial: Research and Education in Reliability, Maintenance, Quality Control, Risk and Safety.

    ERIC Educational Resources Information Center

    Ramalhoto, M. F.

    1999-01-01

    Introduces a special theme journal issue on research and education in quality control, maintenance, reliability, risk analysis, and safety. Discusses each of these theme concepts and their applications to naval architecture, marine engineering, and industrial engineering. Considers the effects of the rapid transfer of research results through…

  5. 49 CFR 238.203 - Static end strength.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., sufficient to describe the actual construction of the equipment; (iii) Engineering analysis sufficient to..., engineering analysis, and risk mitigation measures described in this paragraph, demonstrating that the use of... the Federal Docket Management System and posted on its web site at http://www.regulations.gov. (h...

  6. 49 CFR 238.203 - Static end strength.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., sufficient to describe the actual construction of the equipment; (iii) Engineering analysis sufficient to..., engineering analysis, and risk mitigation measures described in this paragraph, demonstrating that the use of... the Federal Docket Management System and posted on its web site at http://www.regulations.gov. (h...

  7. Shortcuts in complex engineering systems: a principal-agent approach to risk management.

    PubMed

    Garber, Russ; Paté-Cornell, Elisabeth

    2012-05-01

    In this article, we examine the effects of shortcuts in the development of engineered systems through a principal-agent model. We find that occurrences of illicit shortcuts are closely related to the incentive structure and to the level of effort that the agent is willing to expend from the beginning of the project to remain on schedule. Using a probabilistic risk analysis to determine the risks of system failure from these shortcuts, we show how a principal can choose optimal settings (payments, penalties, and inspections) that can deter an agent from cutting corners and maximize the principal's value through increased agent effort. We analyze the problem for an agent with limited liability. We consider first the case where he is risk neutral; we then include the case where he is risk averse. © 2011 Society for Risk Analysis.
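
    The deterrence logic in such principal-agent models can be sketched very simply for a risk-neutral agent: a shortcut is taken only if its benefit exceeds the expected penalty, so the principal looks for the cheapest inspection intensity that removes the incentive. The monetary values, penalty, and inspection-cost figures below are invented, and the model is a toy illustration, not the authors' formulation.

```python
# Minimal sketch of shortcut deterrence in a principal-agent setting: a
# risk-neutral agent cuts a corner only if the schedule benefit exceeds the
# expected penalty (detection probability times fine). The principal searches
# for the cheapest inspection rate that removes the incentive.

benefit_to_agent = 50_000.0      # schedule/cost saving the agent captures from a shortcut
penalty_if_caught = 400_000.0    # contractual penalty if the shortcut is detected
inspection_cost = 8_000.0        # principal's cost per unit of inspection intensity

def agent_takes_shortcut(p_detect: float) -> bool:
    """Risk-neutral agent compares the benefit with the expected penalty."""
    return benefit_to_agent > p_detect * penalty_if_caught

def cheapest_deterring_inspection(step: float = 0.01) -> float:
    """Smallest detection probability (on a grid) that deters the shortcut."""
    p = 0.0
    while p <= 1.0:
        if not agent_takes_shortcut(p):
            return p
        p += step
    raise ValueError("no feasible inspection level deters the shortcut")

p_star = cheapest_deterring_inspection()
print(f"deterring detection probability ~ {p_star:.2f}, "
      f"expected inspection cost ~ {p_star * inspection_cost:,.0f}")
```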

  8. Borescope Inspection Management for Engine

    NASA Astrophysics Data System (ADS)

    Zhongda, Yuan

    2018-03-01

In this paper, we try to explain the problems that need to be improved from two perspectives: maintenance program management and maintenance human risk control. On the basis of an optimization analysis of the borescope inspection maintenance scheme, the defect characteristics and propagation rules of engine hot-section components are summarized, and some optimization measures are introduced. The paper analyses the human risk problem in engine borescope inspection from the aspects of qualification management, training requirements, and improvement of the system, and puts forward some suggestions on management.

  9. ARC-2009-ACD09-0063-001

    NASA Image and Video Library

    2009-04-16

    Aeronautics Technical Seminar: Dr. Elisabeth Pate-Cornell, Burt and Deedee McMurtry professor and chair of the Department of Management Science and Engineering at Stanford University presents 'Lessons Learned in Applying Engineering Risk Analysis'.

  10. ARC-2009-ACD09-0063-002

    NASA Image and Video Library

    2009-04-16

    Aeronautics Technical Seminar: Dr. Elisabeth Pate-Cornell, Burt and Deedee McMurtry professor and chair of the Department of Management Science and Engineering at Stanford University presents 'Lessons Learned in Applying Engineering Risk Analysis'.

  11. ARC-2009-ACD09-0063-005

    NASA Image and Video Library

    2009-04-16

    Aeronautics Technical Seminar: Dr. Elisabeth Pate-Cornell, Burt and Deedee McMurtry professor and chair of the Department of Management Science and Engineering at Stanford University presents 'Lessons Learned in Applying Engineering Risk Analysis'.

  12. ARC-2009-ACD09-0063-004

    NASA Image and Video Library

    2009-04-16

    Aeronautics Technical Seminar: Dr. Elisabeth Pate-Cornell, Burt and Deedee McMurtry professor and chair of the Department of Management Science and Engineering at Stanford University presents 'Lessons Learned in Applying Engineering Risk Analysis'.

  13. ARC-2009-ACD09-0063-003

    NASA Image and Video Library

    2009-04-16

    Aeronautics Technical Seminar: Dr. Elisabeth Pate-Cornell, Burt and Deedee McMurtry professor and chair of the Department of Management Science and Engineering at Stanford University presents 'Lessons Learned in Applying Engineering Risk Analysis'.

  14. Introduction to Flight Test Engineering (Introduction aux techniques des essais en vol)

    DTIC Science & Technology

    2005-07-01

    or aircraft parameters • Calculations in the frequency domain ( Fast Fourier Transform) • Data analysis with dedicated software for: • Signal...density Fast Fourier Transform Transfer function analysis Frequency response analysis Etc. PRESENTATION Color/black & white Display screen...envelope by operating the airplane at increasing ranges - representing increasing risk - of engine operation, airspeeds both fast and slow, altitude

  15. Application of Probabilistic Methods to Assess Risk Due to Resonance in the Design of J-2X Rocket Engine Turbine Blades

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; DeHaye, Michael; DeLessio, Steven

    2011-01-01

The LOX-hydrogen J-2X rocket engine, which is proposed for use as an upper-stage engine for numerous earth-to-orbit and heavy-lift launch vehicle architectures, is presently in the design phase and will move shortly to the initial development test phase. Analysis of the design has revealed numerous potential resonance issues with hardware in the turbomachinery turbine-side flow path. The analysis of the fuel pump turbine blades requires particular care because resonant failure of the blades, which rotate in excess of 30,000 revolutions per minute (RPM), could be catastrophic for the engine and the entire launch vehicle. This paper describes a series of probabilistic analyses performed to assess the risk of failure of the turbine blades due to resonant vibration during past and present test series. Some significant results are that the probability of failure during a single complete engine hot-fire test is low (1%) because of the small likelihood of resonance, but that the probability increases to around 30% for a more focused turbomachinery-only test because all speeds will be ramped through and there is a greater likelihood of dwelling at more speeds. These risk calculations have been invaluable for use by program management in deciding whether risk-reduction methods such as dampers are necessary immediately or whether the test can be performed before the risk-reduction hardware is ready.
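
    The flavor of such a probabilistic resonance assessment can be conveyed with a Monte Carlo sketch: an uncertain blade natural frequency is sampled and checked against engine-order excitation lines at the dwell speeds of a hypothetical test profile. All frequencies, orders, speeds, and margins below are placeholders, not J-2X values.

```python
# Minimal sketch of a probabilistic resonance-risk estimate: count the Monte
# Carlo draws in which an uncertain blade natural frequency falls within a small
# margin of an engine-order excitation line at one of the dwell speeds.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

# Uncertain first-bending natural frequency of the blade (assumed normal), Hz.
natural_freq = rng.normal(loc=5200.0, scale=150.0, size=n_samples)

# Hypothetical test profile: dwell speeds in RPM and exciting engine orders.
dwell_speeds_rpm = np.array([27_000.0, 29_500.0, 31_000.0, 33_000.0])
engine_orders = np.array([9, 10, 11])

# Excitation frequencies (Hz) for every dwell speed / engine order combination.
excitation_freqs = np.outer(dwell_speeds_rpm / 60.0, engine_orders).ravel()

# A draw "resonates" if its natural frequency lies within +/- 2% of any excitation line.
margin = 0.02
hits = np.any(
    np.abs(natural_freq[:, None] - excitation_freqs[None, :])
    <= margin * excitation_freqs[None, :],
    axis=1,
)
print(f"estimated probability of encountering resonance during the test ~ {hits.mean():.2%}")
```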

  16. Engineering risk reduction in satellite programs

    NASA Technical Reports Server (NTRS)

    Dean, E. S., Jr.

    1979-01-01

Methods developed in planning and executing system safety engineering programs for Lockheed satellite integration contracts are presented. These procedures establish the applicable safety design criteria, document design compliance and assess the residual risks where non-compliant design is proposed, and provide for hazard analysis of system-level test, handling, and launch preparations. Operations hazard analysis identifies product protection and product liability hazards prior to the preparation of operational procedures and provides safety requirements for inclusion in them. The method developed for documenting all residual hazards for the attention of program management assures an acceptable minimum level of risk prior to program deployment. The results are significant for persons responsible for managing or engineering the deployment and production of complex, high-cost equipment, who, under current product liability law and cost/time constraints, have a responsibility to minimize the possibility of an accident and should have documentation to provide a defense in a product liability suit.

  17. An Example of Risk Informed Design

    NASA Technical Reports Server (NTRS)

    Banke, Rick; Grant, Warren; Wilson, Paul

    2014-01-01

    NASA Engineering requested a Probabilistic Risk Assessment (PRA) to compare the difference in the risk of Loss of Crew (LOC) and Loss of Mission (LOM) between different designs of a fluid assembly. They were concerned that the configuration favored by the design team was more susceptible to leakage than a second proposed design, but realized that a quantitative analysis to compare the risks between the two designs might strengthen their argument. The analysis showed that while the second design did help improve the probability of LOC, it did not help from a probability of LOM perspective. This drove the analysis team to propose a minor design change that would drive the probability of LOM down considerably. The analysis also demonstrated that there was another major risk driver that was not immediately obvious from a typical engineering study of the design and was therefore unexpected. None of the proposed alternatives were addressing this risk. This type of trade study demonstrates the importance of performing a PRA in order to completely understand a system's design. It allows managers to use risk as another one of the commodities (e.g., mass, cost, schedule, fault tolerance) that can be traded early in the design of a new system.

  18. An integrated tool to support engineers for WMSDs risk assessment during the assembly line balancing.

    PubMed

    Di Benedetto, Raffaele; Fanti, Michele

    2012-01-01

This paper presents an integrated approach to line balancing and risk assessment, and a software tool named ErgoAnalysis that makes it easy to control the whole production process and produces a risk index for the actual work tasks on an assembly line. Assembly line balancing, or simply line balancing, is the problem of assigning operations to workstations along an assembly line in such a way that the assignment is optimal in some sense. Assembly lines are characterized by production constraints and restrictions due to several aspects such as the nature of the product and the flow of orders. To be able to respond effectively to the needs of production, companies need to frequently change the workload and production models. Each manufacturing process might be quite different from another. To optimize very specific operations, assembly line balancing might utilize a number of methods, and the engineer must consider ergonomic constraints in order to reduce the risk of WMSDs. Risk assessment may become very expensive because the engineer must re-evaluate it at every change. ErgoAnalysis can reduce cost and improve effectiveness in risk assessment during line balancing.
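
    To make the coupling of line balancing and ergonomic risk concrete, the sketch below assigns tasks to stations with a simple next-fit heuristic subject to both a cycle-time limit and a per-station risk cap. This is a generic illustration, not the ErgoAnalysis algorithm; task names, times, and risk scores are invented.

```python
# Minimal sketch of assembly line balancing with an ergonomic risk cap:
# tasks (listed in precedence-feasible order) are assigned with a next-fit
# heuristic that respects both the cycle time and a per-station risk limit.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    time_s: float       # processing time in seconds
    risk_score: float   # ergonomic risk contribution (arbitrary units)

tasks = [
    Task("fit housing", 30, 2.0), Task("insert harness", 25, 3.5),
    Task("torque bolts", 40, 4.0), Task("apply sealant", 20, 1.5),
    Task("quality check", 15, 1.0), Task("pack unit", 35, 2.5),
]

cycle_time_s = 70.0     # takt-driven cycle time per station
risk_cap = 6.0          # maximum allowed ergonomic risk index per station

stations: list[list[Task]] = [[]]
for task in tasks:
    current = stations[-1]
    if (sum(t.time_s for t in current) + task.time_s <= cycle_time_s
            and sum(t.risk_score for t in current) + task.risk_score <= risk_cap):
        current.append(task)      # task fits the open station
    else:
        stations.append([task])   # open a new station

for i, station in enumerate(stations, start=1):
    load = sum(t.time_s for t in station)
    risk = sum(t.risk_score for t in station)
    print(f"station {i}: {[t.name for t in station]} "
          f"(load {load:.0f}s, risk index {risk:.1f})")
```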

  19. System safety in Stirling engine development

    NASA Technical Reports Server (NTRS)

    Bankaitis, H.

    1981-01-01

    The DOE/NASA Stirling Engine Project Office has required that contractors make safety considerations an integral part of all phases of the Stirling engine development program. As an integral part of each engine design subtask, analyses are evolved to determine possible modes of failure. The accepted system safety analysis techniques (Fault Tree, FMEA, Hazards Analysis, etc.) are applied in various degrees of extent at the system, subsystem and component levels. The primary objectives are to identify critical failure areas, to enable removal of susceptibility to such failures or their effects from the system and to minimize risk.

  20. Probabilistic/Fracture-Mechanics Model For Service Life

    NASA Technical Reports Server (NTRS)

    Watkins, T., Jr.; Annis, C. G., Jr.

    1991-01-01

    Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.
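
    A minimal sketch of the probabilistic fracture-mechanics idea behind such a program is shown below: Paris-law crack growth is integrated from an uncertain initial flaw size to a critical size for many Monte Carlo draws, yielding a life distribution and a failure risk at a candidate return-to-service interval. All material, loading, and geometry values are generic placeholders, not values from the cited program.

```python
# Minimal sketch of a probabilistic fracture-mechanics life estimate via
# Monte Carlo integration of Paris-law crack growth.
import numpy as np

rng = np.random.default_rng(2)
n_samples = 5_000

# Uncertain inputs (assumed distributions).
a0 = rng.lognormal(mean=np.log(0.2e-3), sigma=0.3, size=n_samples)  # initial crack size, m
C = rng.lognormal(mean=np.log(3e-12), sigma=0.2, size=n_samples)    # Paris coefficient, m/cycle per (MPa*sqrt(m))^m
m_exp = 3.0                                                          # Paris exponent
delta_sigma = rng.normal(200.0, 15.0, size=n_samples)                # stress range, MPa
a_crit = 5e-3                                                        # critical crack size, m
Y = 1.12                                                             # geometry factor

def cycles_to_failure(a0_i, C_i, ds_i, dn=2_000.0, n_cap=2e6):
    """Numerically integrate Paris law da/dN = C * (Y * ds * sqrt(pi * a))**m."""
    a, n = a0_i, 0.0
    while a < a_crit and n < n_cap:
        dK = Y * ds_i * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
        a += C_i * dK**m_exp * dn
        n += dn
    return n

life = np.array([cycles_to_failure(a0[i], C[i], delta_sigma[i]) for i in range(n_samples)])

inspection_interval = 200_000.0  # candidate return-to-service interval, cycles
risk = np.mean(life <= inspection_interval)
print(f"median life ~ {np.median(life):,.0f} cycles, "
      f"P(failure before {inspection_interval:,.0f} cycles) ~ {risk:.3%}")
```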

  1. Space Shuttle Main Engine Quantitative Risk Assessment: Illustrating Modeling of a Complex System with a New QRA Software Package

    NASA Technical Reports Server (NTRS)

    Smart, Christian

    1998-01-01

During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two-year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed by a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and the risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the occurrence of a failure mode to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).
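
    The role of an event sequence diagram in routing a failure-mode occurrence to its end states can be illustrated with the small sketch below; the initiator, detection, and shutdown probabilities are invented placeholders, not SSME quantifications.

```python
# Minimal sketch of evaluating an event sequence diagram (ESD) for a single
# failure mode: an initiating failure is followed by pivotal events (detection,
# successful shutdown), and the probability mass is routed to end states.

p_initiator = 1e-3          # probability the failure mode occurs during a mission
p_detected = 0.95           # probability the redline/health system detects it
p_shutdown_ok = 0.98        # probability shutdown completes benignly once commanded

p_success = 1.0 - p_initiator
p_benign_shutdown = p_initiator * p_detected * p_shutdown_ok
p_catastrophic = p_initiator * (
    (1.0 - p_detected)                       # undetected -> catastrophic
    + p_detected * (1.0 - p_shutdown_ok)     # detected but shutdown fails
)

assert abs(p_success + p_benign_shutdown + p_catastrophic - 1.0) < 1e-12
print(f"P(successful operation)   = {p_success:.6f}")
print(f"P(benign engine shutdown) = {p_benign_shutdown:.6f}")
print(f"P(catastrophic failure)   = {p_catastrophic:.2e}")
```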

  2. Predictors of obesity in Michigan Operating Engineers.

    PubMed

    Duffy, Sonia A; Cohen, Kathleen A; Choi, Seung Hee; McCullagh, Marjorie C; Noonan, Devon

    2012-06-01

    Blue collar workers are at risk for obesity. Little is known about obesity in Operating Engineers, a group of blue collar workers, who operate heavy earth-moving equipment in road building and construction. Therefore, 498 Operating Engineers in Michigan were recruited to participate in a cross-sectional survey to determine variables related to obesity in this group. Bivariate and multivariate analyses were conducted to determine personal, psychological, and behavioral factors predicting obesity. Approximately 45% of the Operating Engineers screened positive for obesity, and another 40% were overweight. Multivariate analysis revealed that younger age, male sex, higher numbers of self-reported co-morbidities, not smoking, and low physical activity levels were significantly associated with obesity among Operating Engineers. Operating Engineers are significantly at risk for obesity, and workplace interventions are needed to address this problem.

  3. A New Approach in Applying Systems Engineering Tools and Analysis to Determine Hepatocyte Toxicogenomics Risk Levels to Human Health.

    PubMed

    Gigrich, James; Sarkani, Shahryar; Holzer, Thomas

    2017-03-01

There is an increasing backlog of potentially toxic compounds that cannot be evaluated with current animal-based approaches in a cost-effective and expeditious manner, thus putting human health at risk. Extrapolation of animal-based test results for human risk assessment often leads to different physiological outcomes. This article introduces the use of quantitative tools and methods from systems engineering to evaluate the risk of toxic compounds by analyzing the amount of stress that human hepatocytes undergo in vitro when metabolizing GW7647 over extended times and concentrations. Hepatocytes are exceedingly connected systems, which makes it challenging to interpret the highly varied, high-dimensional genomics data needed to determine risk of exposure. Gene expression data for peroxisome proliferator-activated receptor-α (PPARα) binding were measured over multiple concentrations and varied times of GW7647 exposure, leveraging the Mahalanobis distance to establish toxicity threshold risk levels. The application of these novel systems engineering tools provides new insight into the intricate workings of human hepatocytes to determine risk threshold levels from exposure. This approach is beneficial to decision makers and scientists, and it can help reduce the backlog of untested chemical compounds due to the high cost and inefficiency of animal-based models.
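
    The thresholding idea, flagging expression profiles whose Mahalanobis distance from a control distribution exceeds a chi-square cutoff, can be sketched as below. The expression matrix is randomly generated for illustration and is not PPARα or GW7647 data, and the 99th-percentile cutoff is an assumed choice.

```python
# Minimal sketch of using the Mahalanobis distance to flag stressed expression
# profiles relative to a control-group distribution.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n_genes = 5

# Hypothetical control-group expression profiles (rows: samples, cols: genes).
control = rng.normal(loc=0.0, scale=1.0, size=(40, n_genes))
mu = control.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(control, rowvar=False))

def mahalanobis_sq(x: np.ndarray) -> float:
    """Squared Mahalanobis distance of a profile x from the control distribution."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Treated samples: a mix of unchanged and strongly perturbed (stressed) profiles.
treated = np.vstack([
    rng.normal(0.0, 1.0, size=(5, n_genes)),   # behave like controls
    rng.normal(2.5, 1.0, size=(5, n_genes)),   # strongly perturbed
])

threshold = chi2.ppf(0.99, df=n_genes)  # 99th percentile of chi-square(n_genes)
for i, x in enumerate(treated):
    d2 = mahalanobis_sq(x)
    flag = "exceeds threshold" if d2 > threshold else "within control range"
    print(f"sample {i}: D^2 = {d2:6.1f} -> {flag}")
```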

  4. Bridging the two cultures of risk analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasanoff, S.

    1993-04-01

During the past 15 years, risk analysis has come of age as an interdisciplinary field of remarkable breadth, nurturing connections among fields as diverse as mathematics, biostatistics, toxicology, and engineering on one hand, and law, psychology, sociology, and economics on the other hand. In this editorial, the author addresses the question: What has the presence of social scientists in the network meant to the substantive development of the field of risk analysis? The answers offered here discuss the substantial progress in bridging the two cultures of risk analysis. Emphasis is placed on the continual need for monitoring risk analysis. Topics include: the micro-worlds of risk assessment; constraining assumptions; and exchange programs. 14 refs.

  5. Model-Based Engineering for Supply Chain Risk Management

    DTIC Science & Technology

    2015-09-30

Privacy, 2009 [19] Julien Delange Wheel Brake System Example using AADL; Feiler, Peter; Hansson, Jörgen; de Niz, Dionisio; & Wrage, Lutz. System ...University Software Engineering Institute Abstract—Expanded use of commercial components has increased the complexity of system assurance...verification. Model-based engineering (MBE) offers a means to design, develop, analyze, and maintain a complex system architecture. Architecture Analysis

  6. Occupational and genetic risk factors associated with intervertebral disc disease.

    PubMed

    Virtanen, Iita M; Karppinen, Jaro; Taimela, Simo; Ott, Jürg; Barral, Sandra; Kaikkonen, Kaisu; Heikkilä, Olli; Mutanen, Pertti; Noponen, Noora; Männikkö, Minna; Tervonen, Osmo; Natri, Antero; Ala-Kokko, Leena

    2007-05-01

    Cross-sectional epidemiologic study. To evaluate the interaction between known genetic risk factors and whole-body vibration for symptomatic intervertebral disc disease (IDD) in an occupational sample. Risk factors of IDD include, among others, whole-body vibration and heredity. In this study, the importance of a set of known genetic risk factors and whole-body vibration was evaluated in an occupational sample of train engineers and sedentary controls. Eleven variations in 8 genes (COL9A2, COL9A3, COL11A2, IL1A, IL1B, IL6, MMP-3, and VDR) were genotyped in 150 male train engineers with an average of 21-year exposure to whole-body vibration and 61 male paper mill workers with no exposure to vibration. Subjects were classified into IDD-phenotype and asymptomatic groups, based on the latent class analysis. The number of individuals belonging to the IDD-phenotype was significantly higher among train engineers (42% of train engineers vs. 17.5% of sedentary workers; P = 0.005). IL1A -889T allele represented a significant risk factor for the IDD-phenotype both in the single marker allelic association test (P = 0.043) and in the logistic regression analysis (P = 0.01). None of the other allele markers was significantly associated with symptoms when analyzed independently. However, for all the SNP markers considered, whole-body vibration represents a nominally significant risk factor. The results suggest that whole-body vibration is a risk factor for symptomatic IDD. Moreover, whole-body vibration had an additive effect with genetic risk factors increasing the likelihood of belonging to the IDD-phenotype group. Of the independent genetic markers, IL1A -889T allele had strongest association with IDD-phenotype.

  7. Risk and Reliability of Infrastructure Asset Management Workshop

    DTIC Science & Technology

    2006-08-01

    of assets within the portfolio for use in Risk and Reliability analysis ... US Army Corps of Engineers assesses its Civil Works infrastructure and applies risk and reliability in the management of that infrastructure. The ... the Corps must complete assessments across its portfolio of major assets before risk management can be used in decision making. Effective risk

  8. Engineering Management Capstone Project EM 697: Compare and Contrast Risk Management Implementation at NASA and the US Army

    NASA Technical Reports Server (NTRS)

    Brothers, Mary Ann; Safie, Fayssal M. (Technical Monitor)

    2002-01-01

NASA at Marshall Space Flight Center (MSFC) and the U.S. Army at Redstone Arsenal were analyzed to determine whether they were successful in implementing their risk management programs. Risk management implementation surveys were distributed to aid in this analysis. The scope is limited to NASA S&MA (Safety and Mission Assurance) at MSFC, including applicable support contractors, and the US Army Engineering Directorate, including applicable contractors, located at Redstone Arsenal. NASA has moderately higher risk management implementation survey scores than the Army. Accordingly, the implementation of the risk management program at NASA is considered good, while only two of the five survey categories indicated that risk management implementation is good at the Army.

  9. The Need for Cyber-Informed Engineering Expertise for Nuclear Research Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert Stephen

Engineering disciplines may not currently understand or fully embrace cyber security aspects as they apply towards analysis, design, operation, and maintenance of nuclear research reactors. Research reactors include a wide range of diverse co-located facilities and designs necessary to meet specific operational research objectives. Because of the nature of research reactors (reduced thermal energy and fission product inventory), hazards and risks may not have received the same scrutiny as normally associated with power reactors. Similarly, security may not have been emphasized either. However, the lack of sound cybersecurity defenses may lead to both safety and security impacts. Risk management methodologies may not contain the foundational assumptions required to address the intelligent adversary’s capabilities in malevolent cyber attacks. Although most research reactors are old and may not have the same digital footprint as newer facilities, any digital instrument and control function must be considered as a potential attack platform that can lead to sabotage or theft of nuclear material, especially for some research reactors that store highly enriched uranium. This paper will provide a discussion about the need for cyber-informed engineering practices that include the entire engineering lifecycle. Cyber-informed engineering as referenced in this paper is the inclusion of cybersecurity aspects into the engineering process. A discussion will consider several attributes of this process evaluating the long-term goal of developing additional cyber safety basis analysis and trust principles. With a culture of free information sharing exchanges, and potentially a lack of security expertise, new risk analysis and design methodologies need to be developed to address this rapidly evolving (cyber) threatscape.

  10. Probabilistic risk assessment of the Space Shuttle. Phase 3: A study of the potential of losing the vehicle during nominal operation. Volume 5: Auxiliary shuttle risk analyses

    NASA Technical Reports Server (NTRS)

    Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.

    1995-01-01

    Volume 5 is Appendix C, Auxiliary Shuttle Risk Analyses, and contains the following reports: Probabilistic Risk Assessment of Space Shuttle Phase 1 - Space Shuttle Catastrophic Failure Frequency Final Report; Risk Analysis Applied to the Space Shuttle Main Engine - Demonstration Project for the Main Combustion Chamber Risk Assessment; An Investigation of the Risk Implications of Space Shuttle Solid Rocket Booster Chamber Pressure Excursions; Safety of the Thermal Protection System of the Space Shuttle Orbiter - Quantitative Analysis and Organizational Factors; Space Shuttle Main Propulsion Pressurization System Probabilistic Risk Assessment, Final Report; and Space Shuttle Probabilistic Risk Assessment Proof-of-Concept Study - Auxiliary Power Unit and Hydraulic Power Unit Analysis Report.

  11. Aerospace Systems Design in NASA's Collaborative Engineering Environment

    NASA Technical Reports Server (NTRS)

    Monell, Donald W.; Piland, William M.

    2000-01-01

Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operation). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to an inability to assess critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.

  12. Orbital Transfer Vehicle (OTV) engine study. Phase A: Extension

    NASA Technical Reports Server (NTRS)

    Sobin, A. J.

    1980-01-01

    The current Phase A-Extension of the OTV engine study program aims to provide additional expander and staged combustion cycle data that will lead to design definition of the OTV engine. The proposed program effort seeks to optimize the expander cycle engine concept (consistent with identified OTV engine requirements), investigate the feasibility of kitting the staged combustion cycle engine to provide extended thrust operation, and conduct in-depth analysis of development risk, crew safety, and reliability for both cycles. Additional tasks address the costing of a 10/K thrust expander cycle engine and support of OTV systems study contractors.

  13. Scientists versus Regulators: Precaution, Novelty & Regulatory Oversight as Predictors of Perceived Risks of Engineered Nanomaterials

    PubMed Central

    Beaudrie, Christian E. H.; Satterfield, Terre; Kandlikar, Milind; Harthorn, Barbara H.

    2014-01-01

    Engineered nanoscale materials (ENMs) present a difficult challenge for risk assessors and regulators. Continuing uncertainty about the potential risks of ENMs means that expert opinion will play an important role in the design of policies to minimize harmful implications while supporting innovation. This research aims to shed light on the views of ‘nano experts’ to understand which nanomaterials or applications are regarded as more risky than others, to characterize the differences in risk perceptions between expert groups, and to evaluate the factors that drive these perceptions. Our analysis draws from a web-survey (N = 404) of three groups of US and Canadian experts: nano-scientists and engineers, nano-environmental health and safety scientists, and regulatory scientists and decision-makers. Significant differences in risk perceptions were found across expert groups; differences found to be driven by underlying attitudes and perceptions characteristic of each group. Nano-scientists and engineers at the upstream end of the nanomaterial life cycle perceived the lowest levels of risk, while those who are responsible for assessing and regulating risks at the downstream end perceived the greatest risk. Perceived novelty of nanomaterial risks, differing preferences for regulation (i.e. the use of precaution versus voluntary or market-based approaches), and perceptions of the risk of technologies in general predicted variation in experts' judgments of nanotechnology risks. Our findings underscore the importance of involving a diverse selection of experts, particularly those with expertise at different stages along the nanomaterial lifecycle, during policy development. PMID:25222742

  14. Scientists versus regulators: precaution, novelty & regulatory oversight as predictors of perceived risks of engineered nanomaterials.

    PubMed

    Beaudrie, Christian E H; Satterfield, Terre; Kandlikar, Milind; Harthorn, Barbara H

    2014-01-01

    Engineered nanoscale materials (ENMs) present a difficult challenge for risk assessors and regulators. Continuing uncertainty about the potential risks of ENMs means that expert opinion will play an important role in the design of policies to minimize harmful implications while supporting innovation. This research aims to shed light on the views of 'nano experts' to understand which nanomaterials or applications are regarded as more risky than others, to characterize the differences in risk perceptions between expert groups, and to evaluate the factors that drive these perceptions. Our analysis draws from a web-survey (N = 404) of three groups of US and Canadian experts: nano-scientists and engineers, nano-environmental health and safety scientists, and regulatory scientists and decision-makers. Significant differences in risk perceptions were found across expert groups; differences found to be driven by underlying attitudes and perceptions characteristic of each group. Nano-scientists and engineers at the upstream end of the nanomaterial life cycle perceived the lowest levels of risk, while those who are responsible for assessing and regulating risks at the downstream end perceived the greatest risk. Perceived novelty of nanomaterial risks, differing preferences for regulation (i.e. the use of precaution versus voluntary or market-based approaches), and perceptions of the risk of technologies in general predicted variation in experts' judgments of nanotechnology risks. Our findings underscore the importance of involving a diverse selection of experts, particularly those with expertise at different stages along the nanomaterial lifecycle, during policy development.

  15. Virtues and Limitations of Risk Analysis

    ERIC Educational Resources Information Center

    Weatherwax, Robert K.

    1975-01-01

    After summarizing the Rasmussen Report, the author reviews the probabilistic portion of the report from the perspectives of engineering utility and risk assessment uncertainty. The author shows that the report may represent a significant step forward in the assurance of reactor safety but an imperfect measure of actual reactor risk. (BT)

  16. Seismic hazard assessment: Issues and alternatives

    USGS Publications Warehouse

    Wang, Z.

    2011-01-01

    Seismic hazard and risk are two very important concepts in engineering design and other policy considerations. Although seismic hazard and risk have often been used interchangeably, they are fundamentally different. Furthermore, seismic risk is more important in engineering design and other policy considerations. Seismic hazard assessment is an effort by earth scientists to quantify seismic hazard and its associated uncertainty in time and space and to provide seismic hazard estimates for seismic risk assessment and other applications. Although seismic hazard assessment is more a scientific issue, it deserves special attention because of its significant implications for society. Two approaches, probabilistic seismic hazard analysis (PSHA) and deterministic seismic hazard analysis (DSHA), are commonly used for seismic hazard assessment. Although PSHA has been proclaimed as the best approach for seismic hazard assessment, it is scientifically flawed (i.e., the physics and mathematics that PSHA is based on are not valid). Use of PSHA could lead to either unsafe or overly conservative engineering design or public policy, each of which has dire consequences to society. On the other hand, DSHA is a viable approach for seismic hazard assessment even though it has been labeled as unreliable. The biggest drawback of DSHA is that the temporal characteristics (i.e., earthquake frequency of occurrence and the associated uncertainty) are often neglected. An alternative, seismic hazard analysis (SHA), utilizes earthquake science and statistics directly and provides a seismic hazard estimate that can be readily used for seismic risk assessment and other applications. © 2010 Springer Basel AG.
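
    As an illustration of the probabilistic approach contrasted above, the short sketch below (Python) combines assumed source activity rates with a lognormal ground-motion exceedance probability to produce an annual exceedance rate; all rates, medians, and the threshold are hypothetical, and a deterministic analysis would instead work from a controlling scenario directly.

      # Minimal probabilistic hazard sketch (source rates, median ground motions,
      # and the lognormal sigma below are hypothetical).
      import math

      # Each source: (annual rate of events, median PGA in g at the site, lognormal sigma)
      sources = [(0.05, 0.20, 0.6), (0.01, 0.35, 0.6), (0.002, 0.55, 0.6)]

      def p_exceed(threshold_g, median_g, sigma_ln):
          """P(PGA > threshold | an event on this source), lognormal ground-motion model."""
          z = (math.log(threshold_g) - math.log(median_g)) / sigma_ln
          return 0.5 * math.erfc(z / math.sqrt(2.0))

      threshold = 0.3  # g
      annual_rate = sum(nu * p_exceed(threshold, med, sig) for nu, med, sig in sources)
      print(f"annual rate of PGA > {threshold} g: {annual_rate:.4f}")
      print(f"50-year exceedance probability: {1 - math.exp(-annual_rate * 50):.3f}")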

  17. ASIL determination for motorbike's Electronics Throttle Control System (ETCS) malfunction

    NASA Astrophysics Data System (ADS)

    Zaman Rokhani, Fakhrul; Rahman, Muhammad Taqiuddin Abdul; Ain Kamsani, Noor; Sidek, Roslina Mohd; Saripan, M. Iqbal; Samsudin, Khairulmizam; Khair Hassan, Mohd

    2017-11-01

    The Electronics Throttle Control System (ETCS) is the principal electronic unit in all fuel-injection engine motorbikes, improving engine performance and efficiency in comparison to the conventional carburetor-based engine. ETCS is regarded as a safety-critical component, whereby an ETCS malfunction can cause an unintended acceleration or deceleration event, which can be hazardous to riders. In this study, Hazard Analysis and Risk Assessment, an ISO 26262 functional safety standard analysis, has been applied to the motorbike's ETCS to determine the required automotive safety integrity level (ASIL). Based on the analysis, the established automotive safety integrity level can help to derive technical and functional safety measures for ETCS development.
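
    The severity/exposure/controllability mapping that such an analysis rests on can be sketched in a few lines (Python). The table is the published ISO 26262-3 ASIL determination matrix, reproduced here through the commonly used additive shortcut; the example classification of an unintended-acceleration hazard is our own assumption, not taken from the paper.

      # ASIL determination per the ISO 26262-3 matrix, via the additive shortcut:
      # index = S + E + C; index <= 6 -> QM, 7 -> ASIL A, 8 -> B, 9 -> C, 10 -> D.
      # S (severity) in 1..3, E (exposure) in 1..4, C (controllability) in 1..3.
      def asil(severity: int, exposure: int, controllability: int) -> str:
          if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
              raise ValueError("S in 1..3, E in 1..4, C in 1..3")
          index = severity + exposure + controllability
          return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(index, "QM")

      # Hypothetical classification of an unintended-acceleration hazard:
      # S3 (life-threatening), E4 (high exposure), C2 (normally controllable).
      print(asil(3, 4, 2))  # -> ASIL C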

  18. Reliability and Probabilistic Risk Assessment - How They Play Together

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal; Stutts, Richard; Huang, Zhaofeng

    2015-01-01

    Since the Space Shuttle Challenger accident in 1986, NASA has extensively used probabilistic analysis methods to assess, understand, and communicate the risk of space launch vehicles. Probabilistic Risk Assessment (PRA), used in the nuclear industry, is one of the probabilistic analysis methods NASA utilizes to assess Loss of Mission (LOM) and Loss of Crew (LOC) risk for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability distributions to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: 1) what can go wrong that would lead to loss or degraded performance (i.e., scenarios involving undesired consequences of interest), 2) how likely is it (probabilities), and 3) what is the severity of the degradation (consequences). Since the Challenger accident, PRA has been used in supporting decisions regarding safety upgrades for launch vehicles. Another area that was given a lot of emphasis at NASA after the Challenger accident is reliability engineering. Reliability engineering has been a critical design function at NASA since the early Apollo days. However, after the Challenger accident, quantitative reliability analysis and reliability predictions were given more scrutiny because of their importance in understanding failure mechanisms and quantifying the probability of failure, which are key elements in resolving technical issues, performing design trades, and implementing design improvements. Although PRA and reliability are both probabilistic in nature and, in some cases, use the same tools, they are two different activities. Specifically, reliability engineering is a broad design discipline that deals with loss of function and helps in understanding failure mechanisms and improving component and system design. PRA is a system scenario based risk assessment process intended to assess the risk scenarios that could lead to a major/top undesirable system event, and to identify those scenarios that are high-risk drivers. PRA output is critical to support risk informed decisions concerning system design. This paper describes the PRA process and the reliability engineering discipline in detail. It discusses their differences and similarities and how they work together as complementary analyses to support the design and risk assessment processes. Lessons learned, applications, and case studies in both areas are also discussed in the paper to demonstrate and explain these differences and similarities.
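
    To make the scenario-based logic concrete, the fragment below (Python, with made-up frequencies and split fractions) quantifies a two-branch event tree, propagating an initiating-event frequency through pivotal events to end states such as LOM and LOC; it is a minimal illustration, not any of the NASA models referenced above.

      # Minimal event-tree quantification sketch (all numbers hypothetical).
      initiating_event_per_flight = 1.0e-3   # e.g., a significant engine anomaly during ascent
      p_shutdown_fails = 0.10                # pivotal event 1: safe engine shutdown fails
      p_abort_fails = 0.05                   # pivotal event 2: abort fails, given failed shutdown

      f_degraded = initiating_event_per_flight * (1 - p_shutdown_fails)                # degraded mission
      f_lom = initiating_event_per_flight * p_shutdown_fails * (1 - p_abort_fails)     # Loss of Mission
      f_loc = initiating_event_per_flight * p_shutdown_fails * p_abort_fails           # Loss of Crew

      print(f"Loss of Mission frequency per flight: {f_lom:.2e}")
      print(f"Loss of Crew frequency per flight:    {f_loc:.2e}")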

  19. David Snowberg | NREL

    Science.gov Websites

    the management of blade test projects. He also works with the NREL Marine Hydrokinetic group in the areas of risk management and failure analysis. David is a registered Professional Engineer in Arizona within the discipline of mechanical engineering; he is also a certified Project Management Professional.

  20. Market-implied spread for earthquake CAT bonds: financial implications of engineering decisions.

    PubMed

    Damnjanovic, Ivan; Aslan, Zafer; Mander, John

    2010-12-01

    In the event of natural and man-made disasters, owners of large-scale infrastructure facilities (assets) need contingency plans to effectively restore the operations within the acceptable timescales. Traditionally, the insurance sector provides the coverage against potential losses. However, there are many problems associated with this traditional approach to risk transfer including counterparty risk and litigation. Recently, a number of innovative risk mitigation methods, termed alternative risk transfer (ART) methods, have been introduced to address these problems. One of the most important ART methods is catastrophe (CAT) bonds. The objective of this article is to develop an integrative model that links engineering design parameters with financial indicators including spread and bond rating. The developed framework is based on a four-step structural loss model and transformed survival model to determine expected excess returns. We illustrate the framework for a seismically designed bridge using two unique CAT bond contracts. The results show a nonlinear relationship between engineering design parameters and market-implied spread. © 2010 Society for Risk Analysis.
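
    As a reminder of how loss probabilities turn into a price, the generic calculation below (Python) converts assumed annual loss scenarios for a bond layer into an expected loss and, given an assumed premium multiple, an indicative spread. It is deliberately generic and is not the article's four-step structural loss model or transformed survival model.

      # Generic CAT-bond expected-loss sketch (scenario probabilities, loss fractions,
      # and the premium multiple are assumptions).
      scenarios = [      # (annual probability, fraction of bond principal lost)
          (0.010, 0.0),
          (0.006, 0.2),
          (0.003, 0.6),
          (0.001, 1.0),
      ]
      expected_loss = sum(p * loss for p, loss in scenarios)   # per unit principal, per year
      premium_multiple = 2.5                                   # assumed market risk load
      indicative_spread = premium_multiple * expected_loss
      print(f"expected loss: {expected_loss:.2%}, indicative spread: {indicative_spread:.2%}")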

  1. Risk Assessment on Constructors during Over-water Riprap Based on Entropy Weight and FAHP

    NASA Astrophysics Data System (ADS)

    Wu, Tongqing; Li, Liang; Liang, Zelong; Mao, Tian; Shao, Weifeng

    2017-07-01

    In waterway regulation engineering, over-water riprap placement exposes constructors to risks that are both uncertain and complex. To evaluate their likelihood and consequences, this paper uses the fuzzy analytic hierarchy process (FAHP) to weight the related risk indicators, constructs an entropy-weighted FAHP model, and establishes the corresponding evaluation factor set and evaluation language for constructors during the over-water riprap construction process. By estimating risk probability and evaluating the magnitude of risk consequences for the constructor factor, the model is applied to the risk analysis of constructors during over-water riprap placement in the Ching River waterway regulation project. The results show that the evaluations produced by this method are credible and can be used in practical engineering.
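
    For readers unfamiliar with the entropy-weighting step, the sketch below (Python) derives objective indicator weights from the information entropy of a normalized evaluation matrix; the matrix itself is invented for illustration, and in the paper's model these weights are combined with the FAHP judgments.

      # Entropy weight method sketch (the evaluation matrix is hypothetical).
      import math

      # Rows: evaluated objects (e.g., construction scenarios); columns: risk indicators.
      X = [
          [0.7, 0.4, 0.9],
          [0.5, 0.8, 0.6],
          [0.9, 0.6, 0.3],
      ]
      m, n = len(X), len(X[0])

      # 1. Normalize each column so its entries sum to one.
      col_sums = [sum(X[i][j] for i in range(m)) for j in range(n)]
      P = [[X[i][j] / col_sums[j] for j in range(n)] for i in range(m)]

      # 2. Information entropy of each indicator, with k = 1/ln(m).
      k = 1.0 / math.log(m)
      e = [-k * sum(row[j] * math.log(row[j]) for row in P if row[j] > 0) for j in range(n)]

      # 3. Weights from the degree of divergence (1 - e), normalized to sum to one.
      d = [1.0 - ej for ej in e]
      w = [dj / sum(d) for dj in d]
      print("entropy weights:", [round(wj, 3) for wj in w])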

  2. Aerospace Systems Design in NASA's Collaborative Engineering Environment

    NASA Technical Reports Server (NTRS)

    Monell, Donald W.; Piland, William M.

    1999-01-01

    Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operations). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability of assessing critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.

  3. Aerospace Systems Design in NASA's Collaborative Engineering Environment

    NASA Astrophysics Data System (ADS)

    Monell, Donald W.; Piland, William M.

    2000-07-01

    Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operations). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability of assessing critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.

  4. Novel Threat-risk Index Using Probabilistic Risk Assessment and Human Reliability Analysis - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George A. Beitel

    2004-02-01

    In support of a national need to improve the current state-of-the-art in alerting decision makers to the risk of terrorist attack, a quantitative approach employing scientific and engineering concepts to develop a threat-risk index was undertaken at the Idaho National Engineering and Environmental Laboratory (INEEL). As a result of this effort, a set of models has been successfully integrated into a single comprehensive model known as Quantitative Threat-Risk Index Model (QTRIM), with the capability of computing a quantitative threat-risk index on a system level, as well as for the major components of the system. Such a threat-risk index could provide a quantitative variant or basis for either prioritizing security upgrades or updating the current qualitative national color-coded terrorist threat alert.

  5. Reliability and Probabilistic Risk Assessment - How They Play Together

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Stutts, Richard G.; Zhaofeng, Huang

    2015-01-01

    PRA methodology is one of the probabilistic analysis methods that NASA brought from the nuclear industry to assess the risk of LOM, LOV and LOC for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability and statistical data to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: What can go wrong? How likely is it? What is the severity of the degradation? Since 1986, NASA, along with industry partners, has conducted a number of PRA studies to predict the overall launch vehicles risks. Planning Research Corporation conducted the first of these studies in 1988. In 1995, Science Applications International Corporation (SAIC) conducted a comprehensive PRA study. In July 1996, NASA conducted a two-year study (October 1996 - September 1998) to develop a model that provided the overall Space Shuttle risk and estimates of risk changes due to proposed Space Shuttle upgrades. After the Columbia accident, NASA conducted a PRA on the Shuttle External Tank (ET) foam. This study was the most focused and extensive risk assessment that NASA has conducted in recent years. It used a dynamic, physics-based, integrated system analysis approach to understand the integrated system risk due to ET foam loss in flight. Most recently, a PRA for Ares I launch vehicle has been performed in support of the Constellation program. Reliability, on the other hand, addresses the loss of functions. In a broader sense, reliability engineering is a discipline that involves the application of engineering principles to the design and processing of products, both hardware and software, for meeting product reliability requirements or goals. It is a very broad design-support discipline. It has important interfaces with many other engineering disciplines. Reliability as a figure of merit (i.e. the metric) is the probability that an item will perform its intended function(s) for a specified mission profile. In general, the reliability metric can be calculated through the analyses using reliability demonstration and reliability prediction methodologies. Reliability analysis is very critical for understanding component failure mechanisms and in identifying reliability critical design and process drivers. The following sections discuss the PRA process and reliability engineering in detail and provide an application where reliability analysis and PRA were jointly used in a complementary manner to support a Space Shuttle flight risk assessment.
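
    As a small illustration of reliability as a figure of merit, the sketch below (Python) evaluates mission reliability under an assumed constant-failure-rate (exponential) model for a single item, a series pair, and a one-out-of-two redundant pair; the failure rates and mission time are invented, and the example is textbook arithmetic rather than any of the Shuttle analyses discussed above.

      # Reliability-metric sketch under an exponential (constant failure rate) model.
      import math

      def reliability(failure_rate_per_hr: float, mission_hours: float) -> float:
          """R(t) = exp(-lambda * t) for a constant-failure-rate item."""
          return math.exp(-failure_rate_per_hr * mission_hours)

      t = 10.0                               # assumed mission duration, hours
      r_pump = reliability(1.0e-4, t)        # hypothetical component failure rates
      r_valve = reliability(5.0e-5, t)

      r_series = r_pump * r_valve                  # both must work
      r_redundant = 1.0 - (1.0 - r_pump) ** 2      # two pumps, one-out-of-two success criterion
      print(f"series pair: {r_series:.6f}, redundant pumps: {r_redundant:.8f}")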

  6. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
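
    A compact way to see the probabilistic side of such an analysis is the small fault-tree sketch below (Python), which propagates basic-event probabilities through OR/AND gates to a top event; the low/high bounds carried alongside the point values are only a crude interval stand-in for the fuzzy-number arithmetic of FUZZYFTA, and the gate structure and numbers are invented.

      # Fault-tree sketch: TOP = OR(turbopump fails, AND(valve A fails, valve B fails)).
      # Basic-event probabilities and their bounds are hypothetical.
      def or_gate(*p):          # independent basic events
          q = 1.0
          for pi in p:
              q *= (1.0 - pi)
          return 1.0 - q

      def and_gate(*p):
          q = 1.0
          for pi in p:
              q *= pi
          return q

      def top_event(p_pump, p_valve_a, p_valve_b):
          return or_gate(p_pump, and_gate(p_valve_a, p_valve_b))

      point = top_event(1e-4, 1e-2, 1e-2)
      low = top_event(5e-5, 5e-3, 5e-3)     # optimistic bounds on the basic events
      high = top_event(2e-4, 2e-2, 2e-2)    # pessimistic bounds
      print(f"top-event probability: {point:.2e} (interval {low:.2e} .. {high:.2e})")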

  7. Practical Techniques for Modeling Gas Turbine Engine Performance

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
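
    In the same spirit as the paper's educational aim, the sketch below (Python) runs through a simple shaft-power Brayton cycle with assumed design-point values (pressure ratio, turbine inlet temperature, component efficiencies); the numbers are not taken from the paper's example turbojet, and a real engine model would add component maps, bleed, and a nozzle.

      # Simple shaft-power Brayton cycle sketch (all design-point inputs are assumed).
      gamma, cp = 1.4, 1005.0          # air properties, cp in J/(kg K)
      T1 = 288.15                      # compressor inlet temperature, K
      pr = 12.0                        # overall pressure ratio
      T3 = 1500.0                      # turbine inlet temperature, K
      eta_c, eta_t = 0.85, 0.90        # adiabatic component efficiencies

      T2s = T1 * pr ** ((gamma - 1.0) / gamma)      # isentropic compressor exit temperature
      T2 = T1 + (T2s - T1) / eta_c                  # actual exit temperature via efficiency
      T4s = T3 / pr ** ((gamma - 1.0) / gamma)      # isentropic turbine exit temperature
      T4 = T3 - eta_t * (T3 - T4s)

      w_comp, w_turb = cp * (T2 - T1), cp * (T3 - T4)   # specific works, J/kg
      q_in = cp * (T3 - T2)                             # specific heat addition, J/kg
      eta_th = (w_turb - w_comp) / q_in
      print(f"T2 = {T2:.0f} K, net specific work = {(w_turb - w_comp) / 1e3:.0f} kJ/kg, "
            f"thermal efficiency = {eta_th:.3f}")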

  8. Automotive Stirling reference engine design report

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The reference Stirling engine system described here provides the best possible fuel economy while meeting or exceeding all other program objectives. The system was designed to meet the requirements of a 1984 Pontiac Phoenix (X-body). This design utilizes all new technology that can reasonably be expected to be developed by 1984 and that is judged to provide significant improvement, relative to development risk and cost. Topics covered include: (1) external heat system; (2) hot engine system; (3) cold engine system; (4) engine drive system; (5) power control system and auxiliaries; (6) engine installation; (7) optimization and vehicle simulation; (8) engine materials; and (9) production cost analysis.

  9. An Extreme-Value Approach to Anomaly Vulnerability Identification

    NASA Technical Reports Server (NTRS)

    Everett, Chris; Maggio, Gaspare; Groen, Frank

    2010-01-01

    The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
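
    The parameter sweep at the heart of the measure can be sketched as follows (Python); the placeholder risk model, the credible range, and the flagging threshold are all hypothetical and serve only to show the mechanics of scanning a parameter for its maximum conditional risk.

      # Parameter Vulnerability Importance sketch (risk model and ranges are hypothetical).
      def conditional_risk(leak_rate):
          """Placeholder anomaly risk model: risk of a severe outcome given a leak rate."""
          return min(1.0, 1e-4 + 0.02 * leak_rate ** 2)

      lo, hi, steps = 0.0, 5.0, 101          # credible range of the uncertain parameter
      grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
      risks = [(x, conditional_risk(x)) for x in grid]
      worst_x, worst_risk = max(risks, key=lambda xr: xr[1])

      threshold = 0.1                        # risk level taken to indicate a potential vulnerability
      flagged = [x for x, r in risks if r >= threshold]
      print(f"maximum risk {worst_risk:.3f} occurs at parameter value {worst_x:.2f}")
      if flagged:
          print(f"values flagged for further investigation: {flagged[0]:.2f} .. {flagged[-1]:.2f}")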

  10. Modeling approaches for characterizing and evaluating environmental exposure to engineered nanomaterials in support of risk-based decision making.

    PubMed

    Hendren, Christine Ogilvie; Lowry, Michael; Grieger, Khara D; Money, Eric S; Johnston, John M; Wiesner, Mark R; Beaulieu, Stephen M

    2013-02-05

    As the use of engineered nanomaterials becomes more prevalent, the likelihood of unintended exposure to these materials also increases. Given the current scarcity of experimental data regarding fate, transport, and bioavailability, determining potential environmental exposure to these materials requires an in depth analysis of modeling techniques that can be used in both the near- and long-term. Here, we provide a critical review of traditional and emerging exposure modeling approaches to highlight the challenges that scientists and decision-makers face when developing environmental exposure and risk assessments for nanomaterials. We find that accounting for nanospecific properties, overcoming data gaps, realizing model limitations, and handling uncertainty are key to developing informative and reliable environmental exposure and risk assessments for engineered nanomaterials. We find methods suited to recognizing and addressing significant uncertainty to be most appropriate for near-term environmental exposure modeling, given the current state of information and the current insufficiency of established deterministic models to address environmental exposure to engineered nanomaterials.

  11. Microeconomic analysis of military aircraft bearing restoration

    NASA Technical Reports Server (NTRS)

    Hein, G. F.

    1976-01-01

    The risk and cost of a bearing restoration by grinding program was analyzed. A microeconomic impact analysis was performed. The annual cost savings to U.S. Army aviation is approximately $950,000.00 for three engines and three transmissions. The capital value over an indefinite life is approximately ten million dollars. The annual cost savings for U.S. Air Force engines are approximately $313,000.00 with a capital value of approximately 3.1 million dollars.
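
    The reported capital values are consistent with treating each annual saving as a perpetuity (capital value = annual savings / discount rate); the check below (Python) backs out an implied discount rate of roughly 10%, which is our inference rather than a figure stated in the report.

      # Perpetuity check: implied discount rate = annual savings / reported capital value.
      programs = {
          "Army":      (950_000, 10_000_000),   # (annual savings $, capital value $)
          "Air Force": (313_000, 3_100_000),
      }
      for name, (annual, capital) in programs.items():
          print(f"{name}: implied discount rate = {annual / capital:.1%}")
      # Army: ~9.5%, Air Force: ~10.1% -- both close to a 10% rate.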

  12. Prediction of 10-year coronary heart disease risk in Caribbean type 2 diabetic patients using the UKPDS risk engine.

    PubMed

    Ezenwaka, C E; Nwagbara, E; Seales, D; Okali, F; Hussaini, S; Raja, Bn; Jones-LeCointe, A; Sell, H; Avci, H; Eckel, J

    2009-03-06

    Primary prevention of Coronary Heart Disease (CHD) in diabetic patients should be based on absolute CHD risk calculation. This study aimed to determine the levels of 10-year CHD risk in Caribbean type 2 diabetic patients using the diabetes-specific United Kingdom Prospective Diabetes Study (UKPDS) risk engine calculator. Three hundred and twenty-five (106 males, 219 females) type 2 diabetic patients resident in the two Caribbean islands of Tobago and Trinidad met the UKPDS risk engine inclusion criteria. Records of their sex, age, ethnicity, smoking habit, diabetes duration, systolic blood pressure, total cholesterol, HDL-cholesterol and glycated haemoglobin were entered into the UKPDS risk engine calculator programme and the absolute 10-year CHD and stroke risk levels were computed. The 10-year CHD and stroke risks were statistically stratified into <15%, 15-30% and >30% CHD risk levels and differences between patients of African and Asian-Indian origin were compared. In comparison with patients in Tobago, type 2 diabetic patients in Trinidad, irrespective of gender, had a higher proportion of 10-year CHD risk (10.4 vs. 23.6%, P<0.001) whereas the overall 10-year stroke risk prediction was higher in patients resident in Tobago (16.9 vs. 11.4%, P<0.001). Ethnicity-based analysis revealed that, irrespective of gender, a higher proportion of patients of Indian origin scored >30% of absolute 10-year CHD risk compared with patients of African descent (3.2 vs. 28.2%, P<0.001). The results of the study identified diabetic patients resident in Trinidad and patients of Indian origin as the most vulnerable groups for CHD. These groups of diabetic patients should have priority in primary or secondary prevention of coronary heart disease.
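
    The band-stratification step of the analysis is easy to sketch (Python); the risk values below are invented stand-ins for UKPDS risk engine outputs, since the risk engine equations themselves are not reproduced here.

      # Stratify computed 10-year CHD risks into the bands used in the study.
      # The percentages below are hypothetical outputs of the UKPDS risk engine.
      risks_pct = [8.2, 14.9, 17.5, 22.0, 31.4, 45.0, 12.3, 29.9]

      def band(risk_pct):
          if risk_pct < 15.0:
              return "<15%"
          return "15-30%" if risk_pct <= 30.0 else ">30%"

      counts = {}
      for r in risks_pct:
          counts[band(r)] = counts.get(band(r), 0) + 1
      total = len(risks_pct)
      for b in ("<15%", "15-30%", ">30%"):
          print(f"{b}: {counts.get(b, 0) / total:.0%} of patients")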

  13. Commercialising genetically engineered animal biomedical products.

    PubMed

    Sullivan, Eddie J; Pommer, Jerry; Robl, James M

    2008-01-01

    Research over the past two decades has increased the quality and quantity of tools available to produce genetically engineered animals. The number of potentially viable biomedical products from genetically engineered animals is increasing. However, moving from cutting-edge research to development and commercialisation of a biomedical product that is useful and wanted by the public has significant challenges. Even early stage development of genetically engineered animal applications requires consideration of many steps, including quality assurance and quality control, risk management, gap analysis, founder animal establishment, cell banking, sourcing of animals and animal-derived material, animal facilities, product collection facilities and processing facilities. These steps are complicated and expensive. Biomedical applications of genetically engineered animals have had some recent successes and many applications are well into development. As researchers consider applications for their findings, having a realistic understanding of the steps involved in the development and commercialisation of a product, produced in genetically engineered animals, is useful in determining the risk of genetic modification to the animal versus the potential public benefit of the application.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.; McCorkle, D.; Yang, C.

    Process modeling and simulation tools are widely used for the design and operation of advanced power generation systems. These tools enable engineers to solve the critical process systems engineering problems that arise throughout the lifecycle of a power plant, such as designing a new process, troubleshooting a process unit or optimizing operations of the full process. To analyze the impact of complex thermal and fluid flow phenomena on overall power plant performance, the Department of Energy’s (DOE) National Energy Technology Laboratory (NETL) has developed the Advanced Process Engineering Co-Simulator (APECS). The APECS system is an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulations such as those based on computational fluid dynamics (CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper we discuss the initial phases of the integration of the APECS system with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite uses the ActiveX (OLE Automation) controls in the Aspen Plus process simulator wrapped by the CASI library developed by Reaction Engineering International to run process/CFD co-simulations and query for results. This integration represents a necessary step in the development of virtual power plant co-simulations that will ultimately reduce the time, cost, and technical risk of developing advanced power generation systems.

  15. A hierarchical-multiobjective framework for risk management

    NASA Technical Reports Server (NTRS)

    Haimes, Yacov Y.; Li, Duan

    1991-01-01

    A broad hierarchical-multiobjective framework is established and utilized to methodologically address the management of risk. United into the framework are the hierarchical character of decision-making, the multiple decision-makers at separate levels within the hierarchy, the multiobjective character of large-scale systems, the quantitative/empirical aspects, and the qualitative/normative/judgmental aspects. The methodological components essentially consist of hierarchical-multiobjective coordination, risk of extreme events, and impact analysis. Examples of applications of the framework are presented. It is concluded that complex and interrelated forces require an analysis of trade-offs between engineering analysis and societal preferences, as in the hierarchical-multiobjective framework, to successfully address inherent risk.

  16. Space Transportation System Liftoff Debris Mitigation Process Overview

    NASA Technical Reports Server (NTRS)

    Mitchell, Michael; Riley, Christopher

    2011-01-01

    Liftoff debris is a top risk to the Space Shuttle Vehicle. To manage the Liftoff debris risk, the Space Shuttle Program created a team within the Propulsion Systems Engineering & Integration Office. The Shuttle Liftoff Debris Team harnesses the Systems Engineering process to identify, assess, mitigate, and communicate the Liftoff debris risk. The Liftoff Debris Team leverages off the technical knowledge and expertise of engineering groups across multiple NASA centers to integrate total system solutions. These solutions connect the hardware and analyses to identify and characterize debris sources and zones contributing to the Liftoff debris risk. The solutions incorporate analyses spanning: the definition and modeling of natural and induced environments; material characterizations; statistical trending analyses, imagery based trajectory analyses; debris transport analyses, and risk assessments. The verification and validation of these analyses are bound by conservative assumptions and anchored by testing and flight data. The Liftoff debris risk mitigation is managed through vigilant collaborative work between the Liftoff Debris Team and Launch Pad Operations personnel and through the management of requirements, interfaces, risk documentation, configurations, and technical data. Furthermore, on day of launch, decision analysis is used to apply the wealth of analyses to case specific identified risks. This presentation describes how the Liftoff Debris Team applies Systems Engineering in their processes to mitigate risk and improve the safety of the Space Shuttle Vehicle.

  17. Introducing Risk Analysis and Calculation of Profitability under Uncertainty in Engineering Design

    ERIC Educational Resources Information Center

    Kosmopoulou, Georgia; Freeman, Margaret; Papavassiliou, Dimitrios V.

    2011-01-01

    A major challenge that chemical engineering graduates face at the modern workplace is the management and operation of plants under conditions of uncertainty. Developments in the fields of industrial organization and microeconomics offer tools to address this challenge with rather well developed concepts, such as decision theory and financial risk…
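
    A classroom-scale example of the kind of calculation such a course introduces is sketched below (Python); the cash flows, scenario probabilities, and discount rate are invented, and the point is simply that profitability is evaluated as an expectation over scenarios rather than as a single deterministic figure.

      # Expected NPV under scenario uncertainty (all figures hypothetical).
      def npv(cash_flows, rate):
          return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

      scenarios = [            # (probability, yearly cash flows; year 0 is the capital cost)
          (0.5, [-10e6, 4e6, 4e6, 4e6, 4e6]),   # base demand
          (0.3, [-10e6, 6e6, 6e6, 6e6, 6e6]),   # high demand
          (0.2, [-10e6, 1e6, 1e6, 1e6, 1e6]),   # low demand
      ]
      rate = 0.10
      expected_npv = sum(p * npv(cf, rate) for p, cf in scenarios)
      print(f"expected NPV: ${expected_npv / 1e6:.2f} M")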

  18. Analysis of KC-46 Live-Fire Risk Mitigation Program Testing

    DTIC Science & Technology

    2012-03-01

    the use of real hardware such as electrohydraulic actuators, electrical units, and converter regulators (Andrus, 2010). The only feasible method for...worked with the MQ-9 as a test engineer and analyst for the program's IOT&E, RQ-4 as lead engineer and program lead for the block 3 and the block 4

  19. Integrating Human Factors into Space Vehicle Processing for Risk Management

    NASA Technical Reports Server (NTRS)

    Woodbury, Sarah; Richards, Kimberly J.

    2008-01-01

    This presentation will discuss the multiple projects performed in United Space Alliance's Human Engineering Modeling and Performance (HEMAP) Lab, improvements that resulted from analysis, and the future applications of the HEMAP Lab for risk assessment by evaluating human/machine interaction and ergonomic designs.

  20. Ergonomic initiatives at Inmetro: measuring occupational health and safety.

    PubMed

    Drucker, L; Amaral, M; Carvalheira, C

    2012-01-01

    This work studies biomechanical hazards to which the workforce of the Instituto Nacional de Metrologia, Qualidade e Tecnologia Industrial (Inmetro) is exposed. It suggests a model for the ergonomic evaluation of work, based on the concepts of resilience engineering, which take into consideration the institute's ability to manage risk and deal with its consequences. The methodology includes the stages of identification, inventory, analysis, and risk management. Diagnosis of the workplace uses as parameters the minimal criteria stated in Brazilian legislation. The approach has several perspectives and encompasses the points of view of public management, safety engineering, physical therapy and ergonomics-oriented design. The suggested solution integrates all aspects of the problem: biological, psychological, sociological and organizational. Results obtained from a pilot project made it possible to build a significant sample of Inmetro's workforce, identifying problems and validating the methodology employed as a tool to be applied to the whole institution. Finally, this work intends to draw risk maps and to support goals and methods based on resilience engineering to assess environmental and ergonomic risk management.

  1. The Systems Engineering Process for Human Support Technology Development

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2005-01-01

    Systems engineering is designing and optimizing systems. This paper reviews the systems engineering process and indicates how it can be applied in the development of advanced human support systems. Systems engineering develops the performance requirements, subsystem specifications, and detailed designs needed to construct a desired system. Systems design is difficult, requiring both art and science and balancing human and technical considerations. The essential systems engineering activity is trading off and compromising between competing objectives such as performance and cost, schedule and risk. Systems engineering is not a complete independent process. It usually supports a system development project. This review emphasizes the NASA project management process as described in NASA Procedural Requirement (NPR) 7120.5B. The process is a top down phased approach that includes the most fundamental activities of systems engineering - requirements definition, systems analysis, and design. NPR 7120.5B also requires projects to perform the engineering analyses needed to ensure that the system will operate correctly with regard to reliability, safety, risk, cost, and human factors. We review the system development project process, the standard systems engineering design methodology, and some of the specialized systems analysis techniques. We will discuss how they could apply to advanced human support systems development. The purpose of advanced systems development is not directly to supply human space flight hardware, but rather to provide superior candidate systems that will be selected for implementation by future missions. The most direct application of systems engineering is in guiding the development of prototype and flight experiment hardware. However, anticipatory systems engineering of possible future flight systems would be useful in identifying the most promising development projects.

  2. Modeling of Commercial Turbofan Engine With Ice Crystal Ingestion: Follow-On

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Coennen, Ryan

    2014-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. In a previous study, analysis of select PSL test data points helped to calibrate the engine icing computational tool to assess the risk of ice accretion. This current study is a continuation of that data analysis effort. The study focused on tracking the variations in wet bulb temperature and ice particle melt ratio through the engine core flow path. The results from this study have identified trends, while also identifying gaps in understanding as to how the local wet bulb temperature and melt ratio affects the risk of ice accretion and subsequent engine behavior.
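
    A back-of-the-envelope energy balance helps interpret the melt-ratio quantity tracked in the study: the fraction of ingested ice that can melt is bounded by the sensible heat the local air can give up in cooling to 0 °C. The sketch below (Python) applies that bound with assumed conditions; it is a crude illustration, not the ice-particle melt code described in the abstract.

      # Crude melt-ratio bound: heat released as air cools to 0 C goes into melting ice.
      cp_air = 1005.0        # J/(kg K), specific heat of air
      L_fusion = 3.34e5      # J/kg, latent heat of fusion of ice

      def melt_ratio_bound(air_temp_K, ice_per_kg_air):
          """Upper-bound melt fraction for an assumed local air temperature (K) and
          ice water content expressed as kg of ice per kg of air."""
          if air_temp_K <= 273.15:
              return 0.0
          heat_available = cp_air * (air_temp_K - 273.15)     # J per kg of air
          heat_needed = L_fusion * ice_per_kg_air             # J per kg of air to melt all the ice
          return min(1.0, heat_available / heat_needed)

      print(f"melt ratio bound at 276 K, 0.02 kg ice/kg air: {melt_ratio_bound(276.0, 0.02):.2f}")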

  3. Modeling of Commercial Turbofan Engine with Ice Crystal Ingestion; Follow-On

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Coennen, Ryan

    2014-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. In a previous study, analysis of select PSL test data points helped to calibrate the engine icing computational tool to assess the risk of ice accretion. This current study is a continuation of that data analysis effort. The study focused on tracking the variations in wet bulb temperature and ice particle melt ratio through the engine core flow path. The results from this study have identified trends, while also identifying gaps in understanding as to how the local wet bulb temperature and melt ratio affects the risk of ice accretion and subsequent engine behavior.

  4. Design Analysis Kit for Optimization and Terascale Applications 6.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to: (1) enhance understanding of risk, (2) improve products, and (3) assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a computational model. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. The algorithms implemented in Dakota aim to address challenges in performing these analyses with complex science and engineering models from desktop to high performance computers.

  5. Cost Risk Analysis Based on Perception of the Engineering Process

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.

    1986-01-01

    In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The managers' evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the distribution underlying the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
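
    The mechanics of turning uncertain technical inputs into a cost risk curve can be sketched as follows (Python); the weight-based cost-estimating relationship, the elicited parameter ranges, and the distributions are placeholders rather than the LaRC parametric model described above.

      # Cost-risk-curve sketch: propagate uncertain engineering parameters through a
      # parametric cost model and read percentiles off the resulting empirical distribution.
      # The CER coefficients and parameter distributions below are hypothetical.
      import random
      random.seed(1)

      def parametric_cost(dry_mass_kg, complexity):
          return 0.8 * dry_mass_kg ** 0.9 * complexity     # $K, placeholder CER

      samples = []
      for _ in range(20_000):
          mass = random.triangular(400.0, 700.0, 500.0)    # elicited low / high / most likely, kg
          cplx = random.lognormvariate(0.0, 0.25)          # complexity multiplier centered near 1.0
          samples.append(parametric_cost(mass, cplx))
      samples.sort()

      for pct in (0.30, 0.50, 0.70, 0.90):
          print(f"{int(pct * 100)}th percentile cost: ${samples[int(pct * len(samples))]:,.0f}K")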

  6. Towards Improved Considerations of Risk in Seismic Design (Plinius Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Sullivan, T. J.

    2012-04-01

    The aftermath of recent earthquakes is a reminder that seismic risk is a very relevant issue for our communities. Implicit within the seismic design standards currently in place around the world is that minimum acceptable levels of seismic risk will be ensured through design in accordance with the codes. All the same, none of the design standards specify what the minimum acceptable level of seismic risk actually is. Instead, a series of deterministic limit states are set which engineers then demonstrate are satisfied for their structure, typically through the use of elastic dynamic analyses adjusted to account for non-linear response using a set of empirical correction factors. From the early nineties the seismic engineering community has begun to recognise numerous fundamental shortcomings with such seismic design procedures in modern codes. Deficiencies include the use of elastic dynamic analysis for the prediction of inelastic force distributions, the assignment of uniform behaviour factors for structural typologies irrespective of the structural proportions and expected deformation demands, and the assumption that hysteretic properties of a structure do not affect the seismic displacement demands, amongst other things. In light of this a number of possibilities have emerged for improved control of risk through seismic design, with several innovative displacement-based seismic design methods now well developed. For a specific seismic design intensity, such methods provide a more rational means of controlling the response of a structure to satisfy performance limit states. While the development of such methodologies does mark a significant step forward for the control of seismic risk, they do not, on their own, identify the seismic risk of a newly designed structure. In the U.S. a rather elaborate performance-based earthquake engineering (PBEE) framework is under development, with the aim of providing seismic loss estimates for new buildings. The PBEE framework consists of the following four main analysis stages: (i) probabilistic seismic hazard analysis to give the mean occurrence rate of earthquake events having an intensity greater than a threshold value, (ii) structural analysis to estimate the global structural response, given a certain value of seismic intensity, (iii) damage analysis, in which fragility functions are used to express the probability that a building component exceeds a damage state, as a function of the global structural response, (iv) loss analysis, in which the overall performance is assessed based on the damage state of all components. This final step gives estimates of the mean annual frequency with which various repair cost levels (or other decision variables) are exceeded. The realisation of this framework does suggest that risk-based seismic design is now possible. However, comparing current code approaches with the proposed PBEE framework, it becomes apparent that mainstream consulting engineers would have to go through a massive learning curve in order to apply the new procedures in practice. With this in mind, it is proposed that simplified loss-based seismic design procedures are a logical means of helping the engineering profession transition from what are largely deterministic seismic design procedures in current codes, to more rational risk-based seismic design methodologies. Examples are provided to illustrate the likely benefits of adopting loss-based seismic design approaches in practice.
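
    The end product of the four-stage framework can be illustrated with the small calculation below (Python), which collapses the chain to a pre-computed table of mean loss ratios versus intensity and integrates it against a discretized hazard curve to obtain an expected annual loss; the rates and loss ratios are invented, and the full framework of course carries the fragility functions and uncertainty treatment summarized above.

      # Expected-annual-loss sketch from a discretized hazard curve and mean loss ratios.
      # (Annual exceedance rates and loss ratios are hypothetical.)
      intensities = [0.1, 0.2, 0.3, 0.4, 0.6]                 # e.g., PGA in g
      exceedance_rates = [0.05, 0.02, 0.008, 0.003, 0.0008]   # annual rate of exceeding each level
      mean_loss_ratio = [0.00, 0.02, 0.08, 0.20, 0.55]        # mean repair cost / replacement cost

      eal = 0.0
      for i in range(len(intensities)):
          rate_above = exceedance_rates[i]
          rate_next = exceedance_rates[i + 1] if i + 1 < len(exceedance_rates) else 0.0
          eal += (rate_above - rate_next) * mean_loss_ratio[i]   # occurrence rate of the bin x loss
      print(f"expected annual loss: {eal:.4%} of replacement cost")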

  7. Constellation Program (CxP) Crew Exploration Vehicle (CEV) Project Integrated Landing System

    NASA Technical Reports Server (NTRS)

    Baker, John D.; Yuchnovicz, Daniel E.; Eisenman, David J.; Peer, Scott G.; Fasanella, Edward L.; Lawrence, Charles

    2009-01-01

    The Crew Exploration Vehicle (CEV) Chief Engineer requested a risk comparison of the Integrated Landing System design developed by NASA and the design developed by the Contractor, referred to as the LM 604 baseline. Based on the results of this risk comparison, the CEV Chief Engineer requested that the NESC evaluate identified risks and develop strategies for their reduction or mitigation. The assessment progressed in two phases. A brief Phase I analysis was performed by the Water versus Land-Landing Team to compare the CEV Integrated Landing System proposed by the Contractor against the NASA TS-LRS001 baseline with respect to risk. A Phase II effort examined the areas of critical importance to the overall landing risk, evaluating risk to the crew and to the CEV Crew Module (CM) during a nominal land-landing. The findings of the assessment are contained in this report.

  8. Risk analysis and management

    NASA Technical Reports Server (NTRS)

    Smith, H. E.

    1990-01-01

    Present software development accomplishments are indicative of the emerging interest in and increasing efforts to provide risk assessment backbone tools in the manned spacecraft engineering community. There are indications that similar efforts are underway in the chemical processes industry and are probably being planned for other high risk ground base environments. It appears that complex flight systems intended for extended manned planetary exploration will drive this technology.

  9. Risk Management using Dependency Structure Matrix

    NASA Astrophysics Data System (ADS)

    Petković, Ivan

    2011-09-01

    An efficient method based on dependency structure matrix (DSM) analysis is given for ranking risks in a complex system or process whose entities are mutually dependent. This ranking is determined by the elements of the unique positive eigenvector associated with the spectral radius of the matrix that models the considered engineering system. For demonstration, the risk problem of NASA's robotic spacecraft is analyzed.
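
    The ranking step can be reproduced with a few lines of power iteration (Python); the 4x4 dependency matrix below is a made-up example in which entry A[i][j] expresses how strongly risk j drives risk i, and the iteration converges to the positive eigenvector associated with the spectral radius.

      # Risk ranking from the dominant (Perron) eigenvector of a dependency structure matrix.
      # The matrix is hypothetical; A[i][j] > 0 means risk j influences risk i.
      A = [
          [0.0, 0.6, 0.1, 0.2],
          [0.3, 0.0, 0.4, 0.1],
          [0.2, 0.5, 0.0, 0.7],
          [0.1, 0.1, 0.3, 0.0],
      ]

      def perron_vector(M, iters=200):
          n = len(M)
          v = [1.0 / n] * n
          for _ in range(iters):
              w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
              s = sum(w)
              v = [wi / s for wi in w]    # renormalize; converges to the dominant eigenvector
          return v

      v = perron_vector(A)
      ranking = sorted(range(len(v)), key=lambda i: v[i], reverse=True)
      print("risk ranking (most to least critical):", ranking)
      print("eigenvector components:", [round(x, 3) for x in v])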

  10. Development of Management Methodology for Engineering Production Quality

    NASA Astrophysics Data System (ADS)

    Gorlenko, O.; Miroshnikov, V.; Borbatc, N.

    2016-04-01

    The authors of the paper propose four directions for developing a methodology for the quality management of engineering products that implements the requirements of the new international standard ISO 9001:2015: analysis of the organizational context taking stakeholders into account, the use of risk management, management of in-house knowledge, and assessment of enterprise activity according to effectiveness criteria.

  11. Failure environment analysis tool applications

    NASA Astrophysics Data System (ADS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-02-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure; or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT also is useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior, by providing an automated environment in which to conduct 'what-if' evaluation. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.
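
    The two questions FEAT answers map naturally onto reachability in a directed cause-and-effect graph: following edges forward gives the possible effects of a failure, and following them backward gives its possible causes. The sketch below (Python) shows that idea on an invented failure-propagation model and is not the FEAT implementation itself.

      # Failure-propagation sketch: forward reachability answers "what effects could this
      # failure have?"; reverse reachability answers "what could have caused this failure?".
      # The cause -> effect edges are hypothetical.
      from collections import deque

      effects_of = {
          "valve stuck closed": ["low fuel flow"],
          "pump degradation":   ["low fuel flow"],
          "low fuel flow":      ["combustor flameout"],
          "combustor flameout": ["loss of thrust"],
      }

      def reachable(graph, start):
          seen, queue = set(), deque([start])
          while queue:
              node = queue.popleft()
              for nxt in graph.get(node, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(nxt)
          return seen

      causes_of = {}            # invert the graph to trace backward from an observed failure
      for cause, effects in effects_of.items():
          for eff in effects:
              causes_of.setdefault(eff, []).append(cause)

      print("effects of 'valve stuck closed':", reachable(effects_of, "valve stuck closed"))
      print("possible causes of 'loss of thrust':", reachable(causes_of, "loss of thrust"))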

  12. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within itself the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  13. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1994-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within itself the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  14. Environmental Engineering in the Slovak Republic

    NASA Astrophysics Data System (ADS)

    Stevulova, N.; Balintova, M.; Zelenakova, M.; Estokova, A.; Vilcekova, S.

    2017-10-01

    The fundamental role of environmental engineering is to protect the human population and the environment from the impacts of human activities and to ensure environmental quality. It relates to achieving environmental sustainability goals through advanced technologies for removing pollutants from air, water and soil, in order to minimize risks to ecosystems and ensure favourable conditions for the life of humans and other organisms. Nowadays, a critical analysis of environmental quality and innovative approaches to problem solving are necessary in order to achieve sustainability in environmental engineering. This article presents an overview of the quality of the environment and progress in environmental engineering in Slovakia and gives information regarding environmental engineering education at the Faculty of Civil Engineering of the Technical University in Kosice.

  15. Computational Infrastructure for Engine Structural Performance Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefit estimation for new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique infrastructure ready to credibly evaluate new and future engine structural concepts throughout the development cycle: from initial concept, to design and fabrication, to service performance, maintenance and repairs, and to retirement for cause and even possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.

  16. Rocket Engine Nozzle Side Load Transient Analysis Methodology: A Practical Approach

    NASA Technical Reports Server (NTRS)

    Shi, John J.

    2005-01-01

    During the development stage, in order to design and size the rocket engine components and to reduce risk, the local dynamic environments as well as the dynamic interface loads must be defined. There are two kinds of dynamic environment: shock transients, and steady-state random and sinusoidal vibration environments. Usually, the steady-state random and sinusoidal vibration environments are scalable, but the shock environments are not. In other words, based on similarity, only random vibration environments can be defined for a new engine. The methodology covered in this paper provides a way to predict the shock environments and the dynamic loads for new engine systems and new engine components in the early stage of new engine development or engine nozzle modifications.

  17. Cost/benefit analysis of advanced materials technology candidates for the 1980's, part 2

    NASA Technical Reports Server (NTRS)

    Dennis, R. E.; Maertins, H. F.

    1980-01-01

    Cost/benefit analyses used to evaluate advanced material technology projects considered for general aviation and turboprop commuter aircraft through estimated life-cycle costs, direct operating costs, and development costs are discussed. Specifically addressed are the selection of technologies to be evaluated; the development of property goals; the assessment of candidate technologies on typical engines and aircraft; sensitivity analysis of the effect of changes in property goals on performance and economics; cost and risk analysis for each technology; and the ranking of each technology by relative value. The cost/benefit analysis was applied to a domestic, nonrevenue-producing, business-type jet aircraft configured with two TFE731-3 turbofan engines, and to a domestic, nonrevenue-producing, business-type turboprop aircraft configured with two TPE331-10 turboprop engines. In addition, a cost/benefit analysis was applied to a commercial turboprop aircraft configured with a growth version of the TPE331-10.

  18. Distinguishing nanomaterial particles from background airborne particulate matter for quantitative exposure assessment

    NASA Astrophysics Data System (ADS)

    Ono-Ogasawara, Mariko; Serita, Fumio; Takaya, Mitsutoshi

    2009-10-01

    As the production of engineered nanomaterials expands, the chance that workers involved in the manufacturing process will be exposed to nanoparticles also increases. A risk management system based on the precautionary principle is needed for workplaces in the nanomaterial industry. One of the problems in such a risk management system is the difficulty of exposure assessment. In this article, examples of exposure assessment in nanomaterial industries are reviewed, with a focus on distinguishing engineered nanomaterial particles from background nanoparticles in the workplace atmosphere. An approach by JNIOSH (Japan National Institute of Occupational Safety and Health) to quantitatively measure exposure to carbonaceous nanomaterials is also introduced. In addition to real-time measurements and qualitative analysis by electron microscopy, quantitative chemical analysis is necessary for assessing exposure to nanomaterials. Chemical analysis is suitable for quantitative exposure measurement, especially at facilities with high levels of background nanoparticles.

  19. Inspection planning development: An evolutionary approach using reliability engineering as a tool

    NASA Technical Reports Server (NTRS)

    Graf, David A.; Huang, Zhaofeng

    1994-01-01

    This paper proposes an evolutionary approach to inspection planning that introduces various reliability engineering tools into the process and assesses system trade-offs among reliability, engineering requirements, manufacturing capability, and inspection cost to establish an optimal inspection plan. The examples presented in the paper illustrate some advantages and benefits of the new approach. Through the analysis, reliability and engineering impacts due to manufacturing process capability and inspection uncertainty are clearly understood; the most cost-effective and efficient inspection plan can be established and the associated risks are well controlled; some inspection reductions and relaxations are well justified; and design feedback and changes may be initiated from the analysis conclusions to further enhance reliability and reduce cost. The approach is particularly promising as global competition and customer expectations for quality improvement rapidly increase.

  20. An Assessment of the Effectiveness of Air Force Risk Management Practices in Program Acquisition Using Survey Instrument Analysis

    DTIC Science & Technology

    2015-06-18

    Engineering Effectiveness Survey. CMU/SEI-2012-SR-009. Carnegie Mellon University. November 2012. Field, Andy. Discovering Statistics Using SPSS, 3rd ... enough into the survey to begin answering questions on risk practices. All of the data statistical analysis will be performed using SPSS. Prior to ... probabilistically using distributions for likelihood and impact. Statistical methods like Monte Carlo can more comprehensively evaluate the cost and

  1. Occupational exposure to diesel engine emissions and risk of lung cancer: evidence from two case-control studies in Montreal, Canada.

    PubMed

    Pintos, Javier; Parent, Marie-Elise; Richardson, Lesley; Siemiatycki, Jack

    2012-11-01

    To examine the risk of lung cancer among men associated with exposure to diesel engine emissions incurred in a wide range of occupations and industries, two population-based lung cancer case-control studies were conducted in Montreal. Study I (1979-1986) comprised 857 cases and 533 population controls; Study II (1996-2001) comprised 736 cases and 894 population controls. A detailed job history was obtained, from which we inferred lifetime occupational exposure to 294 agents, including diesel engine emissions. ORs were estimated for each study and in the pooled data set, adjusting for socio-demographic factors, smoking history and selected occupational carcinogens. While it proved impossible to retrospectively estimate absolute exposure concentrations, estimates and analyses were made using relative measures of cumulative exposure. Increased risks of lung cancer were found in both studies. The pooled analysis showed an OR of lung cancer associated with substantial exposure to diesel exhaust of 1.80 (95% CI 1.3 to 2.6). The risk associated with substantial exposure was higher for squamous cell carcinomas (OR 2.09; 95% CI 1.3 to 3.2) than for other histological types. Joint effects between diesel exhaust exposure and tobacco smoking are compatible with a multiplicative synergistic effect. Our findings provide further evidence supporting a causal link between diesel engine emissions and risk of lung cancer. The risk is stronger for the development of squamous cell carcinomas than for small cell tumours or adenocarcinomas.
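
    For readers unfamiliar with how an odds ratio and its confidence interval are obtained, the sketch below computes a crude (unadjusted) OR from a hypothetical 2x2 exposure table using Woolf's log-OR standard error; the counts are invented and this does not reproduce the study's adjusted logistic-regression estimates.

    ```python
    # Illustrative odds ratio and 95% CI from a made-up 2x2 exposure table.
    import math

    def odds_ratio_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls, z=1.96):
        a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # Hypothetical counts: exposed cases/controls, unexposed cases/controls
    or_, lo, hi = odds_ratio_ci(120, 90, 800, 1100)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
    ```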

  2. Problem formulation and option assessment (PFOA) linking governance and environmental risk assessment for technologies: a methodology for problem analysis of nanotechnologies and genetically engineered organisms.

    PubMed

    Nelson, Kristen C; Andow, David A; Banker, Michael J

    2009-01-01

    Societal evaluation of new technologies, specifically nanotechnology and genetically engineered organisms (GEOs), challenges current practices of governance and science. Employing environmental risk assessment (ERA) for governance and oversight assumes we have a reasonable ability to understand consequences and predict adverse effects. However, traditional ERA has come under considerable criticism for its many shortcomings, and current governance institutions have demonstrated limitations in transparency, public input, and capacity. Problem Formulation and Options Assessment (PFOA) is a methodology founded on three key concepts in risk assessment (science-based consideration, deliberation, and multi-criteria analysis) and three in governance (participation, transparency, and accountability). Developed through a series of international workshops, the PFOA process emphasizes engagement with stakeholders in iterative stages, from identification of the problem(s) through comparison of multiple technology solutions that could be used in the future, with their relative benefits, harms, and risks. It provides "upstream public engagement" in a deliberation informed by science that identifies values for improved decision making.

  3. Introducing Risk Management Techniques Within Project Based Software Engineering Courses

    NASA Astrophysics Data System (ADS)

    Port, Daniel; Boehm, Barry

    2002-03-01

    In 1996, USC switched its core two-semester software engineering course away from a hypothetical-project, homework-and-exam format based on the Bloom taxonomy of educational objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation). The revised course is a real-client, team-project course based on the CRESST model of learning objectives (content understanding, problem solving, collaboration, communication, and self-regulation). We used the CRESST cognitive demands analysis to determine the student skills required for software risk management and the other major project activities, and have been refining the approach over the last 5 years of experience, including revised versions for one-semester undergraduate and graduate project courses at Columbia. This paper summarizes our experiences in evolving the risk management aspects of the project course. These have helped us mature more general techniques such as risk-driven specifications, domain-specific simplifier and complicator lists, and the schedule as an independent variable (SAIV) process model. The largely positive results in terms of pass/fail rates, client evaluations, product adoption rates, and hiring manager feedback are summarized as well.

  4. Engineering risk assessment for emergency disposal projects of sudden water pollution incidents.

    PubMed

    Shi, Bin; Jiang, Jiping; Liu, Rentao; Khan, Afed Ullah; Wang, Peng

    2017-06-01

    Without an engineering risk assessment for emergency disposal in response to sudden water pollution incidents, responders are prone to being challenged during emergency decision making. To address this gap, the concept and framework of emergency disposal engineering risks are reported in this paper. The proposed risk index system covers three stages consistent with the progress of an emergency disposal project. Fuzzy fault tree analysis (FFTA), a logical and diagrammatic method, was developed to evaluate potential failures during the process of emergency disposal. The probabilities of the basic events, and of their combinations, that caused the failure of an emergency disposal project were calculated based on the case of an emergency disposal project for an aniline pollution incident in the Zhuozhang River, Changzhi, China, in 2014. The critical events that can cause the occurrence of the top event (TE) were identified according to their contributions. Finally, advice on how to take measures using limited resources to prevent the occurrence of the TE is given according to the quantified results of risk magnitude. The proposed approach could be a potentially useful safeguard for the implementation of an emergency disposal project during the process of emergency response.
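
    A minimal, crisp fault-tree sketch in the spirit of the FFTA described above is shown below: a top-event probability is assembled from hypothetical basic-event probabilities through AND/OR gates under an independence assumption. A full FFTA would replace the point probabilities with fuzzy numbers and propagate their membership functions; the events and values here are invented.

    ```python
    # Crisp fault-tree combination of hypothetical basic-event probabilities.
    from functools import reduce

    def and_gate(probs):
        """All inputs must occur (independence assumed)."""
        return reduce(lambda acc, p: acc * p, probs, 1.0)

    def or_gate(probs):
        """At least one input occurs (independence assumed)."""
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

    # Hypothetical basic events for an emergency-disposal project
    p_late_detection   = 0.05
    p_equipment_outage = 0.02
    p_dosing_error     = 0.03
    p_storm_runoff     = 0.10

    # Top event: late detection OR (equipment outage AND storm runoff) OR dosing error
    p_top = or_gate([
        p_late_detection,
        and_gate([p_equipment_outage, p_storm_runoff]),
        p_dosing_error,
    ])
    print(f"top-event probability: {p_top:.4f}")
    ```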

  5. An Agent-Based Model of Evolving Community Flood Risk.

    PubMed

    Tonn, Gina L; Guikema, Seth D

    2018-06-01

    Although individual behavior plays a major role in community flood risk, traditional flood risk models generally do not capture information on how community policies and individual decisions impact the evolution of flood risk over time. The purpose of this study is to improve the understanding of the temporal aspects of flood risk through a combined analysis of the behavioral, engineering, and physical hazard aspects of flood risk. Additionally, the study aims to develop a new modeling approach for integrating behavior, policy, flood hazards, and engineering interventions. An agent-based model (ABM) is used to analyze the influence of flood protection measures, individual behavior, and the occurrence of floods and near-miss flood events on community flood risk. The ABM focuses on the following decisions and behaviors: dissemination of flood management information, installation of community flood protection, elevation of household mechanical equipment, and elevation of homes. The approach is place based, with a case study area in Fargo, North Dakota, but is focused on generalizable insights. Generally, community mitigation results in reduced future damage, and individual action, including mitigation and movement into and out of high-risk areas, can have a significant influence on community flood risk. The results of this study provide useful insights into the interplay between individual and community actions and how it affects the evolution of flood risk. This study lends insight into priorities for future work, including the development of more in-depth behavioral and decision rules at the individual and community level. © 2017 Society for Risk Analysis.
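
    The feedback loop described above, in which floods and near-miss events prompt mitigation that then lowers later damage, can be caricatured with a tiny agent-based sketch such as the one below; every probability, damage value, and behavioural rule is an invented placeholder rather than the paper's calibrated Fargo model.

    ```python
    # Toy agent-based sketch: households may mitigate after a flood or near-miss,
    # which reduces their damage in later floods. All numbers are illustrative.
    import random

    random.seed(42)
    N_HOUSEHOLDS, YEARS = 1000, 30
    P_FLOOD, P_NEAR_MISS = 0.05, 0.10          # annual event probabilities (assumed)
    P_MITIGATE_AFTER_EVENT = 0.30              # chance a household acts after an event
    DAMAGE, DAMAGE_MITIGATED = 100.0, 40.0     # per-household damage if flooded

    mitigated = [False] * N_HOUSEHOLDS
    yearly_damage = []
    for year in range(YEARS):
        flood = random.random() < P_FLOOD
        near_miss = (not flood) and random.random() < P_NEAR_MISS
        damage = 0.0
        for i in range(N_HOUSEHOLDS):
            if flood:
                damage += DAMAGE_MITIGATED if mitigated[i] else DAMAGE
            # behavioural response: events prompt some households to mitigate
            if (flood or near_miss) and not mitigated[i]:
                if random.random() < P_MITIGATE_AFTER_EVENT:
                    mitigated[i] = True
        yearly_damage.append(damage)

    print("share mitigated after 30 years:", sum(mitigated) / N_HOUSEHOLDS)
    print("total community damage:", sum(yearly_damage))
    ```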

  6. Instability risk analysis and risk assessment system establishment of underground storage caverns in bedded salt rock

    NASA Astrophysics Data System (ADS)

    Jing, Wenjun; Zhao, Yan

    2018-02-01

    Stability is an important part of geotechnical engineering research. Operating experience with underground storage caverns in salt rock around the world shows that cavern stability is the key problem for safe operation. Currently, the combination of theoretical analysis and numerical simulation is the mainly adopted approach to cavern stability analysis. This paper introduces the concept of risk into the stability analysis of underground geotechnical structures and studies the instability of underground storage caverns in salt rock from the perspective of risk analysis. Firstly, a definition and classification of cavern instability risk are proposed, and the damage mechanism is analyzed from a mechanical standpoint. Then the main evaluation indicators of cavern instability risk are proposed, and an evaluation method for cavern instability risk is put forward. Finally, the established cavern instability risk assessment system is applied to the analysis and prediction of cavern instability risk after 30 years of operation for a proposed storage cavern group in the Huai’an salt mine. This research can provide a useful theoretical basis for the safe operation and management of underground storage caverns in salt rock.

  7. Risk factors of jet fuel combustion products.

    PubMed

    Tesseraux, Irene

    2004-04-01

    Air travel is increasing and airports are being newly built or enlarged. Concern is rising about exposure to toxic combustion products among the population living in the vicinity of large airports. Jet fuels are well characterized regarding their physical and chemical properties. Health effects of fuel vapors and liquid fuel have been described after occupational exposure and in animal studies. Rather less is known about the combustion products of jet fuels and exposure to them. Aircraft emissions vary with the engine type, the engine load and the fuel. Among jet aircraft there are differences between civil and military jet engines and their fuels. Combustion of jet fuel results in CO2, H2O, CO, C, NOx, particles and a great number of organic compounds. Among the emitted hydrocarbons (HCs), no indicator compound characteristic of jet engines has been detected so far. Jet engines do not seem to be a source of halogenated compounds or heavy metals. Their emissions do, however, contain various toxicologically relevant compounds, including carcinogenic substances. A comparison between organic compounds in the emissions of jet engines and diesel vehicle engines revealed no major differences in composition. Risk factors of jet engine exhaust can only be identified in the context of exposure data. Using available monitoring data, the possibilities and limitations of a risk assessment approach for the population living around large airports are presented. The analysis of such data shows that there is an impact on the air quality of the adjacent communities, but this impact does not result in levels higher than those in a typical urban environment.

  8. Development of Rock Engineering Systems-Based Models for Flyrock Risk Analysis and Prediction of Flyrock Distance in Surface Blasting

    NASA Astrophysics Data System (ADS)

    Faramarzi, Farhad; Mansouri, Hamid; Farsangi, Mohammad Ali Ebrahimi

    2014-07-01

    The environmental effects of blasting must be controlled in order to comply with regulatory limits. Because of safety concerns, the risk of damage to infrastructure, equipment, and property, and the need for good fragmentation, flyrock control is crucial in blasting operations. If measures to decrease flyrock are taken, then the flyrock distance is limited and, in turn, the risk of damage can be reduced or eliminated. This paper deals with modeling the level of risk associated with flyrock, and also with flyrock distance prediction, based on the rock engineering systems (RES) methodology. In the proposed models, 13 parameters that affect flyrock due to blasting are considered as inputs, and the flyrock distance and associated level of risk as outputs. In selecting input data, the simplicity of measuring the input data was taken into account as well. The data for 47 blasts, carried out at the Sungun copper mine, western Iran, were used to predict the level of risk and flyrock distance corresponding to each blast. The obtained results showed that, for the 47 blasts carried out at the Sungun copper mine, the estimated risk levels are mostly in accordance with the measured flyrock distances. Furthermore, a comparison was made between the results of the flyrock distance predictive RES-based model, the multivariate regression analysis model (MVRM), and also the dimensional analysis model. For the RES-based model, R² and root mean square error (RMSE) are equal to 0.86 and 10.01, respectively, whereas for the MVRM and dimensional analysis, R² and RMSE are equal to (0.84 and 12.20) and (0.76 and 13.75), respectively. These results confirm the better performance of the RES-based model over the other proposed models.
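
    The two goodness-of-fit measures quoted above (R² and RMSE) can be computed as in the sketch below; the observed and predicted flyrock distances are invented for illustration and are not the mine data.

    ```python
    # Compute R^2 and RMSE for a set of observed vs. predicted values (illustrative data).
    import numpy as np

    def r2_and_rmse(observed, predicted):
        observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
        residuals = observed - predicted
        ss_res = np.sum(residuals ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        rmse = np.sqrt(np.mean(residuals ** 2))
        return r2, rmse

    observed  = [55, 72, 48, 90, 63, 81]   # measured flyrock distances (m), made up
    predicted = [58, 69, 52, 84, 66, 78]   # model output (m), made up
    r2, rmse = r2_and_rmse(observed, predicted)
    print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} m")
    ```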

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. VanHorn; N. L. Hampton; R. C. Morris

    This document presents reference material for conducting screening level ecological risk assessments (SLERAs) for the waste area groups (WAGs) at the Idaho National Engineering Laboratory. Included in this document are discussions of the objectives of and processes for conducting SLERAs. The Environmental Protection Agency ecological risk assessment framework is closely followed. Guidance for site characterization, stressor characterization, ecological effects, pathways of contaminant migration, the conceptual site model, assessment endpoints, measurement endpoints, analysis, and risk characterization is included.

  10. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
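
    A toy flavour of the kind of static semantic check described can be given with dimensional bookkeeping, as below; the declaration table, rule, and variable names are invented and are not the paper's actual parser design.

    ```python
    # Toy semantic check: flag terms that are added together but carry different dimensions.
    DECLARED = {                                       # hypothetical semantic declarations
        "pressure": {"kg": 1, "m": -1, "s": -2},
        "density":  {"kg": 1, "m": -3},
        "velocity": {"m": 1, "s": -1},
    }

    def dims_of_product(*names):
        """Combine dimensions of multiplied variables by summing exponents."""
        combined = {}
        for name in names:
            for unit, power in DECLARED[name].items():
                combined[unit] = combined.get(unit, 0) + power
        return {u: p for u, p in combined.items() if p != 0}

    # Check a Bernoulli-like sum: pressure + density * velocity^2 must be dimensionally consistent.
    dynamic_pressure_dims = dims_of_product("density", "velocity", "velocity")
    if dynamic_pressure_dims == DECLARED["pressure"]:
        print("OK: terms share dimensions", dynamic_pressure_dims)
    else:
        print("semantic error: mismatched dimensions", dynamic_pressure_dims, DECLARED["pressure"])
    ```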

  11. Assessment of heavy metals in tilapia fish (Oreochromis niloticus) from the Langat River and Engineering Lake in Bangi, Malaysia, and evaluation of the health risk from tilapia consumption.

    PubMed

    Taweel, Abdulali; Shuhaimi-Othman, M; Ahmad, A K

    2013-07-01

    Concentrations of the heavy metals copper (Cu), cadmium (Cd), zinc (Zn), lead (Pb) and nickel (Ni) were determined in the liver, gills and muscles of tilapia fish from the Langat River and Engineering Lake, Bangi, Selangor, Malaysia. There were differences in the concentrations of the studied heavy metals between different organs and between sites. In the liver samples, Cu>Zn>Ni>Pb>Cd, and in the gills and muscle, Zn>Ni>Cu>Pb>Cd. Levels of Cu, Cd, Zn and Pb in the liver samples from Engineering Lake were higher than in those from the Langat River, whereas the Ni levels in the liver samples from the Langat River were greater than in those from Engineering Lake. Cd levels in the fish muscle from Engineering Lake were lower than in that from the Langat River. Meanwhile, the Cd, Zn and Pb levels in the fish muscle from the Langat River were lower than in that from Engineering Lake, and the Ni levels were almost the same in the fish muscle samples from the two sites. The health risks associated with Cu, Cd, Zn, Pb and Ni were assessed based on the target hazard quotients. In the Langat River, the risk from Cu is minimal compared to the other studied elements, and the concentrations of Pb and Ni were determined to pose the greatest risk. The maximum allowable fish consumption rates (kg/d) based on Cu in Engineering Lake and the Langat River were 2.27 and 1.51 in December and 2.53 and 1.75 in February, respectively. The Cu concentrations resulted in the highest maximum allowable fish consumption rates compared with the other studied heavy metals, whereas those based on Pb were the lowest. A health risk analysis of the heavy metals measured in the fish muscle samples indicated that the fish can be classified at one of the safest levels for the general population and that there are no possible risks pertaining to tilapia fish consumption. Copyright © 2013 Elsevier Inc. All rights reserved.
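
    As background, a target hazard quotient of the general kind used in such assessments can be sketched as below (one common US EPA-style formulation); every input value, including the reference dose, is an illustrative assumption, and none of the paper's measured concentrations are reproduced.

    ```python
    # Hedged sketch of a target hazard quotient (THQ) calculation; THQ > 1 suggests potential concern.
    def target_hazard_quotient(conc_mg_per_kg, intake_g_per_day, body_weight_kg,
                               rfd_mg_per_kg_day, exposure_freq_days=365, exposure_years=30):
        averaging_time_days = exposure_years * 365
        daily_intake_kg = intake_g_per_day / 1000.0
        exposure_dose = (exposure_freq_days * exposure_years * daily_intake_kg *
                         conc_mg_per_kg) / (body_weight_kg * averaging_time_days)
        return exposure_dose / rfd_mg_per_kg_day

    # e.g. a metal at 0.5 mg/kg in muscle, 50 g of fish per day, 60 kg adult,
    # oral reference dose 0.004 mg/kg/day (all values illustrative)
    print(round(target_hazard_quotient(0.5, 50, 60, 0.004), 2))
    ```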

  12. Cognitive engineering and health informatics: Applications and intersections.

    PubMed

    Hettinger, A Zachary; Roth, Emilie M; Bisantz, Ann M

    2017-03-01

    Cognitive engineering is an applied field with roots in both cognitive science and engineering that has been used to support design of information displays, decision support, human-automation interaction, and training in numerous high risk domains ranging from nuclear power plant control to transportation and defense systems. Cognitive engineering provides a set of structured, analytic methods for data collection and analysis that intersect with and complement methods of Cognitive Informatics. These methods support discovery of aspects of the work that make performance challenging, as well as the knowledge, skills, and strategies that experts use to meet those challenges. Importantly, cognitive engineering methods provide novel representations that highlight the inherent complexities of the work domain and traceable links between the results of cognitive analyses and actionable design requirements. This article provides an overview of relevant cognitive engineering methods, and illustrates how they have been applied to the design of health information technology (HIT) systems. Additionally, although cognitive engineering methods have been applied in the design of user-centered informatics systems, methods drawn from informatics are not typically incorporated into a cognitive engineering analysis. This article presents a discussion regarding ways in which data-rich methods can inform cognitive engineering. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William

    2009-01-01

    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods needed to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision making environment that is sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
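
    A minimal example of the kind of Bayesian updating such guidance covers is the conjugate Beta-Binomial model below: a Beta prior on a demand failure probability is updated with invented test data; the prior parameters and counts are assumptions for illustration only.

    ```python
    # Beta-Binomial update of a failure-on-demand probability (illustrative numbers).
    from scipy import stats

    prior_alpha, prior_beta = 1.0, 19.0          # prior belief: mean failure probability = 0.05
    failures, trials = 2, 100                    # observed demand data (invented)

    post_alpha = prior_alpha + failures
    post_beta = prior_beta + (trials - failures)
    posterior = stats.beta(post_alpha, post_beta)

    print("posterior mean failure probability:", round(posterior.mean(), 4))
    print("90% credible interval:", [round(q, 4) for q in posterior.ppf([0.05, 0.95])])
    ```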

  14. System Safety and the Unintended Consequence

    NASA Technical Reports Server (NTRS)

    Watson, Clifford

    2012-01-01

    The analysis and identification of risks often result in design changes or modification of operational steps. This paper identifies the potential of unintended consequences as an over-looked result of these changes. Examples of societal changes such as prohibition, regulatory changes including mandating lifeboats on passenger ships, and engineering proposals or design changes to automobiles and spaceflight hardware are used to demonstrate that the System Safety Engineer must be cognizant of the potential for unintended consequences as a result of an analysis. Conclusions of the report indicate the need for additional foresight and consideration of the potential effects of analysis-driven design, processing changes, and/or operational modifications.

  15. Safety analysis report for the use of hazardous production materials in photovoltaic applications at the National Renewable Energy Laboratory. Volume 2, Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crandall, R.S.; Nelson, B.P.; Moskowitz, P.D.

    1992-07-01

    To ensure the continued safety of SERI's employees, the community, and the environment, NREL commissioned an internal audit of its photovoltaic operations that used hazardous production materials (HPMs). As a result of this audit, NREL management voluntarily suspended all operations using toxic and/or pyrophoric gases. This suspension affected seven laboratories and ten individual deposition systems. These activities are located in Building 16, which has a permitted occupancy of Group B, Division 2 (B-2). NREL management decided to do the following: (1) Exclude from this SAR all operations which conformed, or could easily be made to conform, to B-2 Occupancy requirements. (2) Include in this SAR all operations that could be made to conform to B-2 Occupancy requirements with special administrative and engineering controls. (3) Move all operations that could not practically be made to conform to B-2 Occupancy requirements to alternate locations. In addition to the layered set of administrative and engineering controls set forth in this SAR, a semiquantitative risk analysis was performed on 30 accident scenarios. Twelve presented only routine risks, while 18 presented low risks. Considering the demonstrated safe operating history of NREL in general and of these systems specifically, the nature of the risks identified, and the layered set of administrative and engineering controls, it is clear that this facility falls within the DOE Low Hazard Class. Each operation can restart only after it has passed an Operational Readiness Review comparing it to the requirements of this SAR, while subsequent safety inspections will ensure future compliance. This document contains the appendices to the NREL safety analysis report.

  16. Risk Management Technique for design and operation of facilities and equipment

    NASA Technical Reports Server (NTRS)

    Fedor, O. H.; Parsons, W. N.; Coutinho, J. De S.

    1975-01-01

    The Risk Management System collects information from engineering, operating, and management personnel to identify potentially hazardous conditions. This information is used in risk analysis, problem resolution, and contingency planning. The resulting hazard accountability system enables management to monitor all identified hazards. Data from this system are examined in project reviews so that management can decide to eliminate or accept these risks. This technique is particularly effective in improving the management of risks in large, complex, high-energy facilities. These improvements are needed for increased cooperation among industry, regulatory agencies, and the public.

  17. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing stability failure risk for gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.
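
    To show only the Monte Carlo half of such a method (without the credibility-theory treatment of fuzziness), the sketch below estimates a sliding-failure probability for a simplified gravity-dam limit state; the distributions, geometry, and limit-state form are invented assumptions, not the paper's case study.

    ```python
    # Monte Carlo estimate of P(sliding failure) for a simplified gravity-dam limit state:
    # resisting force = friction * (weight - uplift) + cohesion * base area, vs. water thrust.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    friction  = rng.normal(0.70, 0.07, N)          # base friction coefficient
    cohesion  = rng.normal(500.0, 100.0, N)        # kPa
    weight    = rng.normal(80_000.0, 4_000.0, N)   # kN per metre of dam
    uplift    = rng.normal(20_000.0, 3_000.0, N)   # kN per metre
    thrust    = rng.normal(45_000.0, 6_000.0, N)   # horizontal water load, kN per metre
    base_area = 60.0                               # m^2 per metre of dam

    resisting = friction * (weight - uplift) + cohesion * base_area
    safety_factor = resisting / thrust
    print("estimated P(sliding failure):", np.mean(safety_factor < 1.0))
    ```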

  18. Bladder cancer and occupational exposure to diesel and gasoline engine emissions among Canadian men.

    PubMed

    Latifovic, Lidija; Villeneuve, Paul J; Parent, Marie-Élise; Johnson, Kenneth C; Kachuri, Linda; Harris, Shelley A

    2015-12-01

    The International Agency for Research on Cancer has classified diesel exhaust as a carcinogen based on lung cancer evidence; however, few studies have investigated the effect of engine emissions on bladder cancer. The purpose of this study was to investigate the association between occupational exposure to diesel and gasoline emissions and bladder cancer in men using data from the Canadian National Enhanced Cancer Surveillance System, a population-based case-control study. This analysis included 658 bladder cancer cases and 1360 controls with information on lifetime occupational histories and a large number of possible cancer risk factors. A job-exposure matrix for engine emissions was supplemented by expert review to assign values for each job across three dimensions of exposure: concentration, frequency, and reliability. Odds ratios (ORs) and their corresponding 95% confidence intervals were estimated using logistic regression. Relative to the unexposed, men ever exposed to high concentrations of diesel emissions were at an increased risk of bladder cancer (OR = 1.64, 0.87-3.08), but this result was not significant, and those with >10 years of exposure to diesel emissions at high concentrations had a greater than twofold increase in risk (OR = 2.45, 1.04-5.74). Increased risk of bladder cancer was also observed with >30% of work time exposed to gasoline engine emissions (OR = 1.59, 1.04-2.43) relative to the unexposed, but only among men who had never been exposed to diesel emissions. Taken together, our findings support the hypothesis that exposure to high concentrations of diesel engine emissions may increase the risk of bladder cancer. © 2015 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  19. [Carcinogenic effects of diesel emission: an epidemiological review].

    PubMed

    Szadkowska-Stańczyk, I; Ruszkowska, J

    2000-01-01

    The results of recent epidemiological studies and meta-analyses relating to the carcinogenic effects of diesel emissions in exposed populations are reviewed. A statistical, though not yet established as causal, association between the risk of lung cancer and occupational exposure to diesel emissions was found in a large number of the studies under review. Long-term exposure to diesel exhaust (>20 years) increases lung cancer risk by 30-40% in workers of the transport industry: truck drivers, diesel engine mechanics, locomotive engineers and brakemen. The results are inconsistent among heavy equipment operators, bus drivers and miners. The relative risk of lung cancer among workers occupationally exposed to diesel emissions may be comparable with that of environmental tobacco smoke. Further research is also needed on carcinogenic mechanisms, and biomarkers of exposure should be developed and validated before reliable quantitative estimates of the risk of harmful effects to human health in occupational settings are made.

  20. Evaluation of Flooding Risk and Engineering Protection Against Floods for Ulan-Ude

    NASA Astrophysics Data System (ADS)

    Borisova, T. A.

    2017-11-01

    The report presents the results of a study on flood risk analysis and assessment for Ulan-Ude and provides recommendations for engineering protection of the population and economic installations. The current situation is reviewed, and the results of a site survey are presented to identify the challenges, the areas of negative water influence, and the existing protection system. The report includes a summary of past floods and an index-based risk assessment. The article describes the extent of potential flooding and underflooding and enumerates the economic installations within the studied urban flood zones at design water levels of given exceedance probability. An assessment of damage from a flood with a 1% exceedance probability is presented.

  1. Equipment management risk rating system based on engineering endpoints.

    PubMed

    James, P J

    1999-01-01

    The equipment management risk rating system outlined here offers two significant departures from current practice: risk classifications are based on intrinsic device risks, and the risk rating system is based on engineering endpoints. Intrinsic device risks are categorized as physical, clinical and technical, and these flow from the incoming equipment assessment process. Engineering risk management is based on verification of engineering endpoints such as clinical measurements or energy delivery. This practice eliminates the ambiguity associated with ranking risk in terms of physiologic and higher-level outcome endpoints such as no significant hazards, low significance, injury, or mortality.

  2. Risk-trading in flood management: An economic model.

    PubMed

    Chang, Chiung Ting

    2017-09-15

    Although flood management is no longer exclusively a topic of engineering, flood mitigation continues to be associated with hard engineering options. Flood adaptation or the capacity to adapt to flood risk, as well as a demand for internalizing externalities caused by flood risk between regions, complicate flood management activities. Even though integrated river basin management has long been recommended to resolve the above issues, it has proven difficult to apply widely, and sometimes even to bring into existence. This article explores how internalization of externalities as well as the realization of integrated river basin management can be encouraged via the use of a market-based approach, namely a flood risk trading program. In addition to maintaining efficiency of optimal resource allocation, a flood risk trading program may also provide a more equitable distribution of benefits by facilitating decentralization. This article employs a graphical analysis to show how flood risk trading can be implemented to encourage mitigation measures that increase infiltration and storage capacity. A theoretical model is presented to demonstrate the economic conditions necessary for flood risk trading. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Assessment of pleiotropic transcriptome perturbations in Arabidopsis engineered for indirect insect defence.

    PubMed

    Houshyani, Benyamin; van der Krol, Alexander R; Bino, Raoul J; Bouwmeester, Harro J

    2014-06-19

    Molecular characterization is an essential step of risk/safety assessment of genetically modified (GM) crops. Holistic approaches for molecular characterization using omics platforms can be used to confirm the intended impact of the genetic engineering, but can also reveal the unintended changes at the omics level as a first assessment of potential risks. The potential of omics platforms for risk assessment of GM crops has rarely been used for this purpose because of the lack of a consensus reference and statistical methods to judge the significance or importance of the pleiotropic changes in GM plants. Here we propose a meta data analysis approach to the analysis of GM plants, by measuring the transcriptome distance to untransformed wild-types. In the statistical analysis of the transcriptome distance between GM and wild-type plants, values are compared with naturally occurring transcriptome distances in non-GM counterparts obtained from a database. Using this approach we show that the pleiotropic effect of genes involved in indirect insect defence traits is substantially equivalent to the variation in gene expression occurring naturally in Arabidopsis. Transcriptome distance is a useful screening method to obtain insight in the pleiotropic effects of genetic modification.

  4. NASA Hazard Analysis Process

    NASA Technical Reports Server (NTRS)

    Deckert, George

    2010-01-01

    This viewgraph presentation reviews The NASA Hazard Analysis process. The contents include: 1) Significant Incidents and Close Calls in Human Spaceflight; 2) Subsystem Safety Engineering Through the Project Life Cycle; 3) The Risk Informed Design Process; 4) Types of NASA Hazard Analysis; 5) Preliminary Hazard Analysis (PHA); 6) Hazard Analysis Process; 7) Identify Hazardous Conditions; 8) Consider All Interfaces; 9) Work a Preliminary Hazard List; 10) NASA Generic Hazards List; and 11) Final Thoughts

  5. Modeling Commercial Turbofan Engine Icing Risk With Ice Crystal Ingestion

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.

    2013-01-01

    The occurrence of ice accretion within commercial high-bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partial melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components, which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration of compressor and engine performance. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which had previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud, which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict that ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. The PSL test has helped to calibrate the engine icing computational tool to assess the risk of ice accretion. The results from the computer simulation identified prevalent trends in wet bulb temperature, ice particle melt ratio, and engine inlet temperature as functions of altitude for predicting engine icing risk due to ice crystal ingestion.
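
    A deliberately simplified screening sketch inspired by the reported trends is shown below: icing risk is flagged when an assumed wet-bulb temperature band and an assumed partial-melt band are both met. The thresholds, station names, and values are placeholders, not the calibrated outputs of the NASA tool.

    ```python
    # Toy icing-risk screen: mixed ice/liquid near freezing is treated as the risky regime.
    def icing_risk_flag(wet_bulb_temp_c, melt_ratio):
        """melt_ratio: assumed fraction of ingested ice mass that has melted (0-1)."""
        temp_in_band = -2.0 <= wet_bulb_temp_c <= 5.0      # placeholder band around freezing
        mixed_phase = 0.1 <= melt_ratio <= 0.6             # some liquid to stick, some ice to accrete
        return temp_in_band and mixed_phase

    # Hypothetical conditions at three compressor stations
    stations = {"fan exit": (-6.0, 0.02), "LPC stage 2": (1.5, 0.25), "HPC inlet": (12.0, 0.95)}
    for station, (twb, melt) in stations.items():
        print(station, "->", "icing risk" if icing_risk_flag(twb, melt) else "low risk")
    ```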

  6. The ARGO Project: assessing NA-TECH risks on off-shore oil platforms

    NASA Astrophysics Data System (ADS)

    Capuano, Paolo; Basco, Anna; Di Ruocco, Angela; Esposito, Simona; Fusco, Giannetta; Garcia-Aristizabal, Alexander; Mercogliano, Paola; Salzano, Ernesto; Solaro, Giuseppe; Teofilo, Gianvito; Scandone, Paolo; Gasparini, Paolo

    2017-04-01

    ARGO (Analysis of natural and anthropogenic risks on off-shore oil platforms) is a two-year project funded by the DGS-UNMIG (Directorate General for Safety of Mining and Energy Activities - National Mining Office for Hydrocarbons and Georesources) of the Italian Ministry of Economic Development. The project, coordinated by AMRA (Center for the Analysis and Monitoring of Environmental Risk), aims at providing technical support for the analysis of natural and anthropogenic risks on offshore oil platforms. In order to achieve this challenging objective, ARGO brings together climate experts, risk management experts, seismologists, geologists, chemical engineers, and earth and coastal observation experts. ARGO has developed methodologies for the probabilistic analysis of industrial accidents triggered by natural events (NA-TECH) on offshore oil platforms in the Italian seas, including extreme events related to climate change. Furthermore, the environmental effects of offshore activities have been investigated, including changes in seismicity and in the evolution of coastal areas close to offshore platforms. A probabilistic multi-risk framework has then been developed for the analysis of NA-TECH events on offshore installations for hydrocarbon extraction.

  7. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on the fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top-level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS), etc.). Hypothetical PSA results for a number of structural components, along with mitigation factors that would restrict a failure mode from propagating to a Loss of Mission (LOM) failure, were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to overall Mission Success (MS) are also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, the inclusion of new sensor-based fault detection, and other upgrades were evaluated in determining overall turbine engine reliability.
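
    The roll-up of component failure-time distributions into a system-level loss-of-mission estimate can be sketched with a small Monte Carlo as below; the Weibull parameters, propagation (mitigation) probabilities, and mission length are invented and do not correspond to the paper's models or to QRAS.

    ```python
    # Monte Carlo roll-up of hypothetical failure modes into a loss-of-mission (LOM) estimate.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000
    MISSION_HOURS = 3_000.0

    # Hypothetical failure modes: (Weibull shape, Weibull scale in hours, P(mode propagates to LOM))
    modes = {
        "turbine blade fatigue":  (2.5, 20_000.0, 0.4),
        "bearing creep":          (1.8, 35_000.0, 0.2),
        "controller electronics": (1.0, 50_000.0, 0.6),
    }

    lom = np.zeros(N, dtype=bool)
    for name, (shape, scale, p_propagate) in modes.items():
        time_to_failure = scale * rng.weibull(shape, N)
        fails_in_mission = time_to_failure < MISSION_HOURS
        propagates = rng.random(N) < p_propagate      # mitigation may contain the failure
        lom |= fails_in_mission & propagates

    print("estimated P(loss of mission):", lom.mean())
    ```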

  8. From cradle-to-grave at the nanoscale: gaps in U.S. regulatory oversight along the nanomaterial life cycle.

    PubMed

    Beaudrie, Christian E H; Kandlikar, Milind; Satterfield, Terre

    2013-06-04

    Engineered nanomaterials (ENMs) promise great benefits for society, yet our knowledge of potential risks and best practices for regulation are still in their infancy. Toward the end of better practices, this paper analyzes U.S. federal environmental, health, and safety (EHS) regulations using a life cycle framework. It evaluates their adequacy as applied to ENMs to identify gaps through which emerging nanomaterials may escape regulation from initial production to end-of-life. High scientific uncertainty, a lack of EHS and product data, inappropriately designed exemptions and thresholds, and limited agency resources are a challenge to both the applicability and adequacy of current regulations. The result is that some forms of engineered nanomaterials may escape federal oversight and rigorous risk review at one or more stages along their life cycle, with the largest gaps occurring at the postmarket stages, and at points of ENM release to the environment. Oversight can be improved through pending regulatory reforms, increased research and development for the monitoring, control, and analysis of environmental and end-of-life releases, introduction of periodic re-evaluation of ENM risks, and fostering a "bottom-up" stewardship approach to the responsible management of risks from engineered nanomaterials.

  9. Development of requirements on safety cases of machine industry products for power engineering

    NASA Astrophysics Data System (ADS)

    Aronson, K. E.; Brezgin, V. I.; Brodov, Yu. M.; Gorodnova, N. V.; Kultyshev, A. Yu.; Tolmachev, V. V.; Shablova, E. G.

    2016-12-01

    This article considers safety assurance for power engineering machinery in the design and production phases. The Federal Law "On Technical Regulation" and the Customs Union Technical Regulations "On Safety of Machinery and Equipment" are analyzed in their legal, technical, and economic aspects with regard to power engineering machine industry products. From the legal standpoint, it is noted that the practical enforcement of most norms of the Law "On Technical Regulation" makes it necessary to adopt subordinate statutory instruments that are currently unavailable; moreover, the current level of adoption of technical regulations leaves much to be desired. The intensive integration processes observed in the Eurasian Region in recent years have made it a more pressing task to harmonize the laws of the region's countries, including their technical regulation frameworks. The technical aspect of the Customs Union's technical regulation has been appraised by the IDEF0 functional modeling method. The object of research is a steam turbine plant produced at a turbine works. When developing the described model, we considered the elaboration of safety case (SC) requirements from the standpoint of the chief designer of the turbine works, as the person generally responsible for the elaboration of the SC document. The economic context relies on risk analysis and appraisal methods. These, in turn, are determined by the purposes and objectives of the analysis, the complexity of the considered objects, the availability of required data, and the expertise of the specialists hired to conduct the analysis. The article proposes describing all sources of hazard and the scenarios of their actualization in all phases of the machinery life cycle for safety assurance purposes. The identification of risks and hazards allows a list of unwanted events to be formed. This list describes the sources of hazard, the various risk factors, the conditions for their rise and development, tentative risk appraisals, and tentative guidelines for reducing hazard and risk levels.

  10. Existential risks: exploring a robust risk reduction strategy.

    PubMed

    Jebari, Karim

    2015-06-01

    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event; neither can probabilistic risk analysis. This paper argues that the approach referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It also proposes a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this implementation of the engineering safety strategy (safety barriers) would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible (known and unknown) scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.

  11. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computational technique for structural stress analysis developed in engineering mechanics. Over the past 40 years, FEA has been applied to investigate the structural behavior of human bones. As faster computers became available, improved FEA based on 3-dimensional computed tomography (CT) data was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA that evaluate its accuracy and reliability in human bones, and clinical application studies assessing fracture risk and the effects of osteoporosis medication are overviewed. PMID:26309819
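
    As a rough illustration of the kind of preprocessing step a CT/FEA pipeline relies on, the sketch below maps CT Hounsfield units to a bone density estimate and then to an element-wise Young's modulus via an empirical power law. The calibration slope and intercept and the power-law coefficients are illustrative assumptions only; published studies calibrate them per scanner, anatomical site, and density definition.

    ```python
    import numpy as np

    def hu_to_density(hu, slope=0.0008, intercept=0.01):
        """Linear calibration from CT Hounsfield units to density (g/cm^3).
        Slope/intercept are placeholders; real pipelines calibrate with a phantom."""
        return slope * hu + intercept

    def density_to_modulus(rho, a=10500.0, b=2.29):
        """Empirical power-law mapping density to Young's modulus (MPa).
        Coefficients are illustrative, not from a specific validation study."""
        return a * np.power(np.clip(rho, 1e-6, None), b)

    # Assign an element-wise modulus for a voxel-based FE mesh from a CT volume.
    hu_volume = np.random.default_rng(0).integers(-100, 1500, size=(8, 8, 8))
    E = density_to_modulus(hu_to_density(hu_volume))
    print(E.min(), E.max())
    ```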

  12. Evolution of seismic risk management for insurance over the past 30 years

    NASA Astrophysics Data System (ADS)

    Shah, Haresh C.; Dong, Weimin; Stojanovski, Pane; Chen, Alex

    2018-01-01

    During the past 30 years, there has been spectacular growth in the use of risk analysis and risk management tools developed by engineers in the financial and insurance sectors. The insurance, reinsurance, and investment banking sectors have enthusiastically adopted loss estimation tools developed by engineers in shaping their business strategies and managing their financial risks. As a result, insurance/reinsurance strategy has evolved into a major risk mitigation tool for managing catastrophe risk at the individual, corporate, and government level. This is particularly true in developed countries such as the US, Western Europe, and Japan. Unfortunately, it has not received the needed attention in developing countries, where such a strategy for risk management is most needed. Fortunately, in the last five years there has been a strong focus on developing "InsurTech" tools to address the much needed "Insurance for the Masses", especially for the Asian markets. In the earlier years of catastrophe model development, risk analysts were mainly concerned with risk reduction options through engineering strategies, and relatively little attention was given to financial and economic strategies. Such a state of affairs still exists in many developing countries. New developments in the science and technology of loss estimation for natural catastrophes have made it possible for financial sectors to model business strategies such as peril and geographic diversification, premium calculation, reserve strategies, reinsurance contracts, and other underwriting tools. These developments have not only changed the way financial sectors assess and manage their risks, but have also changed the domain of opportunities for engineers and scientists. This paper will address the issues related to developing insurance/reinsurance strategies to mitigate catastrophe risks and describe the role catastrophe risk insurance and reinsurance has played in managing financial risk due to natural catastrophes. Historical losses and the share of those losses covered by insurance will be presented. How such risk sharing can help a nation spread the burden of losses among the tax-paying public, "at risk" property owners, insurers, and reinsurers will be discussed. The paper will summarize the tools used by insurance and reinsurance companies to estimate their future losses due to catastrophic natural events, and will show how the results of loss estimation technologies developed by engineers are communicated to the business flow of insurance/reinsurance companies. Finally, to make it possible to grow "Insurance for the Masses" (IFM), the role played by parametric insurance products and InsurTech tools will be discussed.
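
    As a minimal sketch of the loss-estimation arithmetic that insurance and reinsurance strategies build on, the snippet below turns a hypothetical event-loss table into an average annual loss and an occurrence exceedance probability, assuming Poisson event arrivals. The rates and losses are invented for illustration and are not from the paper.

    ```python
    import numpy as np

    # Hypothetical event-loss table: annual occurrence rate and loss (millions) per event.
    rates  = np.array([0.10, 0.05, 0.02, 0.005])   # events per year
    losses = np.array([50.0, 200.0, 800.0, 3000.0])

    # Average annual loss (the "pure premium" before loadings).
    aal = np.sum(rates * losses)

    # Occurrence exceedance probability, assuming Poisson arrivals:
    # P(at least one event with loss >= x in a year) = 1 - exp(-sum of rates of such events).
    def exceedance_prob(x):
        return 1.0 - np.exp(-rates[losses >= x].sum())

    print(f"AAL = {aal:.1f}M, P(an event >= 800M this year) = {exceedance_prob(800):.3f}")
    ```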

  13. Ethical Risk Management Education in Engineering: A Systematic Review.

    PubMed

    Guntzburger, Yoann; Pauchant, Thierry C; Tanguy, Philippe A

    2017-04-01

    Risk management is certainly one of the most important professional responsibilities of an engineer. As such, this activity needs to be combined with complex ethical reflections, and this requirement should therefore be explicitly integrated in engineering education. In this article, we analyse how this nexus between ethics and risk management is expressed in the engineering education research literature. It was done by reviewing 135 articles published between 1980 and March 1, 2016. These articles have been selected from 21 major journals that specialize in engineering education, engineering ethics and ethics education. Our review suggests that risk management is mostly used as an anecdote or an example when addressing ethics issues in engineering education. Further, it is perceived as an ethical duty or requirement, achieved through rational and technical methods. However, a small number of publications do offer some critical analyses of ethics education in engineering and their implications for ethical risk and safety management. Therefore, we argue in this article that the link between risk management and ethics should be further developed in engineering education in order to promote the progressive change toward more socially and environmentally responsible engineering practices. Several research trends and issues are also identified and discussed in order to support the engineering education community in this project.

  14. Development, modeling, simulation, and testing of a novel propane-fueled Brayton-Gluhareff cycle acoustically-pressurized ramjet engine

    NASA Astrophysics Data System (ADS)

    Bramlette, Richard B.

    In the 1950s, Eugene Gluhareff built the first working "pressure jet" engine, a variation on the classical ramjet engine with a pressurized inlet system relying on sonic tuning, which allowed operation at subsonic speeds. The engine was an unqualified success. Unfortunately, after decades of sales and research, Gluhareff passed away leaving behind no significant published studies of the engine or detailed analysis of its operation. The design was at serious risk of being lost to history. This dissertation addresses that risk by studying a novel subscale modification of Gluhareff's original design operating on the same principles. Included is a background of related engines and how the pressure jet is distinct. The preliminary sizing of a pressure jet using closed-form expressions is then discussed, followed by a review of propane oxidation modeling, how it integrates into the Computational Fluid Dynamics (CFD) solver, and the modeling of the pressure jet engine cycle with CFD. The simulation was matched to experimental data recorded on a purpose-built test stand measuring chamber pressure, exhaust speed (via a Pitot/static system), temperatures, and thrust force. The engine CFD simulation produced a wide range of qualitative results that matched the experimental data well and suggested strong recirculation flows through the engine, confirming suspicions about how the engine operates. Engine operating frequency in CFD and experiment also showed good agreement and appeared to be driven by the "Kadenacy effect." Lastly, the research effort opens the door for further study of the engine cycle, the use of pressurized intakes to produce static thrust in a ramjet engine, the Gluhareff pressure jet's original geometry, and a wide array of potential applications. A roadmap of further study and applications is detailed, including modeling and testing of larger engines.
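
    For context on the kind of closed-form preliminary sizing mentioned above, the sketch below is a generic momentum-balance thrust estimate for a ramjet-type engine. It is not Gluhareff's specific sizing method, and every input value is assumed for illustration.

    ```python
    # Generic momentum-balance thrust estimate: F = mdot_air * (Ve - V0) + mdot_fuel * Ve,
    # neglecting the exit pressure-area term. All values below are assumptions.
    mdot_air = 0.25      # kg/s, assumed inlet air mass flow
    fa_ratio = 0.06      # assumed propane fuel-air ratio
    V0 = 0.0             # m/s, static test condition
    Ve = 550.0           # m/s, assumed exhaust velocity (e.g., from Pitot/static data)

    mdot_fuel = fa_ratio * mdot_air
    thrust = mdot_air * (Ve - V0) + mdot_fuel * Ve
    print(f"estimated static thrust ~ {thrust:.1f} N")
    ```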

  15. Structural Analysis Made 'NESSUSary'

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Everywhere you look, chances are something that was designed and tested by a computer will be in plain view. Computers are now utilized to design and test just about everything imaginable, from automobiles and airplanes to bridges and boats, and elevators and escalators to streets and skyscrapers. Computer-design engineering first emerged in the 1970s, in the automobile and aerospace industries. Since computers were in their infancy, however, architects and engineers at the time were limited to producing only designs similar to hand-drafted drawings. (At the end of the 1970s, a typical computer-aided design system was a 16-bit minicomputer with a price tag of $125,000.) Eventually, computers became more affordable and related software became more sophisticated, offering designers the "bells and whistles" to go beyond the limits of basic drafting and rendering, and venture into more skillful applications. One of the major advancements was the ability to test the objects being designed for the probability of failure. This advancement was especially important for the aerospace industry, where complicated and expensive structures are designed. The ability to perform reliability and risk assessment without extensive hardware testing is critical to design and certification. In 1984, NASA initiated the Probabilistic Structural Analysis Methods (PSAM) project at Glenn Research Center to develop analysis methods and computer programs for the probabilistic structural analysis of select engine components for the current Space Shuttle and future space propulsion systems. NASA envisioned that these methods and computational tools would play a critical role in establishing increased system performance and durability, and assist in structural system qualification and certification. Not only was the PSAM project beneficial to aerospace, it paved the way for a commercial risk-probability tool that is evaluating risks in diverse, down-to-Earth applications.

  16. A Bayesian Framework for Analysis of Pseudo-Spatial Models of Comparable Engineered Systems with Application to Spacecraft Anomaly Prediction Based on Precedent Data

    NASA Astrophysics Data System (ADS)

    Ndu, Obibobi Kamtochukwu

    To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract but fundamentally human cognitive ability and extends it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted based on degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perceptions of design similarity between systems, which elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.
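
    A minimal sketch of the general idea, not the dissertation's actual formulation: embed expert-derived dissimilarities with non-metric multidimensional scaling, then use a Gaussian-process regressor as a Kriging-style predictor over the resulting pseudo-space. The dissimilarity matrix and anomaly rates below are invented placeholders.

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical rank-order-derived dissimilarities: 5 precedent systems + 1 new concept.
    D = np.array([
        [0, 1, 2, 3, 4, 2],
        [1, 0, 1, 2, 3, 2],
        [2, 1, 0, 1, 2, 3],
        [3, 2, 1, 0, 1, 3],
        [4, 3, 2, 1, 0, 4],
        [2, 2, 3, 3, 4, 0],
    ], dtype=float)

    # Embed the family in a 2-D "pseudo-space" via non-metric MDS.
    coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)

    # Illustrative observed anomaly rates for the 5 precedent systems; predict the concept.
    y = np.array([0.12, 0.10, 0.08, 0.15, 0.20])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                  normalize_y=True).fit(coords[:5], y)
    mean, std = gp.predict(coords[5:6], return_std=True)
    print(f"predicted anomaly rate: {mean[0]:.3f} +/- {std[0]:.3f}")
    ```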

  17. Risk of large oil spills: a statistical analysis in the aftermath of Deepwater Horizon.

    PubMed

    Eckle, Petrissa; Burgherr, Peter; Michaux, Edouard

    2012-12-04

    The oil spill in the Gulf of Mexico that followed the explosion of the exploration platform Deepwater Horizon on 20 April 2010 was the largest accidental oil spill so far. In this paper we evaluate the risk of such very severe oil spills based on global historical data from our Energy-Related Severe Accident Database (ENSAD) and investigate whether an accident of this size could have been "expected". We also compare the risk of oil spills from such accidents in exploration and production to accidental spills from other activities in the oil chain (tanker ship transport, pipelines, storage/refinery) and analyze the two components of risk, frequency and severity (quantity of oil spilled), separately. This detailed analysis reveals differences in the structure of the risk between spill sources and differences in trends over time, and in particular it allows assessing the risk of very severe events such as the Deepwater Horizon spill. Such top-down risk assessment can serve as an important input to decision making by complementing bottom-up engineering risk assessment, and can be combined with impact assessment in environmental risk analysis.
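
    A toy version of the frequency/severity decomposition described above, assuming Poisson spill arrivals and a lognormal severity distribution fitted to an invented spill record; the actual analysis uses the ENSAD database and need not use these distributional choices.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical historical record: spill sizes (thousand barrels) over a 40-year window.
    spills = np.array([3, 5, 8, 12, 20, 35, 60, 110, 250, 600], dtype=float)
    years = 40.0

    # Frequency: Poisson rate of spills above the reporting threshold.
    rate = len(spills) / years

    # Severity: lognormal fit to spill size (location fixed at zero).
    shape, loc, scale = stats.lognorm.fit(spills, floc=0)

    # Probability that a given spill exceeds a very severe threshold (500 thousand bbl),
    # and the annual probability of at least one such spill.
    p_severe = stats.lognorm.sf(500.0, shape, loc=loc, scale=scale)
    p_annual = 1.0 - np.exp(-rate * p_severe)
    print(f"P(spill > 500k bbl | spill) = {p_severe:.3f}, annual P = {p_annual:.3f}")
    ```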

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This document comprises Pacific Northwest National Laboratory's report for Fiscal Year 1996 on research and development programs. The document contains 161 project summaries in 16 areas of research and development. The 16 areas of research and development reported on are: atmospheric sciences, biotechnology, chemical instrumentation and analysis, computer and information science, ecological science, electronics and sensors, health protection and dosimetry, hydrological and geologic sciences, marine sciences, materials science and engineering, molecular science, process science and engineering, risk and safety analysis, socio-technical systems analysis, statistics and applied mathematics, and thermal and energy systems. In addition, this report provides an overview of the research and development program, program management, program funding, and Fiscal Year 1997 projects.

  19. Space Shuttle Systems Engineering Processes for Liftoff Debris Risk Mitigation

    NASA Technical Reports Server (NTRS)

    Mitchell, Michael; Riley, Christopher

    2011-01-01

    This slide presentation reviews the systems engineering process designed to reduce the risk from debris during Space Shuttle launches. This process covers the day of launch, from tanking to vehicle tower clearance. Other debris risks (i.e., ascent debris and micrometeoroid orbital debris) are mentioned but are not the subject of this presentation. The liftoff debris systems engineering process and an example of how it works are reviewed (i.e., STS-119 revealed a bolt liberation trend on the Fixed Service Structure (FSS) 275 level elevator room). The process includes preparation of a Certification of Flight Readiness (CoFR) confirming that (1) liftoff debris from the previous mission has been dispositioned, (2) flight acceptance rationale has been provided for liftoff debris sources/causes, and (3) liftoff debris mission support documentation, processes, and tools are in place for the upcoming mission. The process also includes liftoff debris data collection after each launch: a post-launch walkdown that records each item of liftoff debris and enters it into a database, a review of imagery from the launch, and a review of the instrumentation data. There is also a review of the debris transport analysis process, which includes a temporal and spatial framework and computational fluid dynamics (CFD) analysis, and which incorporates debris transport analyses (DTA), debris materials and impact tests, and impact analyses.

  20. Turbine Design and Analysis for the J-2X Engine Turbopumps

    NASA Technical Reports Server (NTRS)

    Marcu, Bogdan; Tran, Ken; Dorney, Daniel J.; Schmauch, Preston

    2008-01-01

    Pratt and Whitney Rocketdyne and NASA Marshall Space Flight Center are developing the advanced upper stage J-2X engine based on the legacy design of the J-2/J-2S family of engines which powered the Apollo missions. The cryogenic propellant turbopumps have been denoted as Mark72-F and Mark72-0 for the fuel and oxidizer side, respectively. Special attention is focused on preserving the essential flight-proven design features while adapting the design to the new turbopump configuration. Advanced 3-D CFD analysis has been employed to verify turbine aero performance at current flow regime boundary conditions and to mitigate risks associated with stresses. A limited amount of redesign and overall configuration modifications allow for a robust design with performance level matching or exceeding requirement.

  1. Energy efficient engine: Preliminary design and integration studies

    NASA Technical Reports Server (NTRS)

    Johnston, R. P.; Hirschkron, R.; Koch, C. C.; Neitzel, R. E.; Vinson, P. W.

    1978-01-01

    Parametric design and mission evaluations of advanced turbofan configurations were conducted for future transport aircraft application. Economics, environmental suitability and fuel efficiency were investigated and compared with goals set by NASA. Of the candidate engines which included mixed- and separate-flow, direct-drive and geared configurations, an advanced mixed-flow direct-drive configuration was selected for further design and evaluation. All goals were judged to have been met except the acoustic goal. Also conducted was a performance risk analysis and a preliminary aerodynamic design of the 10 stage 23:1 pressure ratio compressor used in the study engines.

  2. Space Launch System NASA Research Announcement Advanced Booster Engineering Demonstration and/or Risk Reduction

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Craig, Kellie D.

    2011-01-01

    The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) Engineering Demonstration and/or Risk Reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining Engineering Demonstration and/or Risk Reduction.

  3. Information and problem report usage in system safety engineering division

    NASA Technical Reports Server (NTRS)

    Morrissey, Stephen J.

    1990-01-01

    Five basic problems or question areas are examined. They are as follows: (1) evaluate the adequacy of the current problem/performance database; (2) evaluate methods of performing trend analysis; (3) identify methods and sources of data for probabilistic risk assessment; and (4) determine how risk assessment documentation is upgraded and/or updated. The fifth problem was to provide recommendations for each of the above four areas.

  4. Review of asset hierarchy criticality assessment and risk analysis practices.

    DOT National Transportation Integrated Search

    2014-01-01

    The MTA NYC Transit (NYCT) has begun an enterprise-wide Asset Management Improvement Program (AMIP). In : 2012, NYCT developed an executive-level concept of operations that defined a new asset management : framework following a systems engineering ap...

  5. Construction Management: Planning Ahead.

    ERIC Educational Resources Information Center

    Arsht, Steven

    2003-01-01

    Explains that preconstruction planning is essential when undertaking the challenges of a school building renovation or expansion, focusing on developing a detailed estimate, creating an effective construction strategy, conducting reviews and value-engineering workshops, and realizing savings through effective risk analysis and contingency…

  6. The true meaning of 'exotic species' as a model for genetically engineered organisms.

    PubMed

    Regal, P J

    1993-03-15

    The exotic or non-indigenous species model for deliberately introduced genetically engineered organisms (GEOs) has often been misunderstood or misrepresented. Yet proper comparisons of ecologically competent GEOs to the patterns of adaptation of introduced species have been highly useful among scientists in attempting to determine how to apply biological theory to specific GEO risk issues, and in attempting to define the probabilities and scale of ecological risks with GEOs. In truth, the model predicts that most projects may be environmentally safe, but a significant minority may be very risky. The model includes a history of institutional follies that also should remind workers of the danger of oversimplifying biological issues, and warn against repeating the sorts of professional misjudgements that have too often been made in introducing organisms to new settings. We once expected that the non-indigenous species model would be refined by more analysis of species eruptions, ecological genetics, and the biology of select GEOs themselves, as outlined. But there has been political resistance to the effective regulation of GEOs, and a bureaucratic tendency to focus research agendas on narrow data collection. Thus there has been too little promotion by responsible agencies of studies to provide the broad conceptual base for truly science-based regulation. In its presently unrefined state, the non-indigenous species comparison would overestimate the risks of GEOs if it were (mis)applied to genetically disrupted, ecologically crippled GEOs, but in some cases of wild-type organisms with novel engineered traits, it could greatly underestimate the risks. Further analysis is urgently needed.

  7. Cloud Geospatial Analysis Tools for Global-Scale Comparisons of Population Models for Decision Making

    NASA Astrophysics Data System (ADS)

    Hancher, M.; Lieber, A.; Scott, L.

    2017-12-01

    The volume of satellite and other Earth data is growing rapidly. Combined with information about where people are, these data can inform decisions in a range of areas including food and water security, disease and disaster risk management, biodiversity, and climate adaptation. Google's platform for planetary-scale geospatial data analysis, Earth Engine, grants access to petabytes of continually updating Earth data, programming interfaces for analyzing the data without the need to download and manage it, and mechanisms for sharing the analyses and publishing results for data-driven decision making. In addition to data about the planet, data about the human planet - population, settlement and urban models - are now available for global scale analysis. The Earth Engine APIs enable these data to be joined, combined or visualized with economic or environmental indicators such as nighttime lights trends, global surface water, or climate projections, in the browser without the need to download anything. We will present our newly developed application intended to serve as a resource for government agencies, disaster response and public health programs, or other consumers of these data to quickly visualize the different population models, and compare them to ground truth tabular data to determine which model suits their immediate needs. Users can further tap into the power of Earth Engine and other Google technologies to perform a range of analysis from simple statistics in custom regions to more complex machine learning models. We will highlight case studies in which organizations around the world have used Earth Engine to combine population data with multiple other sources of data, such as water resources and roads data, over deep stacks of temporal imagery to model disease risk and accessibility to inform decisions.
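
    A minimal Earth Engine Python API sketch of the kind of population query described above: summing a gridded population layer over a region of interest. The dataset ID, band contents, and coordinates are examples only and should be checked against the current Earth Engine data catalog; authentication is assumed to have been completed beforehand.

    ```python
    import ee

    ee.Initialize()  # assumes ee.Authenticate() has already been run for this account

    # Example population layer (verify the asset ID against the current catalog).
    pop = (ee.ImageCollection('WorldPop/GP/100m/pop')
           .filterDate('2020-01-01', '2021-01-01')
           .mosaic())

    # Region of interest: a 10 km buffer around an example point (lon, lat).
    region = ee.Geometry.Point([36.82, -1.29]).buffer(10000)

    # Server-side sum of the population raster over the region.
    total = pop.reduceRegion(reducer=ee.Reducer.sum(),
                             geometry=region,
                             scale=100,
                             maxPixels=1e9)
    print(total.getInfo())
    ```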

  8. Tailoring Systems Engineering Processes in a Conceptual Design Environment: A Case Study at NASA Marshall Spaceflight Center's ACO

    NASA Technical Reports Server (NTRS)

    Mulqueen, John; Maples, C. Dauphne; Fabisinski, Leo, III

    2012-01-01

    This paper provides an overview of Systems Engineering as it is applied in a conceptual design space systems department at the National Aeronautics and Space Administration (NASA) Marshall Space Flight Center (MSFC) Advanced Concepts Office (ACO). Engineering work performed in the NASA MSFC's ACO is targeted toward the Exploratory Research and Concepts Development life cycle stages, as defined in the International Council on Systems Engineering (INCOSE) System Engineering Handbook. This paper addresses three ACO Systems Engineering tools that correspond to three INCOSE Technical Processes: Stakeholder Requirements Definition, Requirements Analysis, and Integration, as well as one Project Process, Risk Management. These processes are used to facilitate, streamline, and manage systems engineering processes tailored for the earliest two life cycle stages, which is the environment in which ACO engineers work. The role of systems engineers and systems engineering as performed in ACO is explored in this paper. The need for tailoring Systems Engineering processes, tools, and products in the ever-changing engineering services ACO provides to its customers is addressed.

  9. Tools and Methods for Risk Management in Multi-Site Engineering Projects

    NASA Astrophysics Data System (ADS)

    Zhou, Mingwei; Nemes, Laszlo; Reidsema, Carl; Ahmed, Ammar; Kayis, Berman

    In today's highly global business environment, engineering and manufacturing projects often involve two or more geographically dispersed units or departments, research centers or companies. This paper attempts to identify the requirements for risk management in a multi-site engineering project environment, and presents a review of the state-of-the-art tools and methods that can be used to manage risks in multi-site engineering projects. This leads to the development of a risk management roadmap, which will underpin the design and implementation of an intelligent risk mapping system.

  10. Enhancing Interdisciplinary Human System Risk Research Through Modeling and Network Approaches

    NASA Technical Reports Server (NTRS)

    Mindock, Jennifer; Lumpkins, Sarah; Shelhamer, Mark

    2015-01-01

    NASA's Human Research Program (HRP) supports research to reduce human health and performance risks inherent in future human space exploration missions. Understanding risk outcomes and contributing factors in an integrated manner allows HRP research to support development of efficient and effective mitigations from cross-disciplinary perspectives, and to enable resilient human and engineered systems for spaceflight. The purpose of this work is to support scientific collaborations and research portfolio management by utilizing modeling for analysis and visualization of current and potential future interdisciplinary efforts.

  11. Advanced Technology Spark-Ignition Aircraft Piston Engine Design Study

    NASA Technical Reports Server (NTRS)

    Stuckas, K. J.

    1980-01-01

    The advanced technology, spark ignition, aircraft piston engine design study was conducted to determine the improvements that could be made by taking advantage of technology that could reasonably be expected to be made available for an engine intended for production by January 1, 1990. Two engines were proposed to account for levels of technology considered to be moderate risk and high risk. The moderate risk technology engine is a homogeneous charge engine operating on avgas and offers a 40% improvement in transportation efficiency over present designs. The high risk technology engine, with a stratified charge combustion system using kerosene-based jet fuel, projects a 65% improvement in transportation efficiency. Technology enablement program plans are proposed herein to set a timetable for the successful integration of each item of required advanced technology into the engine design.

  12. POLLUTION PREVENTION RESEARCH ONGOING - EPA'S RISK REDUCTION ENGINEERING LABORATORY

    EPA Science Inventory

    The mission of the Risk Reduction Engineering Laboratory is to advance the understanding, development and application of engineering solutions for the prevention or reduction of risks from environmental contamination. This mission is accomplished through basic and applied researc...

  13. Advanced Vibration Analysis Tool Developed for Robust Engine Rotor Designs

    NASA Technical Reports Server (NTRS)

    Min, James B.

    2005-01-01

    The primary objective of this research program is to develop vibration analysis tools, design tools, and design strategies to significantly improve the safety and robustness of turbine engine rotors. Bladed disks in turbine engines always feature small, random blade-to-blade differences, or mistuning. Mistuning can lead to a dramatic increase in blade forced-response amplitudes and stresses. Ultimately, this results in high-cycle fatigue, which is a major safety and cost concern. In this research program, the necessary steps will be taken to transform a state-of-the-art vibration analysis tool, the Turbo-Reduce forced-response prediction code, into an effective design tool by enhancing and extending the underlying modeling and analysis methods. Furthermore, novel techniques will be developed to assess the safety of a given design. In particular, a procedure will be established for using natural-frequency curve veerings to identify ranges of operating conditions (rotational speeds and engine orders) in which there is a great risk that the rotor blades will suffer high stresses. This work also will aid statistical studies of the forced response by reducing the necessary number of simulations. Finally, new strategies for improving the design of rotors will be pursued.
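
    A toy two-degree-of-freedom illustration of the frequency-veering behavior mentioned above: as the stiffness of one "blade" is swept through that of the other, nonzero coupling makes the natural frequencies approach and then diverge rather than cross. This is only a conceptual sketch, not the Turbo-Reduce model, and the stiffness values are arbitrary.

    ```python
    import numpy as np

    # Two coupled unit-mass oscillators; k2 is swept through k1. With coupling kc > 0
    # the two natural frequencies veer apart near k2 = k1 instead of crossing.
    k1, kc = 1.0, 0.05
    for k2 in np.linspace(0.8, 1.2, 9):
        K = np.array([[k1 + kc, -kc],
                      [-kc, k2 + kc]])
        w = np.sqrt(np.linalg.eigvalsh(K))   # unit masses -> natural frequencies
        print(f"k2 = {k2:.2f}  frequencies = {w[0]:.4f}, {w[1]:.4f}")
    ```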

  14. Applicability of risk-based management and the need for risk-based economic decision analysis at hazardous waste contaminated sites.

    PubMed

    Khadam, Ibrahim; Kaluarachchi, Jagath J

    2003-07-01

    Decision analysis in subsurface contamination management is generally carried out through a traditional engineering economic viewpoint. However, new advances in human health risk assessment, namely, the probabilistic risk assessment, and the growing awareness of the importance of soft data in the decision-making process, require decision analysis methodologies that are capable of accommodating non-technical and politically biased qualitative information. In this work, we discuss the major limitations of the currently practiced decision analysis framework, which revolves around the definition of risk and cost of risk, and its poor ability to communicate risk-related information. A demonstration using a numerical example was conducted to provide insight on these limitations of the current decision analysis framework. The results from this simple ground water contamination and remediation scenario were identical to those obtained from studies carried out on existing Superfund sites, which suggests serious flaws in the current risk management framework. In order to provide a perspective on how these limitations may be avoided in future formulation of the management framework, more mature and well-accepted approaches to decision analysis in dam safety and the utility industry, where public health and public investment are of great concern, are presented and their applicability in subsurface remediation management is discussed. Finally, in light of the success of the application of risk-based decision analysis in dam safety and the utility industry, potential options for decision analysis in subsurface contamination management are discussed.

  15. The Effect of Modified Control Limits on the Performance of a Generic Commercial Aircraft Engine

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; May, Ryan D.; Gou, Ten-Huei; Litt, Jonathan S.

    2012-01-01

    This paper studies the effect of modifying the control limits of an aircraft engine to obtain additional performance. In an emergency situation, the ability to operate an engine above its normal operating limits and thereby gain additional performance may aid in the recovery of a distressed aircraft. However, the modification of an engine's limits is complex due to the risk of an engine failure. This paper focuses on the tradeoff between enhanced performance and the risk of either incurring a mechanical engine failure or compromising engine operability. The ultimate goal is to increase the engine performance, without a large increase in risk of an engine failure, in order to increase the probability of recovering the distressed aircraft. The control limit modifications proposed are to extend the rotor speeds, temperatures, and pressures to allow more thrust to be produced by the engine, or to increase the rotor accelerations and allow the engine to follow a fast transient. These modifications do result in increased performance; however, this study indicates that they also lead to an increased risk of engine failure.

  16. C-Band Airport Surface Communications System Engineering-Initial High-Level Safety Risk Assessment and Mitigation

    NASA Technical Reports Server (NTRS)

    Zelkin, Natalie; Henriksen, Stephen

    2011-01-01

    This document is being provided as part of ITT's NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract: "New ATM Requirements--Future Communications, C-Band and L-Band Communications Standard Development." ITT has completed a safety hazard analysis providing a preliminary safety assessment for the proposed C-band (5091- to 5150-MHz) airport surface communication system. The assessment was performed following the guidelines outlined in the Federal Aviation Administration Safety Risk Management Guidance for System Acquisitions document. The safety analysis did not identify any hazards with an unacceptable risk, though a number of hazards with a medium risk were documented. This effort represents an initial high-level safety hazard analysis and notes the triggers for risk reassessment. A detailed safety hazards analysis is recommended as a follow-on activity to assess particular components of the C-band communication system after the profile is finalized and system rollout timing is determined. A security risk assessment has been performed by NASA as a parallel activity. While safety analysis is concerned with a prevention of accidental errors and failures, the security threat analysis focuses on deliberate attacks. Both processes identify the events that affect operation of the system; and from a safety perspective the security threats may present safety risks.
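
    A minimal sketch of the severity-likelihood classification that underlies this kind of hazard assessment. The category names and score thresholds below are illustrative assumptions, not the official FAA Safety Risk Management matrix used in the assessment.

    ```python
    # Toy risk matrix: map 1-5 severity and likelihood indices to a qualitative level.
    SEVERITY = ["minimal", "minor", "major", "hazardous", "catastrophic"]          # 1..5
    LIKELIHOOD = ["extremely improbable", "extremely remote", "remote",
                  "probable", "frequent"]                                          # 1..5

    def risk_level(severity, likelihood):
        """Return a qualitative risk level; thresholds here are illustrative only."""
        score = severity * likelihood
        if score >= 15:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    # Print the full matrix, one row per severity category.
    for sev in range(1, 6):
        print(SEVERITY[sev - 1], [risk_level(sev, lik) for lik in range(1, 6)])
    ```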

  17. Biopreparedness in the Age of Genetically Engineered Pathogens and Open Access Science: An Urgent Need for a Paradigm Shift.

    PubMed

    MacIntyre, C Raina

    2015-09-01

    Our systems, thinking, training, legislation, and policies are lagging far behind momentous changes in science, and leaving us vulnerable in biosecurity. Synthetic viruses and genetic engineering of pathogens are a reality, with a rapid acceleration of dual-use science. The public availability of methods for dual-use genetic engineering, coupled with the insider threat, poses an unprecedented risk for biosecurity. Case studies including the 1984 Rajneesh salmonella bioterrorism attack and the controversy over engineered transmissible H5N1 influenza are analyzed. Simple probability analysis shows that the risks of dual-use research are likely to outweigh potential benefits, yet this type of analysis has not been done to date. Many bioterrorism agents may also occur naturally. Distinguishing natural from unnatural epidemics is far more difficult than other types of terrorism. Public health systems do not have mechanisms for routinely considering bioterrorism, and an organizational culture that is reluctant to consider it. A collaborative model for flagging aberrant outbreak patterns and referral from the health to security sectors is proposed. Vulnerabilities in current approaches to biosecurity need to be reviewed and strengthened collaboratively by all stakeholders. New systems, legislation, collaborative operational models, and ways of thinking are required to effectively address the threat to global biosecurity. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  18. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.
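
    A minimal Monte Carlo sketch of the underlying idea: probabilistic inputs propagated through a response model to estimate a failure probability. The one-line stress model and all distributions below are invented for illustration and are not the NESSUS formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    # Illustrative uncertain inputs for a simplified blade-root stress check:
    # stress = load / area, compared against an uncertain material strength.
    load = rng.normal(70e3, 7e3, n)                     # N
    area = rng.normal(1.0e-4, 0.05e-4, n)               # m^2
    strength = rng.lognormal(np.log(900e6), 0.07, n)    # Pa

    stress = load / area
    p_fail = np.mean(stress > strength)                 # probability stress exceeds strength
    print(f"estimated P(stress > strength) ~ {p_fail:.3f}")
    ```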

  19. Failure Mode and Effects Analysis (FMEA) Introductory Overview

    DTIC Science & Technology

    2012-06-14

    Failure Mode and Effects Analysis (FMEA) Introductory Overview. TARDEC Systems Engineering Risk Management Team; POC: Kadry Rizk or Gregor Ratajczak. Briefing charts (dates covered 01-05-2012 to 23-05-2012) introducing the use and benefits of FMEA.
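
    A minimal sketch of the standard FMEA arithmetic: ranking failure modes by risk priority number (RPN = severity x occurrence x detection, each scored 1-10). The failure modes and scores below are illustrative placeholders.

    ```python
    # Minimal FMEA worksheet with invented entries.
    failure_modes = [
        {"mode": "seal leak",       "sev": 7, "occ": 4, "det": 3},
        {"mode": "bolt liberation", "sev": 9, "occ": 2, "det": 5},
        {"mode": "sensor drift",    "sev": 4, "occ": 6, "det": 2},
    ]

    # RPN = severity x occurrence x detection.
    for fm in failure_modes:
        fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

    # Rank by RPN to prioritize corrective actions.
    for fm in sorted(failure_modes, key=lambda x: x["rpn"], reverse=True):
        print(f'{fm["mode"]:<16} RPN = {fm["rpn"]}')
    ```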

  20. Developing an industry-oriented safety curriculum using the Delphi technique.

    PubMed

    Chen, Der-Fa; Wu, Tsung-Chih; Chen, Chi-Hsiang; Chang, Shu-Hsuan; Yao, Kai-Chao; Liao, Chin-Wen

    2016-09-01

    In this study, we examined the development of industry-oriented safety degree curricula at a college level. Based on a review of literature on the practices and study of the development of safety curricula, we classified occupational safety and health curricula into the following three domains: safety engineering, health engineering, and safety and health management. We invited 44 safety professionals to complete a four-round survey that was designed using a modified Delphi technique. We used Chi-square statistics to test the panel experts' consensus on the significance of the items in the three domains and employed descriptive statistics to rank the participants' rating of each item. The results showed that the top three items for each of the three domains were Risk Assessment, Dangerous Machinery and Equipment, and Fire and Explosion Prevention for safety engineering; Ergonomics, Industrial Toxicology, and Health Risk Assessment for health engineering; and Industrial Safety and Health Regulations, Accident Investigation and Analysis, and Emergency Response for safety and health management. Only graduates from safety programmes who possess practical industry-oriented abilities can satisfy industry demands and provide value to the existence of college safety programmes.
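
    An illustration of the kind of consensus test described: a chi-square goodness-of-fit check of 44 panel ratings for one curriculum item against a uniform, no-consensus distribution. The rating counts are invented, and the paper's exact test formulation may differ.

    ```python
    from scipy import stats

    # Hypothetical counts of 1-5 importance ratings from 44 experts for one item.
    observed = [1, 2, 6, 15, 20]
    expected = [sum(observed) / 5] * 5          # uniform split = no consensus

    chi2, p = stats.chisquare(observed, f_exp=expected)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")    # a small p suggests the panel converged
    ```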

  1. Improved FTA methodology and application to subsea pipeline reliability design.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form.

  2. Improved FTA Methodology and Application to Subsea Pipeline Reliability Design

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form. PMID:24667681
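
    For contrast with the FET approach described in these two records, the sketch below shows the classical fault tree arithmetic it builds on: combining independent basic-event probabilities through AND/OR gates up to a top event. The events and probabilities are invented for illustration.

    ```python
    # Classical fault tree gate arithmetic, assuming independent basic events.
    def AND(*probs):
        p = 1.0
        for q in probs:
            p *= q
        return p

    def OR(*probs):
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p

    # Illustrative basic events for a subsea pipeline: corrosion, weld defect,
    # anchor impact, and failure to detect corrosion in time.
    p_corrosion, p_weld, p_anchor, p_no_detect = 0.02, 0.01, 0.005, 0.1

    # Top event: loss of containment.
    p_top = OR(AND(p_corrosion, p_no_detect), p_weld, p_anchor)
    print(f"P(top event) = {p_top:.4f}")
    ```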

  3. Engineered nanoparticles at the workplace: current knowledge about workers' risk.

    PubMed

    Pietroiusti, A; Magrini, A

    2014-07-01

    The novel physicochemical properties of engineered nanoparticles (ENPs) make them very attractive for industrial and biomedical purposes, but concerns have been raised regarding unpredictable adverse health effects in humans. Current evidence for the risk posed by ENPs to exposed workers is the subject of this review, the aim being an in-depth review of the state of the art of nanoparticle exposure at work. Original articles and reviews in Pubmed and in the principal databases of medical literature up to 2013 were included in the analysis. In addition, grey literature released by qualified regulatory agencies and by governmental and non-governmental organizations was also taken into consideration. There are significant knowledge and technical gaps to be filled for a reliable evaluation of the risk posed to workers by ENPs. Evidence for potential workplace release of ENPs however seems substantial, and the amount of exposure may exceed the proposed occupational exposure limits (OELs). The rational use of conventional engineering measures and of personal protective equipment seems to mitigate the risk. A precautionary approach is recommended for workplace exposure to ENPs, until health-based OELs are developed and released by official regulatory agencies. © The Author 2014. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Microeconomic analysis of military aircraft bearing restoration

    NASA Technical Reports Server (NTRS)

    Hein, G. F.

    1976-01-01

    The risk and cost of a bearing restoration by grinding program were analyzed, and a microeconomic impact analysis was performed. The annual cost savings to U.S. Army aviation is approximately $950,000.00 for three engines and three transmissions. The capital value over an indefinite life is approximately ten million dollars. The annual cost savings for U.S. Air Force engines is approximately $313,000.00, with a capital value of approximately 3.1 million dollars. The program will result in the government obtaining bearings at lower cost with equivalent reliability. The bearing industry can recover lost profits during a period of reduced demand and higher costs.
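
    A back-of-the-envelope reading of the figures above: if the quoted capital values are treated as present values of perpetuities, both imply a discount rate near 10%. The report does not state its discount rate, so this is an inferred consistency check, not a reported result.

    ```python
    # If capital value over an indefinite life is the present value of a perpetuity,
    # PV = annual_savings / r, then the implied discount rate is r = annual_savings / PV.
    for label, annual, pv in [("Army", 950_000, 10_000_000), ("Air Force", 313_000, 3_100_000)]:
        print(f"{label}: implied discount rate r = {annual / pv:.1%}")
    # Both come out near 10%, consistent with a single discount-rate assumption in the study.
    ```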

  5. Impacts of climate change on coastal flood risk in England and Wales: 2030-2100.

    PubMed

    Hall, Jim W; Sayers, Paul B; Walkden, Mike J A; Panzeri, Mike

    2006-04-15

    Coastal flood risk is a function of the probability of coastal flooding and the consequential damage. Scenarios of potential changes in coastal flood risk due to changes in climate, society and the economy over the twenty-first century have been analysed using a national-scale quantified flood risk analysis methodology. If it is assumed that there will be no adaptation to increasing coastal flood risk, the expected annual damage in England and Wales due to coastal flooding is predicted to increase from the current £0.5 billion to between £1.0 billion and £13.5 billion, depending on the scenario of climate and socio-economic change. The proportion of national flood risk that is attributable to coastal flooding is projected to increase from roughly 50% to between 60 and 70%. Scenarios of adaptation to increasing risk, by construction of coastal dikes or retreat from coastal floodplains, are analysed. These adaptations are shown to be able to reduce coastal flood risk to between £0.2 billion and £0.8 billion. The capital cost of the associated coastal engineering works is estimated to be between £12 billion and £40 billion. Non-structural measures to reduce risk can make a major contribution to reducing the cost and environmental impact of engineering measures.
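
    A minimal sketch of how an expected annual damage figure of this kind can be computed, by integrating damage over the annual exceedance probability curve with the trapezoidal rule. The probabilities and damage values below are invented, not the study's.

    ```python
    import numpy as np

    # Illustrative damage vs. annual exceedance probability curve (damages in billions).
    exceedance_prob = np.array([0.1, 0.02, 0.01, 0.002, 0.001])   # 10-yr ... 1000-yr events
    damage          = np.array([0.1, 0.8,  1.5,  4.0,   7.0])

    # Expected annual damage = area under the damage-probability curve (trapezoidal rule).
    ead = np.sum(0.5 * (damage[:-1] + damage[1:])
                 * (exceedance_prob[:-1] - exceedance_prob[1:]))
    print(f"EAD ~ {ead:.3f} billion per year")
    ```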

  6. Engineering and Safety Partnership Enhances Safety of the Space Shuttle Program (SSP)

    NASA Technical Reports Server (NTRS)

    Duarte, Alberto

    2007-01-01

    Project Management must use risk assessment documents (RADs) as tools to support its decision-making process. Therefore, these documents have to be initiated, developed, and evolved in parallel with the life of the project. Technical preparation and safety compliance of these documents require a great deal of resources. Updating these documents after the fact not only requires a substantial increase in resources (project cost), but is also of little use and is perhaps an unnecessary expense. Hazard Reports (HRs), Failure Modes and Effects Analysis (FMEAs), Critical Items Lists (CILs), and the Risk Management process are, among others, within this category. A positive action resulting from a strong partnership between interested parties is one way to get these documents and related processes and requirements released and updated in useful time. The Space Shuttle Program (SSP) at the Marshall Space Flight Center has implemented a process which is having positive results and gaining acceptance within the Agency. A hybrid panel, with equal interest and responsibilities for the two larger organizations, Safety and Engineering, is the focal point of this process. Called the Marshall Safety and Engineering Review Panel (MSERP), its charter (Space Shuttle Program Directive 110 F, April 15, 2005) and its Operating Control Plan emphasize the technical and safety responsibilities over the program risk documents: HRs; FMEA/CILs; engineering changes; anomaly/problem resolutions and corrective action implementations; and trend analysis. The MSERP has undertaken its responsibilities with objectivity, assertiveness, and dedication; has operated with focus; and has shown significant results and promising perspectives. The MSERP has been deeply involved in propulsion systems and integration, real-time technical issues, and other relevant reviews since its conception. These activities have transformed the propulsion MSERP into a truly participative and value-added panel, making a difference for the safety of the Space Shuttle vehicle, its crew, and personnel. Because of the MSERP's valuable contribution to the assessment of safety risk for the SSP, this paper also proposes an enhanced panel concept that takes this successful partnership concept to a higher level of 'true partnership'. The proposed panel is intended to be responsible for the review and assessment of all risk relative to safety for new and future aerospace and related programs.

  7. Analysis and Derivation of Allocations for Fiber Contaminants in Liquid Bipropellant Systems

    NASA Technical Reports Server (NTRS)

    Lowrey, N. M.; Ibrahim, K. Y.

    2012-01-01

    An analysis was performed to identify the engineering rationale for the existing particulate limits in MSFC-SPEC-164, Cleanliness of Components for Use in Oxygen, Fuel, and Pneumatic Systems, determine the applicability of this rationale to fibers, identify potential risks that may result from fiber contamination in liquid oxygen/fuel bipropellant systems, and bound each of these risks. The objective of this analysis was to determine whether fiber contamination exceeding the established quantitative limits for particulate can be tolerated in these systems and, if so, to derive and recommend quantitative allocations for fibers beyond the limits established for other particulate. Knowledge gaps were identified that limit a complete understanding of the risk of promoted ignition from an accumulation of fibers in a gaseous oxygen system.

  8. Aspects of the BPRIM Language for Risk Driven Process Engineering

    NASA Astrophysics Data System (ADS)

    Sienou, Amadou; Lamine, Elyes; Pingaud, Hervé; Karduck, Achim

    Nowadays, organizations are exposed to frequent changes in the business environment, requiring continuous alignment of business processes with business strategies. This agility requires methods promoted in enterprise engineering approaches. Risk consideration in enterprise engineering is becoming important as the business environment grows more and more competitive and unpredictable. Business processes are subject to the same quality requirements as material and human resources. Thus, process management must tackle not only value creation challenges but also those related to value preservation. Our research considers risk-driven business process design as an integral part of enterprise engineering. A graphical modelling language for risk-driven business process engineering was introduced in earlier research. This paper extends the language and handles questions related to modelling risk in an organisational context.

  9. Photophoretic levitation of engineered aerosols for geoengineering

    PubMed Central

    Keith, David W.

    2010-01-01

    Aerosols could be injected into the upper atmosphere to engineer the climate by scattering incident sunlight so as to produce a cooling tendency that may mitigate the risks posed by the accumulation of greenhouse gases. Analysis of climate engineering has focused on sulfate aerosols. Here I examine the possibility that engineered nanoparticles could exploit photophoretic forces, enabling more control over particle distribution and lifetime than is possible with sulfates, perhaps allowing climate engineering to be accomplished with fewer side effects. The use of electrostatic or magnetic materials enables a class of photophoretic forces not found in nature. Photophoretic levitation could loft particles above the stratosphere, reducing their capacity to interfere with ozone chemistry; and, by increasing particle lifetimes, it would reduce the need for continual replenishment of the aerosol. Moreover, particles might be engineered to drift poleward enabling albedo modification to be tailored to counter polar warming while minimizing the impact on equatorial climates. PMID:20823254

  10. Photophoretic levitation of engineered aerosols for geoengineering.

    PubMed

    Keith, David W

    2010-09-21

    Aerosols could be injected into the upper atmosphere to engineer the climate by scattering incident sunlight so as to produce a cooling tendency that may mitigate the risks posed by the accumulation of greenhouse gases. Analysis of climate engineering has focused on sulfate aerosols. Here I examine the possibility that engineered nanoparticles could exploit photophoretic forces, enabling more control over particle distribution and lifetime than is possible with sulfates, perhaps allowing climate engineering to be accomplished with fewer side effects. The use of electrostatic or magnetic materials enables a class of photophoretic forces not found in nature. Photophoretic levitation could loft particles above the stratosphere, reducing their capacity to interfere with ozone chemistry; and, by increasing particle lifetimes, it would reduce the need for continual replenishment of the aerosol. Moreover, particles might be engineered to drift poleward enabling albedo modification to be tailored to counter polar warming while minimizing the impact on equatorial climates.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almajali, Anas; Rice, Eric; Viswanathan, Arun

    This paper presents a systems analysis approach to characterizing the risk of a Smart Grid to a load-drop attack. A characterization of the risk is necessary for the design of detection and remediation strategies to address the consequences of such attacks. Using concepts from systems health management and system engineering, this work (a) first identifies metrics that can be used to generate constraints for security features, and (b) lays out an end-to-end integrated methodology using separate network and power simulations to assess system risk. We demonstrate our approach by performing a systems-style analysis of a load-drop attack implemented over the AMI subsystem and targeted at destabilizing the underlying power grid.

  12. Needs and challenges for assessing the environmental impacts of engineered nanomaterials (ENMs)

    PubMed Central

    Romero-Franco, Michelle; Godwin, Hilary A; Bilal, Muhammad

    2017-01-01

    The potential environmental impact of nanomaterials is a critical concern and the ability to assess these potential impacts is top priority for the progress of sustainable nanotechnology. Risk assessment tools are needed to enable decision makers to rapidly assess the potential risks that may be imposed by engineered nanomaterials (ENMs), particularly when confronted by the reality of limited hazard or exposure data. In this review, we examine a range of available risk assessment frameworks considering the contexts in which different stakeholders may need to assess the potential environmental impacts of ENMs. Assessment frameworks and tools that are suitable for the different decision analysis scenarios are then identified. In addition, we identify the gaps that currently exist between the needs of decision makers, for a range of decision scenarios, and the abilities of present frameworks and tools to meet those needs. PMID:28546894

  13. Cost-Benefit Analysis Methodology: Install Commercially Compliant Engines on National Security Exempted Vessels?

    DTIC Science & Technology

    2015-11-05

    This paper does not seek to justify the EPA MHB approach, but explains its fundamentals and describes how the MHB concept (and the associated impact analyses) satisfactorily encompasses the fundamentals of environmental health risk and can be applied to all mobile and stationary equipment types subject to the regulations.

  14. Working conditions in the engine department - A qualitative study among engine room personnel on board Swedish merchant ships.

    PubMed

    Lundh, Monica; Lützhöft, Margareta; Rydstedt, Leif; Dahlman, Joakim

    2011-01-01

    The specific problems associated with work on board within the merchant fleet are well known and have over the years been a topic of discussion. The working conditions in the engine room (ER) are demanding due to, for example, the thermal climate, noise, and awkward working postures. The work in the engine control room (ECR) has over recent years undergone major changes, mainly due to the introduction of computers on board. In order to capture the impact of these changes and to investigate how the work situation has developed, a total of 20 engine officers and engine ratings were interviewed. The interviews were semi-structured, and Grounded Theory was used for the data analysis. The aim of the present study was to describe how the engine crew perceive their work situation and working environment on board. A further aim was to identify areas for improvement which the engine crew consider especially important for a safe and effective work environment. The results of the study show that the design of the ECR and ER is crucial for how different tasks are performed. Design that does not support operational procedures and the way tasks are performed risks inducing inappropriate behaviour, as crew members are compelled to find alternative ways to perform their tasks in order to get the job done. These types of behaviour can increase the risk of exposure to hazardous substances and of injury to engine crew members. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Software And Systems Engineering Risk Management

    DTIC Science & Technology

    2010-04-01

    Briefing charts by John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; and US TAG to the ISO TMB Risk Management Working Group, Systems and Software. The charts trace a timeline of risk management (RSKM) standards, including the COSO Enterprise RSKM Framework (2004), the ISO/IEC 16085 Risk Management Process (2006), ISO/IEC 12207 Software Lifecycle Processes (2008), and further ISO/IEC work in 2009.

  16. User engineering: A new look at system engineering

    NASA Technical Reports Server (NTRS)

    Mclaughlin, Larry L.

    1987-01-01

    User Engineering is a new System Engineering perspective responsible for defining and maintaining the user view of the system. Its elements are a process to guide the project and customer, a multidisciplinary team including hard and soft sciences, rapid prototyping tools to build user interfaces quickly and modify them frequently at low cost, and a prototyping center for involving users and designers in an iterative way. The main consideration is reducing the risk that the end user will not or cannot effectively use the system. The process begins with user analysis to produce cognitive and work style models, and task analysis to produce user work functions and scenarios. These become major drivers of the human computer interface design which is presented and reviewed as an interactive prototype by users. Feedback is rapid and productive, and user effectiveness can be measured and observed before the system is built and fielded. Requirements are derived via the prototype and baselined early to serve as an input to the architecture and software design.

  17. Chronic lymphatic leukaemia and engine exhausts, fresh wood, and DDT: a case-referent study.

    PubMed Central

    Flodin, U; Fredriksson, M; Persson, B; Axelson, O

    1988-01-01

    The effect of potential risk factors for chronic lymphatic leukaemia was evaluated in a case-referent study encompassing 111 cases and 431 randomised referents, all alive. Information on exposure was obtained by questionnaires posted to the subjects. Crude rate ratios were increased for occupational exposure to solvents, DDT, engine exhausts, and fresh wood (lumberjacks, paper pulp workers, and sawmill workers, for example), and also for farming. Further analysis of the material by means of the Miettinen confounder score technique reduced the number of rate ratios significantly exceeding unity to encompass only occupational exposure to engine exhaust, fresh wood, DDT, and contact with horses. PMID:2449239

  18. Bearing restoration by grinding

    NASA Technical Reports Server (NTRS)

    Hanau, H.; Parker, R. J.; Zaretsky, E. V.; Chen, S. M.; Bull, H. L.

    1976-01-01

    A joint program was undertaken by the NASA Lewis Research Center and the Army Aviation Systems Command to restore by grinding those rolling-element bearings which are currently being discarded at aircraft engine and transmission overhaul. Three bearing types were selected from the UH-1 helicopter engine (T-53) and transmission for the pilot program. No bearing failures occurred related to the restoration-by-grinding process. The risk and cost of a bearing restoration-by-grinding program were analyzed. A microeconomic impact analysis was performed.

  19. L-Band Digital Aeronautical Communications System Engineering - Initial Safety and Security Risk Assessment and Mitigation

    NASA Technical Reports Server (NTRS)

    Zelkin, Natalie; Henriksen, Stephen

    2011-01-01

    This document is being provided as part of ITT's NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract NNC05CA85C, Task 7: "New ATM Requirements--Future Communications, C-Band and L-Band Communications Standard Development." ITT has completed a safety hazard analysis providing a preliminary safety assessment for the proposed L-band (960 to 1164 MHz) terrestrial en route communications system. The assessment was performed following the guidelines outlined in the Federal Aviation Administration Safety Risk Management Guidance for System Acquisitions document. The safety analysis did not identify any hazards with an unacceptable risk, though a number of hazards with a medium risk were documented. This effort represents a preliminary safety hazard analysis and notes the triggers for risk reassessment. A detailed safety hazards analysis is recommended as a follow-on activity to assess particular components of the L-band communication system after the technology is chosen and system rollout timing is determined. The security risk analysis resulted in identifying main security threats to the proposed system as well as noting additional threats recommended for a future security analysis conducted at a later stage in the system development process. The document discusses various security controls, including those suggested in the COCR Version 2.0.

  20. RAVEN: a GUI and an Artificial Intelligence Engine in a Dynamic PRA Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Rabiti; D. Mandelli; A. Alfonsi

    Increases in computational power and pressure for more accurate simulations and estimations of accident scenario consequences are driving the need for Dynamic Probabilistic Risk Assessment (PRA) [1] of very complex models. While more sophisticated algorithms and computational power address the back end of this challenge, the front end is still handled by engineers who need to extract meaningful information from the large amount of data and build these complex models. Compounding this problem is the difficulty in knowledge transfer and retention, and the increasing speed of software development. The above-described issues would have negatively impacted deployment of the new high-fidelity plant simulator RELAP-7 (Reactor Excursion and Leak Analysis Program) at Idaho National Laboratory. Therefore RAVEN, which was initially focused on being the plant controller for RELAP-7, will help mitigate future RELAP-7 software engineering risks. In order to accomplish this task, the Reactor Analysis and Virtual Control Environment (RAVEN) has been designed to provide an easy-to-use Graphical User Interface (GUI) for building plant models and to leverage artificial intelligence algorithms in order to reduce computational time, improve results, and help the user to identify the behavioral patterns of Nuclear Power Plants (NPPs). In this paper we present the GUI implementation and its current capability status. We also introduce the support vector machine algorithms and show our evaluation of their potential for increasing the accuracy and reducing the computational costs of PRA analysis. In this evaluation we refer to preliminary studies performed under the Risk Informed Safety Margins Characterization (RISMC) project of the Light Water Reactor Sustainability (LWRS) campaign [3]. RISMC simulation needs and algorithm testing are currently used as guidance to prioritize RAVEN developments relevant to PRA.
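
    The surrogate idea sketched in this record can be illustrated in a few lines: train a classifier on a small number of expensive simulator runs, then estimate the failure probability by Monte Carlo on the cheap classifier alone. The sketch below is only a toy illustration of that pattern, not RAVEN or RELAP-7 code; the limit-state function, parameter ranges, and sample sizes are invented.

      # Minimal surrogate sketch: an SVM stands in for an expensive simulator.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      def plant_model_fails(x):
          # Hypothetical limit state: "failure" when a peak-temperature proxy
          # exceeds a limit. A real study would call the plant simulator here.
          power, flow = x[:, 0], x[:, 1]
          peak_temp = 600.0 + 80.0 * power - 50.0 * flow
          return (peak_temp > 640.0).astype(int)

      # Step 1: a small training set from the "expensive" model.
      x_train = rng.uniform([0.8, 0.8], [1.2, 1.2], size=(200, 2))
      y_train = plant_model_fails(x_train)

      # Step 2: fit the support vector machine surrogate.
      clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(x_train, y_train)

      # Step 3: Monte Carlo on the surrogate only (no simulator calls).
      x_mc = rng.uniform([0.8, 0.8], [1.2, 1.2], size=(100_000, 2))
      print("surrogate failure probability estimate:", clf.predict(x_mc).mean())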

  1. Aircraft Conceptual Design and Risk Analysis Using Physics-Based Noise Prediction

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.; Mavris, Dimitri N.

    2006-01-01

    An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid trade-off and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The methodology was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and take-off and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
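
    As a schematic of the response-surface-plus-Monte-Carlo workflow described above (the quadratic form, variable names, stand-in noise model, and uncertainty assumptions below are illustrative, not the study's actual RSEs), a minimal sketch could be:

      # Minimal sketch: fit a quadratic response surface equation (RSE) to a
      # small design of experiments, then propagate input uncertainty by
      # Monte Carlo through the cheap RSE instead of the physics-based code.
      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)

      def noise_model(fan_pressure_ratio, bypass_ratio):
          # Stand-in for a physics-based EPNL prediction (form and units invented).
          return (95.0 + 4.0 * fan_pressure_ratio - 1.5 * bypass_ratio
                  + 0.3 * fan_pressure_ratio * bypass_ratio)

      # Design of experiments: a small full-factorial grid.
      fpr, bpr = np.meshgrid(np.linspace(1.4, 1.8, 5), np.linspace(5.0, 9.0, 5))
      X = np.column_stack([fpr.ravel(), bpr.ravel()])
      y = noise_model(X[:, 0], X[:, 1])

      # Fit the quadratic RSE.
      rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

      # Monte Carlo: propagate assumed input uncertainty through the RSE.
      samples = np.column_stack([rng.normal(1.6, 0.05, 50_000),
                                 rng.normal(7.0, 0.50, 50_000)])
      epnl = rse.predict(samples)
      print(f"EPNL mean {epnl.mean():.2f}, 95th percentile {np.percentile(epnl, 95):.2f}")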

  2. Migrant workers in Italy: an analysis of injury risk taking into account occupational characteristics and job tenure.

    PubMed

    Giraudo, Massimiliano; Bena, Antonella; Costa, Giuseppe

    2017-04-22

    Migrants resident in Italy exceeded 5 million in 2015, representing 8.2% of the resident population. The study of the mechanisms that explain the differential health of migrant workers (as a whole and for specific nationalities) has been identified as a priority for research. The international literature has shown that migrant workers have a higher risk of total and fatal injury than natives, but some results are conflicting. The aim of this paper is to study the injury risk differentials between migrants, born in countries with strong migratory pressure (SMPC), and workers born in high income countries (HIC), taking into account individual and firm characteristics and job tenure. In addition to a comprehensive analysis of occupational safety among migrants, the study focuses on Moroccans, the largest community in Italy in the years of the analysis. Using the Work History Italian Panel-Salute integrated database, only contracts of employment in the private sector, starting in the period between 2000 and 2005 and held by men, were selected. The analysis focused on economic sectors with an important foreign component: engineering, construction, wholesale and retail trade, transportation and storage. Injury rates were calculated using a definition of serious occupational injuries based on the type of injury. Incidence rate ratios (IRR) were calculated using a Poisson distribution for panel data taking into account time-dependent variables. Injury rates among SMPC workers were higher than for HIC workers in engineering (15.61 ‰ py vs. 8.92 ‰ py), but there were no significant differences in construction (11.21 vs. 10.09), transportation and storage (7.82 vs. 7.23) and the wholesale and retail sectors (4.06 vs. 4.67). Injury rates for Moroccans were higher than for both HIC and total migrant workers in all economic sectors considered. The multivariate analysis revealed an interaction effect of job tenure among both SMPC and Moroccan workers in the construction sector, while in the wholesale and retail trade sector an interaction effect of job tenure was only observed among Moroccan workers. Migrant workers have higher occupational injury rates than Italians in the engineering and construction sectors, after two years of experience within the job. Generally the risk differentials vary depending on the nationality and economic sector considered. The analysis of injury risk among migrant workers should be restricted to serious injuries; furthermore, job tenure must be taken into account.
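
    For readers unfamiliar with the incidence-rate-ratio arithmetic behind figures such as 15.61 vs. 8.92 per 1,000 person-years, the sketch below fits a Poisson model with a person-years offset; the counts and person-years are invented, and the study's models additionally adjusted for individual, firm, and job-tenure covariates.

      # Minimal sketch: incidence rate ratio (IRR) from a Poisson model with a
      # person-years exposure term. Counts below are hypothetical.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      df = pd.DataFrame({
          "group":        ["HIC", "SMPC"],   # birth-country group
          "injuries":     [892, 1561],       # serious injuries (hypothetical)
          "person_years": [100_000, 100_000]
      })

      X = sm.add_constant((df["group"] == "SMPC").astype(float))
      model = sm.GLM(df["injuries"], X,
                     family=sm.families.Poisson(),
                     exposure=df["person_years"]).fit()

      irr = np.exp(model.params.iloc[1])            # IRR for SMPC vs. HIC
      ci_low, ci_high = np.exp(model.conf_int().iloc[1])
      print(f"IRR = {irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")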

  3. PRA and Risk Informed Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernsen, Sidney A.; Simonen, Fredric A.; Balkey, Kenneth R.

    2006-01-01

    The Boiler and Pressure Vessel Code (BPVC) of the American Society of Mechanical Engineers (ASME) has introduced a risk-based approach into Section XI, which covers Rules for Inservice Inspection of Nuclear Power Plant Components. The risk-based approach requires application of probabilistic risk assessments (PRA). Because no industry consensus standard existed for PRAs, ASME has developed a standard to evaluate the quality level of an available PRA needed to support a given risk-based application. The paper describes the PRA standard, Section XI application of PRAs, and plans for broader applications of PRAs to other ASME nuclear codes and standards. The paper addresses several specific topics of interest to Section XI. Important considerations are special methods (surrogate components) used to overcome the lack of treatment of passive components in PRAs. The approach allows calculation of conditional core damage probabilities both for component failures that cause initiating events and for failures in standby systems that decrease the availability of these systems. The paper relates the explicit risk-based methods of the new Section XI code cases to the implicit consideration of risk used in the development of Section XI. Other topics include the needed interactions of ISI engineers, plant operating staff, PRA specialists, and members of expert panels that review the risk-based programs.

  4. Clinical engineering and risk management in healthcare technological process using architecture framework.

    PubMed

    Signori, Marcos R; Garcia, Renato

    2010-01-01

    This paper presents a model that aids Clinical Engineering in dealing with Risk Management in the Healthcare Technological Process. The healthcare technological setting is complex and supported by three basic entities: infrastructure (IS), healthcare technology (HT), and human resources (HR). An enterprise architecture framework - MODAF (Ministry of Defence Architecture Framework) - was used to model this process for risk management. Thus, a new model was created to contribute to risk management in the HT process from the Clinical Engineering viewpoint. This architecture model can support and improve the decision-making process of Clinical Engineering for Risk Management in the Healthcare Technological process.

  5. NASA Risk Management Handbook. Version 1.0

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Benjamin, Allan; Everett, Christopher; Maggio, Gaspare; Stamatelatos, Michael; Youngblood, Robert; Guarro, Sergio; Rutledge, Peter; Sherrard, James; Smith, Curtis

    2011-01-01

    The purpose of this handbook is to provide guidance for implementing the Risk Management (RM) requirements of NASA Procedural Requirements (NPR) document NPR 8000.4A, Agency Risk Management Procedural Requirements [1], with a specific focus on programs and projects, and applying to each level of the NASA organizational hierarchy as requirements flow down. This handbook supports RM application within the NASA systems engineering process, and is a complement to the guidance contained in NASA/SP-2007-6105, NASA Systems Engineering Handbook [2]. Specifically, this handbook provides guidance that is applicable to the common technical processes of Technical Risk Management and Decision Analysis established by NPR 7123.1A, NASA Systems Engineering Process and Requirements [3]. These processes are part of the "Systems Engineering Engine" (Figure 1) that is used to drive the development of the system and associated work products to satisfy stakeholder expectations in all mission execution domains, including safety, technical, cost, and schedule. Like NPR 7123.1A, NPR 8000.4A is a discipline-oriented NPR that intersects with product-oriented NPRs such as NPR 7120.5D, NASA Space Flight Program and Project Management Requirements [4]; NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Management Requirements [5]; and NPR 7120.8, NASA Research and Technology Program and Project Management Requirements [6]. In much the same way that the NASA Systems Engineering Handbook is intended to provide guidance on the implementation of NPR 7123.1A, this handbook is intended to provide guidance on the implementation of NPR 8000.4A. This handbook provides guidance for conducting RM in the context of NASA program and project life cycles, which produce derived requirements in accordance with existing systems engineering practices that flow down through the NASA organizational hierarchy. The guidance in this handbook is not meant to be prescriptive. Instead, it is meant to be general enough, and contain a sufficient diversity of examples, to enable the reader to adapt the methods as needed to the particular risk management issues that he or she faces. The handbook highlights major issues to consider when managing programs and projects in the presence of potentially significant uncertainty, so that the user is better able to recognize and avoid pitfalls that might otherwise be experienced.

  6. Modeling of Highly Instrumented Honeywell Turbofan Engine Tested with Ice Crystal Ingestion in the NASA Propulsion System Laboratory

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Jorgenson, Philip C. E.; Jones, Scott M.

    2016-01-01

    The Propulsion Systems Laboratory (PSL), an altitude test facility at NASA Glenn Research Center, has been used to test a highly instrumented turbine engine at simulated altitude operating conditions. This is a continuation of the PSL testing that successfully duplicated the icing events that were experienced in a previous engine (serial LF01) during flight through ice crystal clouds, which was the first turbofan engine tested in PSL. This second model of the ALF502R-5A serial number LF11 is a highly instrumented version of the previous engine. The PSL facility provides a continuous cloud of ice crystals with controlled characteristics of size and concentration, which are ingested by the engine during operation at simulated altitudes. Several of the previous operating points tested in the LF01 engine were duplicated to confirm repeatability in LF11. The instrumentation included video cameras to visually illustrate the accretion of ice in the low pressure compressor (LPC) exit guide vane region in order to confirm the ice accretion, which was suspected during the testing of the LF01. Traditional instrumentation included static pressure taps in the low pressure compressor inner and outer flow path walls, as well as total pressure and temperature rakes in the low pressure compressor region. The test data was utilized to determine the losses and blockages due to accretion in the exit guide vane region of the LPC. Multiple data points were analyzed with the Honeywell Customer Deck. A full engine roll back point was modeled with the Numerical Propulsion System Simulation (NPSS) code. The mean line compressor flow analysis code with ice crystal modeling was utilized to estimate the parameters that indicate the risk of accretion, as well as to estimate the degree of blockage and losses caused by accretion during a full engine roll back point. The analysis provided additional validation of the icing risk parameters within the LPC, as well as the creation of models for estimating the rates of blockage growth and losses.

  7. Calculation of the Actual Cost of Engine Maintenance

    DTIC Science & Technology

    2003-03-01

    Cost Estimating Integrated Tools (ACEIT) helps analysts store, retrieve, and analyze data; build cost models; analyze risk; and time-phase budgets (http://www.aceit.com/, 21 February 2003). Other excerpted references include USAMC Logistics Support Activity (LOGSA), "Cost Analysis Strategy Assessment."

  8. AUTOMOUSE: AN IMPROVEMENT TO THE MOUSE COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM OPERATIONAL MANUAL.

    EPA Science Inventory

    Under a mandate of national environmental laws, the agency strives to formulate and implement actions leading to a compatible balance between human activities and the ability of natural systems to support and nurture life. The Risk Reduction Engineering Laboratory is responsible ...

  9. Contamination Sources Effects Analysis (CSEA) - A Tool to Balance Cost/Schedule While Managing Facility Availability

    NASA Technical Reports Server (NTRS)

    Wilcox, Margaret

    2008-01-01

    A CSEA is similar to a Failure Modes Effects Analysis (FMEA). A CSEA tracks risk, deterrence, and occurrence of sources of contamination and their mitigation plans. Documentation is provided spanning mechanical and electrical assembly, precision cleaning, thermal vacuum bake-out, and thermal vacuum testing. These facilities all may play a role in contamination budgeting and reduction, ultimately affecting test and flight. With a CSEA, visibility can be given to availability of these facilities, test sequencing, and trade-offs. A cross-functional team including specialty engineering, contamination control, electrostatic dissipation, manufacturing, testing, and material engineering participates in an exercise that identifies contaminants and minimizes the complexity of scheduling these facilities considering their volatile schedules. Care can be taken in an efficient manner to ensure correct cleaning processes are employed. The result is a reduction in cycle time ("schedule hits"), reduced rework cost, reduced risk, and improved communication and quality, while achieving adherence to the Contamination Control Plan.

  10. Providing a Theoretical Basis for Nanotoxicity Risk Analysis Departing from Traditional Physiologically-Based Pharmacokinetic (PBPK) Modeling

    DTIC Science & Technology

    2010-09-01

    estimation of total exposure at any toxicological endpoint in the body. This effort is a significant contribution as it highlights future research needs...rigorous modeling of the nanoparticle transport by including physico-chemical properties of engineered particles. Similarly, toxicological dose-response...exposure risks as compared to larger sized particles of the same material. Although the toxicology of a base material may be thoroughly defined, the

  11. Towards a comprehensive and realistic risk evaluation of engineered nanomaterials in the urban water system

    NASA Astrophysics Data System (ADS)

    Duester, Lars; Burkhardt, Michael; Gutleb, Arno; Kaegi, Ralf; Macken, Ailbhe; Meermann, Björn; von der Kammer, Frank

    2014-06-01

    The European COoperation in Science and Technology (COST) Action ES1205 on the transfer of Engineered Nano materials from wastewater Treatment and stormwatEr to Rivers (ENTER) aims to create and to maintain a trans-European network among scientists. This perspective article delivers a brief overview of the status quo at the beginning of the project by addressing the following aspects of engineered nano materials (ENMs) in urban systems: i) ENMs that need to be considered on a European level; ii) uncertainties in production-volume estimations; iii) fate of selected ENMs during wastewater transport and treatment; iv) analytical strategies for ENM analysis; v) ecotoxicity of ENMs; and vi) future needs. These six stepping stones define the position of the ES1205 network at the beginning of the project's runtime, by identifying six fundamental aspects that should be considered in future discussions on the risk evaluation of ENMs in urban water systems.

  12. Toward a comprehensive and realistic risk evaluation of engineered nanomaterials in the urban water system

    PubMed Central

    Duester, Lars; Burkhardt, Michael; Gutleb, Arno C.; Kaegi, Ralf; Macken, Ailbhe; Meermann, Björn; von der Kammer, Frank

    2014-01-01

    The European COoperation in Science and Technology (COST) Action ES1205 on the transfer of Engineered Nano materials from wastewater Treatment and stormwatEr to Rivers (ENTER) aims to create and to maintain a trans-European network among scientists. This perspective article delivers a brief overview of the status quo at the beginning of the project by addressing the following aspects of engineered nano materials (ENMs) in urban systems: (1) ENMs that need to be considered on a European level; (2) uncertainties in production-volume estimations; (3) fate of selected ENMs during wastewater transport and treatment; (4) analytical strategies for ENM analysis; (5) ecotoxicity of ENMs; and (6) future needs. These six stepping stones define the position of the ES1205 network at the beginning of the project's runtime, by identifying six fundamental aspects that should be considered in future discussions on the risk evaluation of ENMs in urban water systems. PMID:25003102

  13. Bow-tie diagrams for risk management in anaesthesia.

    PubMed

    Culwick, M D; Merry, A F; Clarke, D M; Taraporewalla, K J; Gibbs, N M

    2016-11-01

    Bow-tie analysis is a risk analysis and management tool that has been readily adopted into routine practice in many high reliability industries such as engineering, aviation and emergency services. However, it has received little exposure so far in healthcare. Nevertheless, its simplicity, versatility, and pictorial display may have benefits for the analysis of a range of healthcare risks, including complex and multiple risks and their interactions. Bow-tie diagrams are a combination of a fault tree and an event tree, which when combined take the shape of a bow tie. Central to bow-tie methodology is the concept of an undesired or 'Top Event', which occurs if a hazard progresses past all prevention controls. Top Events may also occasionally occur idiosyncratically. Irrespective of the cause of a Top Event, mitigation and recovery controls may influence the outcome. Hence the relationship of hazard to outcome can be viewed in one diagram along with possible causal sequences or accident trajectories. Potential uses for bow-tie diagrams in anaesthesia risk management include improved understanding of anaesthesia hazards and risks, pre-emptive identification of absent or inadequate hazard controls, investigation of clinical incidents, teaching anaesthesia risk management, and demonstrating risk management strategies to third parties when required.
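
    The quantitative side of a bow-tie can be sketched in a few lines: threats on the left reach the Top Event only if every prevention control on their path fails, and mitigation controls on the right split the Top Event into outcomes. The threats, barriers, and probabilities below are purely hypothetical placeholders for illustration.

      # Toy bow-tie calculation: fault-tree side gives P(Top Event), event-tree
      # side splits it across outcomes. All names and numbers are hypothetical.
      threats = {
          "wrong drug drawn up": {"p": 0.02, "prevention_failure": [0.10, 0.20]},
          "label misread":       {"p": 0.05, "prevention_failure": [0.15]},
      }

      def path_probability(threat):
          # A threat causes the Top Event only if all its prevention controls fail.
          p = threat["p"]
          for barrier_failure in threat["prevention_failure"]:
              p *= barrier_failure
          return p

      # Independent threats: P(Top Event) = 1 - prod(1 - path probability).
      p_top = 1.0
      for t in threats.values():
          p_top *= 1.0 - path_probability(t)
      p_top = 1.0 - p_top

      # Right side: one mitigation control determines the outcome split.
      p_mitigation_fails = 0.3
      outcomes = {
          "near miss / recovered": p_top * (1.0 - p_mitigation_fails),
          "patient harm":          p_top * p_mitigation_fails,
      }

      print(f"P(Top Event) = {p_top:.5f}")
      for name, p in outcomes.items():
          print(f"  {name}: {p:.5f}")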

  14. Model-based engineering for laser weapons systems

    NASA Astrophysics Data System (ADS)

    Panthaki, Malcolm; Coy, Steve

    2011-10-01

    The Comet Performance Engineering Workspace is an environment that enables integrated, multidisciplinary modeling and design/simulation process automation. One of the many multi-disciplinary applications of the Comet Workspace is for the integrated Structural, Thermal, Optical Performance (STOP) analysis of complex, multi-disciplinary space systems containing Electro-Optical (EO) sensors such as those which are designed and developed by and for NASA and the Department of Defense. The Comet™ software is currently able to integrate performance simulation data and processes from a wide range of 3-D CAD and analysis software programs, including CODE V™ from Optical Research Associates and SigFit™ from Sigmadyne Inc., which are used to simulate the optics performance of EO sensor systems in space-borne applications. Over the past year, Comet Solutions has been working with MZA Associates of Albuquerque, NM, under a contract with the Air Force Research Laboratories. This funded effort is a "risk reduction effort", to help determine whether the combination of Comet and WaveTrain™, a wave optics systems engineering analysis environment developed and maintained by MZA Associates and used by the Air Force Research Laboratory, will result in an effective Model-Based Engineering (MBE) environment for the analysis and design of laser weapons systems. This paper will review the results of this effort and future steps.

  15. Comparison of ergonomic risk assessment outputs from rapid entire body assessment and quick exposure check in an engine oil company.

    PubMed

    Motamedzade, Majid; Ashuri, Mohammad Reza; Golmohammadi, Rostam; Mahjub, Hossein

    2011-06-13

    Over recent decades, numerous observational methods have been developed to assess the risk factors of work-related musculoskeletal disorders (WMSDs). Rapid Entire Body Assessment (REBA) and Quick Exposure Check (QEC) are two general methods in this field. This study aimed to compare ergonomic risk assessment outputs from QEC and REBA in terms of agreement in the distribution of postural loading scores based on analysis of working postures. This cross-sectional study was conducted in an engine oil company in which 40 jobs were studied. All jobs were observed by a trained occupational health practitioner. Job information was collected to ensure the completion of the ergonomic risk assessment tools, including QEC and REBA. The results revealed a significant correlation between the final scores (r=0.731) and the action levels (r=0.893) of the two applied methods. Comparison between the action levels and final scores of the two methods showed no significant difference among working departments. Most of the studied postures fell into the low and moderate risk levels in the QEC assessment (low risk=20%, moderate risk=50%, high risk=30%) and in the REBA assessment (low risk=15%, moderate risk=60%, high risk=25%). There is a significant correlation between the two methods: they agree strongly in identifying risky jobs and in determining the potential risk for incidence of WMSDs. Therefore, researchers may apply both methods interchangeably for postural risk assessment in appropriate working environments.

  16. Risk Identification and Visualization in a Concurrent Engineering Team Environment

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Chattopadhyay, Debarati; Shishko, Robert

    2010-01-01

    Incorporating risk assessment into the dynamic environment of a concurrent engineering team requires rapid response and adaptation. Generating consistent risk lists with inputs from all the relevant subsystems and presenting the results clearly to the stakeholders in a concurrent engineering environment is difficult because of the speed with which decisions are made. In this paper we describe the various approaches and techniques that have been explored for the point designs of JPL's Team X and the Trade Space Studies of the Rapid Mission Architecture Team. The paper will also focus on the issues of the misuse of categorical and ordinal data that keep arising within current engineering risk approaches and also in the applied risk literature.

  17. Heat Transfer and Thermal Stability Research for Advanced Hydrocarbon Fuel Technologies

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Stiegemeier, Benjamin

    2005-01-01

    In recent years there has been increased interest in the development of a new generation of high performance boost rocket engines. These efforts, which will represent a substantial advancement in boost engine technology over that developed for the Space Shuttle Main Engines in the early 1970s, are being pursued both at NASA and the United States Air Force. NASA, under its Space Launch Initiative's Next Generation Launch Technology Program, is investigating the feasibility of developing a highly reliable, long-life, liquid oxygen/kerosene (RP-1) rocket engine for launch vehicles. One of the top technical risks to any engine program employing hydrocarbon fuels is the potential for fuel thermal stability and material compatibility problems to occur under the high-pressure, high-temperature conditions required for regenerative fuel cooling of the engine combustion chamber and nozzle. Decreased heat transfer due to carbon deposits forming on wetted fuel components, corrosion of materials common in engine construction (copper-based alloys), and corrosion-induced pressure drop increases have all been observed in laboratory tests simulating rocket engine cooling channels. To mitigate these risks, knowledge of how these fuels behave in high-temperature environments must be obtained. Currently, due to the complexity of the physical and chemical processes occurring, the only way to accomplish this is empirically. Heated tube testing is a well-established method of experimentally determining the thermal stability and heat transfer characteristics of hydrocarbon fuels. The popularity of this method stems from the low cost incurred in testing when compared to hot fire engine tests, the ability to have greater control over experimental conditions, and the accessibility of the test section, facilitating easy instrumentation. These benefits make heated tube testing the best alternative to hot fire engine testing for thermal stability and heat transfer research. This investigation used the Heated Tube Facility at the NASA Glenn Research Center to perform a thermal stability and heat transfer characterization of RP-1 in an environment simulating that of a high chamber pressure, regeneratively cooled rocket engine. The first step in the research was to investigate the carbon deposition process of previous heated tube experiments by performing scanning electron microscopic analysis in conjunction with energy dispersive spectroscopy on the tube sections. This analysis gave insight into the carbon deposition process and the effect that test conditions played in the formation of deleterious coke. Furthermore, several different formations were observed and noted. One other crucial finding of this investigation was that in sulfur-containing hydrocarbon fuels, the interaction of the sulfur components with copper-based wall materials presented a significant corrosion problem. This problem in many cases was more life-limiting than those posed by the carbon deposition process. The results of this microscopic analysis were detailed and presented at the December 2003 JANNAF Air-Breathing Propulsion Meeting as "Materials Compatibility and Thermal Stability Analysis of Common Hydrocarbon Fuels" (reference 1).

  18. Does Exposure to Agricultural Chemicals Increase the Risk of Prostate Cancer among Farmers?

    PubMed Central

    Parent, Marie-Élise; Désy, Marie; Siemiatycki, Jack

    2009-01-01

    Several studies suggest that farmers may be at increased risk of prostate cancer. The present analysis, based on a large population-based case-control study conducted among men in the Montreal area in the early 1980s, aims at identifying occupational chemicals which may be responsible for such increases. The original study enrolled 449 prostate cancer cases, nearly 4,000 patients with other cancers, as well as 533 population controls. Subjects were interviewed about their occupational histories, and a team of industrial hygienists assigned their past exposures using a checklist of some 300 chemicals. The present analysis was restricted to a study base of men who had worked as farmers earlier in their lives. There were a total of 49 men with prostate cancers, 127 with other cancers and 56 population controls. We created a pool of 183 controls combining the patients with cancers at sites other than the prostate and the population controls. We then estimated the odds ratio for prostate cancer associated with exposure to each of 10 agricultural chemicals, i.e., pesticides, arsenic compounds, acetic acid, gasoline engine emissions, diesel engine emissions, polycyclic aromatic hydrocarbons from petroleum, lubricating oils and greases, alkanes with ≥18 carbons, solvents, and mononuclear aromatic hydrocarbons. Based on a model adjusting for age, ethnicity, education, and respondent status, there was evidence of a two-fold excess risk of prostate cancer among farmers with substantial exposure to pesticides [odds ratio (OR)=2.3, 95% confidence interval (CI) 1.1–5.1], as compared to unexposed farmers. There was some suggestion, based on few subjects, of increased risks among farmers ever exposed to diesel engine emissions (OR=5.7, 95% CI 1.2–26.5). The results for pesticides are particularly noteworthy in the light of findings from previous studies. Suggestions of trends for elevated risks were noted with other agricultural chemicals, but these are largely novel and need further confirmation in larger samples. PMID:19753293
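
    For readers less familiar with case-control arithmetic, the sketch below shows how a crude odds ratio and a Woolf-type 95% confidence interval are computed from a 2x2 exposure table; the cell counts are invented, and the study's reported estimates were additionally adjusted for age, ethnicity, education, and respondent status.

      # Crude odds ratio with a Woolf (log-based) 95% confidence interval from
      # a 2x2 table. Counts are hypothetical, not the study's data.
      import math

      exposed_cases, unexposed_cases = 18, 31          # a, c (hypothetical)
      exposed_controls, unexposed_controls = 40, 143   # b, d (hypothetical)

      odds_ratio = (exposed_cases * unexposed_controls) / \
                   (exposed_controls * unexposed_cases)

      se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                            + 1 / unexposed_cases + 1 / unexposed_controls)
      ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
      ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

      print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")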

  19. Systems Security Engineering Capability Maturity Model SSE-CMM Model Description Document

    DTIC Science & Technology

    1999-04-01

    Risk management is the process of assessing and quantifying risk and establishing an acceptable level of risk for the organization [IEEE 13335-1:1996]. Managing risk is an essential part of security engineering.

  20. Time Factor in the Theory of Anthropogenic Risk Prediction in Complex Dynamic Systems

    NASA Astrophysics Data System (ADS)

    Ostreikovsky, V. A.; Shevchenko, Ye N.; Yurkov, N. K.; Kochegarov, I. I.; Grishko, A. K.

    2018-01-01

    The article reviews anthropogenic risk models that take into consideration how the different factors influencing a complex system develop over time. Three classes of mathematical models have been analyzed for use in assessing the anthropogenic risk of complex dynamic systems. These models take the time factor into consideration in determining the prospective change in the safety of critical systems. The originality of the study is in the analysis of five time postulates in the theory of anthropogenic risk and the safety of highly important objects. It has to be stressed that the given postulates are still rarely used in practical assessment of the equipment service life of critically important systems. That is why the results of the study presented in the article can be used in safety engineering and in the analysis of critically important complex technical systems.

  1. Microplastic Exposure Assessment in Aquatic Environments: Learning from Similarities and Differences to Engineered Nanoparticles.

    PubMed

    Hüffer, Thorsten; Praetorius, Antonia; Wagner, Stephan; von der Kammer, Frank; Hofmann, Thilo

    2017-03-07

    Microplastics (MPs) have been identified as contaminants of emerging concern in aquatic environments, and research into their behavior and fate has been increasing sharply in recent years. Nevertheless, significant gaps remain in our understanding of several crucial aspects of MP exposure and risk assessment, including the quantification of emissions, dominant fate processes, types of analytical tools required for characterization and monitoring, and adequate laboratory protocols for analysis and hazard testing. This Feature aims at identifying transferrable knowledge and experience from engineered nanoparticle (ENP) exposure assessment. This is achieved by comparing ENPs and MPs based on their similarities as particulate contaminants, while critically discussing specific differences. We also highlight the most pressing research priorities to support an efficient development of tools and methods for MP environmental risk assessment.

  2. Effects of an Advanced Reactor’s Design, Use of Automation, and Mission on Human Operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Johanna H. Oxstrand

    The roles, functions, and tasks of the human operator in existing light water nuclear power plants (NPPs) are based on sound nuclear and human factors engineering (HFE) principles, are well defined by the plant’s conduct of operations, and have been validated by years of operating experience. However, advanced NPPs whose engineering designs differ from existing light-water reactors (LWRs) will impose changes on the roles, functions, and tasks of the human operators. The plans to increase the use of automation, reduce staffing levels, and add to the mission of these advanced NPPs will also affect the operator’s roles, functions, and tasks. We assert that these factors, which do not appear to have received a lot of attention by the design engineers of advanced NPPs relative to the attention given to conceptual design of these reactors, can have significant risk implications for the operators and overall plant safety if not mitigated appropriately. This paper presents a high-level analysis of a specific advanced NPP and how its engineered design, its plan to use greater levels of automation, and its expanded mission have risk-significant implications on operator performance and overall plant safety.

  3. The Shuttle processing contractors (SPC) reliability program at the Kennedy Space Center - The real world

    NASA Astrophysics Data System (ADS)

    McCrea, Terry

    The Shuttle Processing Contract (SPC) workforce consists of Lockheed Space Operations Co. as prime contractor, with Grumman, Thiokol Corporation, and Johnson Controls World Services as subcontractors. During the design phase, reliability engineering is instrumental in influencing the development of systems that meet the Shuttle fail-safe program requirements. Reliability engineers accomplish this objective by performing FMEA (failure modes and effects analysis) to identify potential single failure points. When technology, time, or resources do not permit a redesign to eliminate a single failure point, the single failure point information is formatted into a change request and presented to senior management of SPC and NASA for risk acceptance. In parallel with the FMEA, safety engineering conducts a hazard analysis to assure that potential hazards to personnel are assessed. The combined effort (FMEA and hazard analysis) is published as a system assurance analysis. Special ground rules and techniques are developed to perform and present the analysis. The reliability program at KSC is vigorously pursued, and has been extremely successful. The ground support equipment and facilities used to launch and land the Space Shuttle maintain an excellent reliability record.

  4. Risky Business

    NASA Technical Reports Server (NTRS)

    Yarbrough, Katherine

    2015-01-01

    During my internship I worked on two major projects, recommending improvements for the Center's Risk Management Workshop and helping with the strategic planning efforts for Safety and Mission Assurance (S&MA). The risk management improvements were the key project I worked on this semester through my internship, while the strategic planning was the secondary assignment. The S&MA Business Office covers both aspects in its delegation, so both assignments span some of the work done in the office. A risk is a future event with a negative consequence that has some probability of occurring. Safety and Mission Assurance identifies, analyzes, plans, and tracks risk. The directorate offers the Center a Risk Management Workshop, and part of the ongoing efforts of S&MA is to make continuous improvements to the RM Workshop. By using the Project Management Institute's (PMI) Standard for Risk Management, I performed a gap analysis to make improvements for our materials. I benchmarked the PMI's Risk Management Standard, compared our Risk Management Workshop materials to PMI's standard, and identified any gaps in our material. My major findings were presented to the Business Office of S&MA for a decision on whether or not to incorporate the improvements. These suggestions were made by attending JSC working group meetings, Health, Safety and Environment (HSE) panel reviews and various risk review meetings. The improvements provide better understanding of risk management processes and enhanced risk tracking knowledge and skills. Risk management is an integral part of any engineering discipline, and getting exposed to this area of engineering will greatly help shape my career in the future. Johnson Space Center is a world leader in risk management processes; learning risk management here gives me a huge advantage over my peers, and understanding decision making in the context of risk management will help me to be a well-rounded engineer. Strategic planning is an area I had not previously studied. Helping with the strategic planning efforts in S&MA has taught me how organizations think and function as a whole. S&MA is adopting a balanced scorecard approach to strategic planning. As part of this planning method, strategic themes, objectives, and initiatives are formed. I attended strategic theme team workshops that formed the strategy map for the directorate and gave shape to the plan. Also during these workshops the objectives were discussed and built. Learning the process for strategic planning has helped me better understand how organizations and businesses function, which also helps me to be a more effective employee. Other assignments I had during my internship included completing the Safety and Mission Assurance Technical Excellence Program (STEP) Level 1, as well as doing a two-week rotation through the Space Exploration division in S&MA, specifically working with a thermal protection systems (TPS) engineer. While working there, I learned about the Orion capsule and the SpaceX Dragon cargo capsule. I attended meetings to prepare the engineers for the upcoming Critical Design Reviews for both capsules and reviewed test data. Learning risk management, strategic planning, and working in the Space Exploration division has taught me about many aspects of S&MA. My internship at NASA has given me new experiences and taught me numerous subjects that I would have otherwise not learned. This opportunity has expanded my educational horizons and is helping me to become a more useful engineer and employee.

  5. Safer Liquid Natural Gas

    NASA Technical Reports Server (NTRS)

    1976-01-01

    After the 1973 Staten Island disaster, in which 40 people were killed while repairing a liquid natural gas storage tank, the New York Fire Commissioner requested NASA's help in drawing up a comprehensive plan to cover the design, construction, and operation of liquid natural gas facilities. Two programs are underway. The first transfers comprehensive risk management techniques and procedures in the form of an instruction document that includes determining liquid-gas risks through engineering analysis and tests, controlling these risks by setting up redundant fail-safe techniques, and establishing criteria calling for decisions that eliminate or accept certain risks. The second program prepares a liquid gas safety manual (the first of its kind).

  6. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  7. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
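
    A minimal version of the load-resistance Monte Carlo step described above might look like the sketch below; the lognormal load, normal resistance, and their parameters are assumptions for illustration, and a real application would use fitted distributions and more careful convergence checks.

      # Minimal load-resistance Monte Carlo: failure occurs when load exceeds
      # resistance. Distributions and parameters are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 1_000_000

      load = rng.lognormal(mean=np.log(300.0), sigma=0.25, size=n)   # e.g. MPa
      resistance = rng.normal(loc=450.0, scale=40.0, size=n)         # e.g. MPa

      p_f = (load > resistance).mean()
      # Standard error of the estimate, to judge whether n is large enough.
      se = np.sqrt(p_f * (1.0 - p_f) / n)

      print(f"estimated failure probability: {p_f:.2e} +/- {1.96 * se:.1e} (95%)")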

  8. Resilient Propulsion Control Research for the NASA Integrated Resilient Aircraft Control (IRAC) Project

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Litt, Jonathan S.

    2007-01-01

    Gas turbine engines are designed to provide sufficient safety margins to guarantee robust operation with an exceptionally long life. However, engine performance requirements may be drastically altered during abnormal flight conditions or emergency maneuvers. In some situations, the conservative design of the engine control system may not be in the best interest of overall aircraft safety; it may be advantageous to "sacrifice" the engine to "save" the aircraft. Motivated by this opportunity, the NASA Aviation Safety Program is conducting resilient propulsion research aimed at developing adaptive engine control methodologies to operate the engine beyond the normal domain for emergency operations to maximize the possibility of safely landing the damaged aircraft. Previous research studies and field incident reports show that the propulsion system can be an effective tool to help control and eventually land a damaged aircraft. Building upon the flight-proven Propulsion Controlled Aircraft (PCA) experience, this area of research will focus on how engine control systems can improve aircraft safe-landing probabilities under adverse conditions. This paper describes the proposed research topics in Engine System Requirements, Engine Modeling and Simulation, Engine Enhancement Research, Operational Risk Analysis and Modeling, and Integrated Flight and Propulsion Controller Designs that support the overall goal.

  9. Diesel engine exhaust and lung cancer risks - evaluation of the meta-analysis by Vermeulen et al. 2014.

    PubMed

    Morfeld, Peter; Spallek, Michael

    2015-01-01

    Vermeulen et al. 2014 published a meta-regression analysis of three relevant epidemiological US studies (Steenland et al. 1998, Garshick et al. 2012, Silverman et al. 2012) that estimated the association between occupational diesel engine exhaust (DEE) exposure and lung cancer mortality. The DEE exposure was measured as cumulative exposure to estimated respirable elemental carbon in μg/m(3)-years. Vermeulen et al. 2014 found a statistically significant dose-response association and described elevated lung cancer risks even at very low exposures. We performed an extended re-analysis using different modelling approaches (fixed and random effects regression analyses, Greenland/Longnecker method) and explored the impact of varying input data (modified coefficients of Garshick et al. 2012, results from Crump et al. 2015 replacing Silverman et al. 2012, modified analysis of Moehner et al. 2013). We reproduced the individual and main meta-analytical results of Vermeulen et al. 2014. However, our analysis demonstrated a heterogeneity of the baseline relative risk levels between the three studies. This heterogeneity was reduced after the coefficients of Garshick et al. 2012 were modified while the dose coefficient dropped by an order of magnitude for this study and was far from being significant (P = 0.6). A (non-significant) threshold estimate for the cumulative DEE exposure was found at 150 μg/m(3)-years when extending the meta-analyses of the three studies by hockey-stick regression modelling (including the modified coefficients for Garshick et al. 2012). The data used by Vermeulen and colleagues led to the highest relative risk estimate across all sensitivity analyses performed. The lowest relative risk estimate was found after exclusion of the explorative study by Steenland et al. 1998 in a meta-regression analysis of Garshick et al. 2012 (modified), Silverman et al. 2012 (modified according to Crump et al. 2015) and Möhner et al. 2013. The meta-coefficient was estimated to be about 10-20 % of the main effect estimate in Vermeulen et al. 2014 in this analysis. The findings of Vermeulen et al. 2014 should not be used without reservations in any risk assessments. This is particularly true for the low end of the exposure scale.
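
    To illustrate what hockey-stick (threshold) regression of ln(relative risk) on cumulative exposure looks like in practice - with synthetic points rather than the actual study estimates, and without the inverse-variance weighting a real meta-regression would use - a minimal sketch is:

      # Hockey-stick (threshold) regression sketch: ln(RR) is flat up to a
      # threshold t and rises linearly above it. All data points are synthetic.
      import numpy as np

      exposure = np.array([5.0, 20.0, 60.0, 150.0, 300.0, 600.0, 1000.0])  # ug/m3-years
      ln_rr = np.array([0.02, -0.01, 0.05, 0.06, 0.22, 0.45, 0.70])

      def fit_slope(threshold):
          """Least-squares slope for ln(RR) = b * max(0, exposure - threshold)."""
          x = np.maximum(0.0, exposure - threshold)
          if not np.any(x > 0):
              return 0.0, np.sum(ln_rr ** 2)
          b = np.sum(x * ln_rr) / np.sum(x ** 2)   # no-intercept least squares
          rss = np.sum((ln_rr - b * x) ** 2)
          return b, rss

      # Profile the threshold over a grid and keep the best fit.
      thresholds = np.linspace(0.0, 800.0, 1601)
      fits = [fit_slope(t) for t in thresholds]
      best = int(np.argmin([rss for _, rss in fits]))

      print(f"best-fitting threshold: {thresholds[best]:.0f} ug/m3-years, "
            f"slope: {fits[best][0]:.2e} per ug/m3-years")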

  10. Why risk is not variance: an expository note.

    PubMed

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
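
    The note's central point can be reproduced with a few lines of arithmetic: under a mean-variance objective such as U = mean - k * variance, raising the probability of a fixed gain (with zero probability of loss) can lower U. The utility form and numbers below are a toy illustration, not the paper's formal proof.

      # Toy illustration: a mean-variance objective U = mean - k * variance can
      # rank a *higher* probability of a fixed gain below a lower one.
      GAIN = 10.0   # fixed gain; no loss is possible
      K = 1.0       # illustrative risk-aversion weight on variance

      def mean_variance_utility(p):
          mean = p * GAIN
          variance = p * (1.0 - p) * GAIN ** 2
          return mean - K * variance

      for p in (0.10, 0.25, 0.40):
          print(f"P(gain) = {p:.2f}  ->  U = {mean_variance_utility(p):+.2f}")
      # U falls as P(gain) rises from 0.10 to 0.40, even though each later
      # prospect stochastically dominates the earlier ones; all are also ranked
      # below the certain status quo (U = 0) despite offering no chance of loss.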

  11. Diesel engine exhaust and lung cancer mortality: time-related factors in exposure and risk.

    PubMed

    Moolgavkar, Suresh H; Chang, Ellen T; Luebeck, Georg; Lau, Edmund C; Watson, Heather N; Crump, Kenny S; Boffetta, Paolo; McClellan, Roger

    2015-04-01

    To develop a quantitative exposure-response relationship between concentrations and durations of inhaled diesel engine exhaust (DEE) and increases in lung cancer risks, we examined the role of temporal factors in modifying the estimated effects of exposure to DEE on lung cancer mortality and characterized risk by mine type in the Diesel Exhaust in Miners Study (DEMS) cohort, which followed 12,315 workers through December 1997. We analyzed the data using parametric functions based on concepts of multistage carcinogenesis to directly estimate the hazard functions associated with estimated exposure to a surrogate marker of DEE, respirable elemental carbon (REC). The REC-associated risk of lung cancer mortality in DEMS is driven by increased risk in only one of four mine types (limestone), with statistically significant heterogeneity by mine type and no significant exposure-response relationship after removal of the limestone mine workers. Temporal factors, such as duration of exposure, play an important role in determining the risk of lung cancer mortality following exposure to REC, and the relative risk declines after exposure to REC stops. There is evidence of effect modification of risk by attained age. The modifying impact of temporal factors and effect modification by age should be addressed in any quantitative risk assessment (QRA) of DEE. Until there is a better understanding of why the risk appears to be confined to a single mine type, data from DEMS cannot reliably be used for QRA. © 2015 Society for Risk Analysis.

  12. C-17 Centerlining - Analysis of Paratrooper Trajectory

    DTIC Science & Technology

    2005-06-01

    Excerpted references include: Kutner, Michael, C. Nachtsheim, and J. Neter. Applied Linear Regression Models. 4th ed. McGraw Hill, 2004; Kuntavanish, Mark. Program Engineer for C-17 System Program Office. Briefing to US Army; and Airdrop Risk Assessment Using Bootstrap Sampling, MS AFIT Thesis AFIT/GOR/ENS/96D-01, Dec 1996.

  13. Systems engineering approach to environmental risk management: A case study of depleted uranium at test area C-64, Eglin Air Force Base, Florida. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, C.M.; Fortmann, K.M.; Hill, S.W.

    1994-12-01

    Environmental restoration is an area of concern in an environmentally conscious world. Much effort is required to clean up the environment and promote environmentally sound methods for managing current land use. In light of the public consciousness of the latter topic, the United States Air Force must also take an active role in addressing these environmental issues with respect to current and future USAF base land use. This thesis uses the systems engineering technique to assess human health risks and to evaluate risk management options with respect to depleted uranium contamination in the sampled region of Test Area (TA) C-64 at Eglin Air Force Base (AFB). The research combines the disciplines of environmental data collection, DU soil concentration distribution modeling, ground water modeling, particle resuspension modeling, exposure assessment, health hazard assessment, and uncertainty analysis to characterize the test area. These disciplines are required to quantify current and future health risks, as well as to recommend cost effective ways to increase confidence in health risk assessment and remediation options.

  14. NASA System Safety Handbook. Volume 2: System Safety Concepts, Guidelines, and Implementation Examples

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Benjamin, Allan; Everett, Christopher; Feather, Martin; Rutledge, Peter; Sen, Dev; Youngblood, Robert

    2015-01-01

    This is the second of two volumes that collectively comprise the NASA System Safety Handbook. Volume 1 (NASA/SP-2010-580) was prepared for the purpose of presenting the overall framework for System Safety and for providing the general concepts needed to implement the framework. Volume 2 provides guidance for implementing these concepts as an integral part of systems engineering and risk management. This guidance addresses the following functional areas: 1. The development of objectives that collectively define adequate safety for a system, and the safety requirements derived from these objectives that are levied on the system. 2. The conduct of system safety activities, performed to meet the safety requirements, with specific emphasis on the conduct of integrated safety analysis (ISA) as a fundamental means by which systems engineering and risk management decisions are risk-informed. 3. The development of a risk-informed safety case (RISC) at major milestone reviews to argue that the system safety objectives are satisfied (and therefore that the system is adequately safe). 4. The evaluation of the RISC (including supporting evidence) using a defined set of evaluation criteria, to assess the veracity of the claims made therein in order to support risk acceptance decisions.

  15. A multicriteria decision analysis model and risk assessment framework for carbon capture and storage.

    PubMed

    Humphries Choptiany, John Michael; Pelot, Ronald

    2014-09-01

    Multicriteria decision analysis (MCDA) has been applied to various energy problems to incorporate a variety of qualitative and quantitative criteria, usually spanning environmental, social, engineering, and economic fields. MCDA and associated methods such as life-cycle assessments and cost-benefit analysis can also include risk analysis to address uncertainties in criteria estimates. One technology now being assessed to help mitigate climate change is carbon capture and storage (CCS). CCS is a new process that captures CO2 emissions from fossil-fueled power plants and injects them into geological reservoirs for storage. It presents a unique challenge to decisionmakers (DMs) due to its technical complexity, range of environmental, social, and economic impacts, variety of stakeholders, and long time spans. The authors have developed a risk assessment model using a MCDA approach for CCS decisions such as selecting between CO2 storage locations and choosing among different mitigation actions for reducing risks. The model includes uncertainty measures for several factors, utility curve representations of all variables, Monte Carlo simulation, and sensitivity analysis. This article uses a CCS scenario example to demonstrate the development and application of the model based on data derived from published articles and publicly available sources. The model allows high-level DMs to better understand project risks and the tradeoffs inherent in modern, complex energy decisions. © 2014 Society for Risk Analysis.
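
    As a rough illustration of the kind of model described above, the sketch below combines weighted criterion utilities with Monte Carlo sampling of uncertain scores. The criteria, weights, alternatives, and distributions are hypothetical placeholders rather than values from the article, and the aggregation is a plain weighted sum rather than the authors' full utility-curve formulation.

```python
# Minimal weighted-utility MCDA sketch with Monte Carlo uncertainty (all names,
# weights, and distributions are hypothetical, not taken from the article).
import random

CRITERIA_WEIGHTS = {"environmental": 0.4, "economic": 0.35, "social": 0.25}

# Each alternative carries a (mean, std) utility estimate in [0, 1] per criterion.
ALTERNATIVES = {
    "storage_site_A": {"environmental": (0.7, 0.10), "economic": (0.6, 0.15), "social": (0.5, 0.20)},
    "storage_site_B": {"environmental": (0.5, 0.05), "economic": (0.8, 0.10), "social": (0.6, 0.10)},
}

def simulate(name, n_samples=10_000):
    """Monte Carlo distribution of the weighted overall utility for one alternative."""
    scores = []
    for _ in range(n_samples):
        total = 0.0
        for criterion, weight in CRITERIA_WEIGHTS.items():
            mean, std = ALTERNATIVES[name][criterion]
            total += weight * min(1.0, max(0.0, random.gauss(mean, std)))
        scores.append(total)
    return sorted(scores)

for name in ALTERNATIVES:
    s = simulate(name)
    mean = sum(s) / len(s)
    p05, p95 = s[int(0.05 * len(s))], s[int(0.95 * len(s))]
    print(f"{name}: mean utility {mean:.3f}, 90% interval ({p05:.3f}, {p95:.3f})")
```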

  16. Engineering design: A powerful influence on the business success on manufacturing industry

    NASA Astrophysics Data System (ADS)

    Coplin, John F.

    1990-08-01

    Engineering design, one of the most powerful forces in producing a package which matches market need, is discussed. It is essentially a detailed planning process backed by analysis and demonstration. The need for innovation to achieve competitive edge and profitability is considered. Innovation contains risk which must be controlled before substantial investment is made. The high rate of change of technology gives rise to the need for good training and retraining. Conclusions are reached on benefits that offset costs at the time those costs are incurred.

  17. The social nature of engineering and its implications for risk taking.

    PubMed

    Ross, Allison; Athanassoulis, Nafsika

    2010-03-01

    Making decisions with an, often significant, element of risk seems to be an integral part of many of the projects of the diverse profession of engineering. Whether it be decisions about the design of products, manufacturing processes, public works, or developing technological solutions to environmental, social and global problems, risk taking seems inherent to the profession. Despite this, little attention has been paid to the topic and specifically to how our understanding of engineering as a distinctive profession might affect how we should make decisions under risk. This paper seeks to remedy this, firstly by offering a nuanced account of risk and then by considering how specific claims about our understanding of engineering as a social profession, with corresponding social values and obligations, should inform how we make decisions about risk in this context.

  18. Estimated Flood Discharges and Map of Flood-Inundated Areas for Omaha Creek, near Homer, Nebraska, 2005

    USGS Publications Warehouse

    Dietsch, Benjamin J.; Wilson, Richard C.; Strauch, Kellan R.

    2008-01-01

    Repeated flooding of Omaha Creek has caused damage in the Village of Homer. Long-term degradation and bridge scouring have changed substantially the channel characteristics of Omaha Creek. Flood-plain managers, planners, homeowners, and others rely on maps to identify areas at risk of being inundated. To identify areas at risk for inundation by a flood having a 1-percent annual probability, maps were created using topographic data and water-surface elevations resulting from hydrologic and hydraulic analyses. The hydrologic analysis for the Omaha Creek study area was performed using historical peak flows obtained from the U.S. Geological Survey streamflow gage (station number 06601000). Flood frequency and magnitude were estimated using the PEAKFQ Log-Pearson Type III analysis software. The U.S. Army Corps of Engineers' Hydrologic Engineering Center River Analysis System, version 3.1.3, software was used to simulate the water-surface elevation for flood events. The calibrated model was used to compute streamflow-gage stages and inundation elevations for the discharges corresponding to floods of selected probabilities. Results of the hydrologic and hydraulic analyses indicated that flood inundation elevations are substantially lower than from a previous study.
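
    The flood-frequency step above rests on fitting a Log-Pearson Type III distribution to the logarithms of the annual peak flows. The sketch below shows only that core idea, with hypothetical peak-flow values; the PEAKFQ/Bulletin 17 procedure adds regional skew weighting, low-outlier screening, and other refinements that are not reproduced here.

```python
# Bare-bones Log-Pearson Type III estimate of a 1-percent annual exceedance flood.
# The peak flows below are hypothetical, not the Omaha Creek gage record.
import numpy as np
from scipy import stats

annual_peaks_cfs = np.array([820, 1350, 610, 2400, 980, 1760, 450, 3100,
                             1240, 890, 2050, 700, 1580, 1120, 2700])

log_q = np.log10(annual_peaks_cfs)
station_skew = stats.skew(log_q, bias=False)
lp3 = stats.pearson3(station_skew, loc=log_q.mean(), scale=log_q.std(ddof=1))

q100 = 10 ** lp3.ppf(0.99)   # flow exceeded with 1-percent annual probability
print(f"Estimated 1-percent-chance flood: {q100:,.0f} ft^3/s")
```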

  19. Improving tsunami resiliency: California's Tsunami Policy Working Group

    USGS Publications Warehouse

    Real, Charles R.; Johnson, Laurie; Jones, Lucile M.; Ross, Stephanie L.; Kontar, Y.A.; Santiago-Fandiño, V.; Takahashi, T.

    2014-01-01

    California has established a Tsunami Policy Working Group to facilitate development of policy recommendations for tsunami hazard mitigation. The Tsunami Policy Working Group brings together government and industry specialists from diverse fields including tsunami, seismic, and flood hazards, local and regional planning, structural engineering, natural hazard policy, and coastal engineering. The group is acting on findings from two parallel efforts: The USGS SAFRR Tsunami Scenario project, a comprehensive impact analysis of a large credible tsunami originating from an M 9.1 earthquake in the Aleutian Islands Subduction Zone striking California’s coastline, and the State’s Tsunami Preparedness and Hazard Mitigation Program. The unique dual-track approach provides a comprehensive assessment of vulnerability and risk within which the policy group can identify gaps and issues in current tsunami hazard mitigation and risk reduction, make recommendations that will help eliminate these impediments, and provide advice that will assist development and implementation of effective tsunami hazard risk communication products to improve community resiliency.

  20. Propellant injection systems and processes

    NASA Technical Reports Server (NTRS)

    Ito, Jackson I.

    1995-01-01

    The previous 'Art of Injector Design' is maturing and merging with the more systematic 'Science of Combustion Device Analysis.' This technology can be based upon observation, correlation, experimentation and ultimately analytical modeling based upon basic engineering principles. This methodology is more systematic and far superior to the historical injector design process of 'Trial and Error' or blindly 'Copying Past Successes.' The benefit of such an approach is to be able to rank candidate design concepts for relative probability of success or technical risk in all the important combustion device design requirements and combustion process development risk categories before committing to an engine development program. Even if a single analytical design concept cannot be developed to predict satisfying all requirements simultaneously, a series of risk mitigation key enabling technologies can be identified for early resolution. Lower cost subscale or laboratory experimentation to demonstrate proof of principle, critical instrumentation requirements, and design discriminating test plans can be developed based on the physical insight provided by these analyses.

  1. Neutronics Design of a Thorium-Fueled Fission Blanket for LIFE (Laser Inertial Fusion-based Energy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powers, J; Abbott, R; Fratoni, M

    The Laser Inertial Fusion-based Energy (LIFE) project at LLNL includes development of hybrid fusion-fission systems for energy generation. These hybrid LIFE engines use high-energy neutrons from laser-based inertial confinement fusion to drive a subcritical blanket of fission fuel that surrounds the fusion chamber. The fission blanket contains TRISO fuel particles packed into pebbles in a flowing bed geometry cooled by a molten salt (flibe). LIFE engines using a thorium fuel cycle provide potential improvements in overall fuel cycle performance and resource utilization compared to using depleted uranium (DU) and may minimize waste repository and proliferation concerns. A preliminary engine design with an initial loading of 40 metric tons of thorium can maintain a power level of 2000 MW(th) for about 55 years, at which point the fuel reaches an average burnup level of about 75% FIMA. Acceptable performance was achieved without using any zero-flux environment 'cooling periods' to allow 233Pa to decay to 233U; thorium undergoes constant irradiation in this LIFE engine design to minimize proliferation risks and fuel inventory. Vast reductions in end-of-life (EOL) transuranic (TRU) inventories compared to those produced by a similar uranium system suggest reduced proliferation risks. Decay heat generation in discharge fuel appears lower for a thorium LIFE engine than a DU engine but differences in radioactive ingestion hazard are less conclusive. Future efforts on thorium-fueled LIFE fission blanket development will include design optimization, fuel performance analysis, and further waste disposal and nonproliferation analyses.

  2. Determination of viable legionellae in engineered water systems: Do we find what we are looking for?

    PubMed Central

    Kirschner, Alexander K.T.

    2016-01-01

    In developed countries, legionellae are one of the most important water-based bacterial pathogens caused by management failure of engineered water systems. For routine surveillance of legionellae in engineered water systems and outbreak investigations, cultivation-based standard techniques are currently applied. However, in many cases culture-negative results are obtained despite the presence of viable legionellae, and clinical cases of legionellosis cannot be traced back to their respective contaminated water source. Among the various explanations for these discrepancies, the presence of viable but non-culturable (VBNC) Legionella cells has received increased attention in recent discussions and scientific literature. Alternative culture-independent methods to detect and quantify legionellae have been proposed in order to complement or even substitute the culture method in the future. Such methods should detect VBNC Legionella cells and provide a more comprehensive picture of the presence of legionellae in engineered water systems. However, it is still unclear whether and to what extent these VBNC legionellae are hazardous to human health. Current risk assessment models to predict the risk of legionellosis from Legionella concentrations in the investigated water systems contain many uncertainties and are mainly based on culture-based enumeration. If VBNC legionellae should be considered in future standard analysis, quantitative risk assessment models including VBNC legionellae must be proven to result in better estimates of human health risk than models based on cultivation alone. This review critically evaluates current methods to determine legionellae in the VBNC state, their potential to complement the standard culture-based method in the near future, and summarizes current knowledge on the threat that VBNC legionellae may pose to human health. PMID:26928563
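
    The quantitative risk assessment models mentioned above typically convert an inhaled dose into an infection probability through a dose-response function. The sketch below uses a generic exponential dose-response form with purely illustrative numbers (the r parameter, exposure volume, and VBNC multiplier are assumptions, not values from the review) simply to show how counting VBNC cells would shift a model-based risk estimate.

```python
# Generic exponential dose-response sketch; every number here is an illustrative
# placeholder, not a value endorsed by the review above.
import math

def p_infection(dose, r):
    """Exponential dose-response model: P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

concentration_per_l = 1_000    # hypothetical culturable Legionella per litre
inhaled_volume_l = 1e-4        # hypothetical inhaled water volume per exposure event
vbnc_multiplier = 10           # hypothetical factor if VBNC cells were also counted
r = 0.06                       # illustrative dose-response parameter

dose_culturable = concentration_per_l * inhaled_volume_l
dose_with_vbnc = dose_culturable * vbnc_multiplier
print(f"P(infection), culture-based count: {p_infection(dose_culturable, r):.4f}")
print(f"P(infection), including VBNC:      {p_infection(dose_with_vbnc, r):.4f}")
```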

  3. Systems Engineering Approach to Technology Integration for NASA's 2nd Generation Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Thomas, Dale; Smith, Charles; Thomas, Leann; Kittredge, Sheryl

    2002-01-01

    The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturating of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. To best direct technology development decisions, analytical models are employed to accurately predict the benefits of each technology toward potential space transportation architectures as well as the risks associated with each technology. Rigorous systems analysis provides the foundation for assessing progress toward safety and cost goals. The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.

  5. Determination of viable legionellae in engineered water systems: Do we find what we are looking for?

    PubMed

    Kirschner, Alexander K T

    2016-04-15

    In developed countries, legionellae are one of the most important water-based bacterial pathogens caused by management failure of engineered water systems. For routine surveillance of legionellae in engineered water systems and outbreak investigations, cultivation-based standard techniques are currently applied. However, in many cases culture-negative results are obtained despite the presence of viable legionellae, and clinical cases of legionellosis cannot be traced back to their respective contaminated water source. Among the various explanations for these discrepancies, the presence of viable but non-culturable (VBNC) Legionella cells has received increased attention in recent discussions and scientific literature. Alternative culture-independent methods to detect and quantify legionellae have been proposed in order to complement or even substitute the culture method in the future. Such methods should detect VBNC Legionella cells and provide a more comprehensive picture of the presence of legionellae in engineered water systems. However, it is still unclear whether and to what extent these VBNC legionellae are hazardous to human health. Current risk assessment models to predict the risk of legionellosis from Legionella concentrations in the investigated water systems contain many uncertainties and are mainly based on culture-based enumeration. If VBNC legionellae should be considered in future standard analysis, quantitative risk assessment models including VBNC legionellae must be proven to result in better estimates of human health risk than models based on cultivation alone. This review critically evaluates current methods to determine legionellae in the VBNC state, their potential to complement the standard culture-based method in the near future, and summarizes current knowledge on the threat that VBNC legionellae may pose to human health. Copyright © 2016 The Author. Published by Elsevier Ltd.. All rights reserved.

  6. Conceptual modeling for identification of worst case conditions in environmental risk assessment of nanomaterials using nZVI and C60 as case studies.

    PubMed

    Grieger, Khara D; Hansen, Steffen F; Sørensen, Peter B; Baun, Anders

    2011-09-01

    Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C(60) in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C(60) in these applications as well as demonstrate the use of worst-case conditions to optimize future research efforts for other nanomaterials. Through the application of the WCD model, we find that the most probable worst-case conditions for both case studies include i) active uptake mechanisms, ii) accumulation in organisms, iii) ecotoxicological response mechanisms such as reactive oxygen species (ROS) production and cell membrane damage or disruption, iv) surface properties of nZVI and C(60), and v) acute exposure tolerance of organisms. Additional estimates of worst-case conditions for C(60) also include the physical location of C(60) in the environment from surface run-off, cellular exposure routes for heterotrophic organisms, and the presence of light to amplify adverse effects. Based on results of this analysis, we recommend the prioritization of research for the selected applications within the following areas: organism active uptake ability of nZVI and C(60) and ecotoxicological response end-points and response mechanisms including ROS production and cell membrane damage, full nanomaterial characterization taking into account detailed information on nanomaterial surface properties, and investigations of dose-response relationships for a variety of organisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Factors associated with health-related quality of life among operating engineers.

    PubMed

    Choi, Seung Hee; Redman, Richard W; Terrell, Jeffrey E; Pohl, Joanne M; Duffy, Sonia A

    2012-11-01

    Because health-related quality of life among blue-collar workers has not been well studied, the purpose of this study was to determine factors associated with health-related quality of life among Operating Engineers. With cross-sectional data from a convenience sample of 498 Operating Engineers, personal and health behavioral factors associated with health-related quality of life were examined. Multivariate linear regression analysis revealed that personal factors (older age, being married, more medical comorbidities, and depression) and behavioral factors (smoking, low fruit and vegetable intake, low physical activity, high body mass index, and low sleep quality) were associated with poor health-related quality of life. Operating Engineers are at risk for poor health-related quality of life. Underlying medical comorbidities and depression should be well managed. Worksite wellness programs addressing poor health behaviors may be beneficial.

  8. Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.

    PubMed

    Haimes, Yacov Y

    2018-01-01

    The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.

  9. High Stability Engine Control (HISTEC): Flight Demonstration Results

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Southwick, Robert D.; Gallops, George W.; Orme, John S.

    1998-01-01

    Future aircraft turbine engines, both commercial and military, must be able to accommodate expected increased levels of steady-state and dynamic engine-face distortion. The current approach of incorporating sufficient design stall margin to tolerate these increased levels of distortion would significantly reduce performance. The High Stability Engine Control (HISTEC) program has developed technologies for an advanced, integrated engine control system that uses measurement- based estimates of distortion to enhance engine stability. The resulting distortion tolerant control reduces the required design stall margin, with a corresponding increase in performance and/or decrease in fuel burn. The HISTEC concept was successfully flight demonstrated on the F-15 ACTIVE aircraft during the summer of 1997. The flight demonstration was planned and carried out in two parts, the first to show distortion estimation, and the second to show distortion accommodation. Post-flight analysis shows that the HISTEC technologies are able to successfully estimate and accommodate distortion, transiently setting the stall margin requirement on-line and in real-time. Flight demonstration of the HISTEC technologies has significantly reduced the risk of transitioning the technology to tactical and commercial engines.

  10. Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications

    DTIC Science & Technology

    1992-09-01

    [Indexed snippet; abstract not available:] ...MIL-STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD-91. FMEA utilizes inductive logic in a "bottom up" approach; this contrasts with a Fault Tree Analysis (FTA), which utilizes deductive logic in a "top down" approach. In FTA, a system failure is assumed and traced down to its contributing causes. FTA is a graphical method of risk analysis used to identify critical failure modes within a system or equipment, utilizing a pictorial approach.
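
    To make the FMEA/FTA contrast in the snippet concrete, the sketch below evaluates the top-event probability of a small, entirely hypothetical fault tree with independent basic events, which is the "top down" deductive calculation that FTA supports.

```python
# Minimal fault-tree evaluation (hypothetical tree, independent basic events):
# TOP = (pump_fails OR valve_stuck) AND control_power_lost
def gate_or(*probs):
    """P(at least one of several independent events)."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def gate_and(*probs):
    """P(all of several independent events)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

p_pump, p_valve, p_power = 1e-3, 5e-4, 2e-2
p_top = gate_and(gate_or(p_pump, p_valve), p_power)
print(f"Top-event probability: {p_top:.2e}")   # about 3.0e-05 with these numbers
```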

  11. Failure Engineering Study and Accelerated Stress Test Results for the Mars Global Surveyor Spacecraft's Power Shunt Assemblies

    NASA Technical Reports Server (NTRS)

    Gibbel, Mark; Larson, Timothy

    2000-01-01

    An Engineering-of-Failure approach to designing and executing an accelerated product qualification test was performed to support a risk assessment of a "work-around" necessitated by an on-orbit failure of another piece of hardware on the Mars Global Surveyor spacecraft. The proposed work-around involved exceeding the previous qualification experience both in terms of extreme cold exposure level and in terms of demonstrated low cycle fatigue life for the power shunt assemblies. An analysis was performed to identify potential failure sites, modes and associated failure mechanisms consistent with the new use conditions. A test was then designed and executed which accelerated the failure mechanisms identified by analysis. Verification of the resulting failure mechanism concluded the effort.
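
    Accelerated thermal-cycling tests for low-cycle fatigue are commonly sized with a Coffin-Manson-type acceleration factor. The sketch below shows that generic sizing calculation; the temperature swings, exponent, and cycle counts are hypothetical and are not the parameters of the Mars Global Surveyor shunt-assembly test described above.

```python
# Generic Coffin-Manson-type thermal-cycling acceleration sketch (all numbers
# hypothetical; not the MGS power shunt assembly test parameters).
def coffin_manson_af(delta_t_test, delta_t_use, exponent):
    """Acceleration factor AF = (dT_test / dT_use) ** m for thermal-cycle fatigue."""
    return (delta_t_test / delta_t_use) ** exponent

delta_t_use = 80.0       # assumed on-orbit thermal swing, deg C
delta_t_test = 160.0     # assumed chamber test swing, deg C
m = 2.0                  # assumed fatigue exponent (material/joint dependent)

af = coffin_manson_af(delta_t_test, delta_t_use, m)
mission_cycles = 500     # assumed number of on-orbit cycles to demonstrate
test_cycles = mission_cycles / af
print(f"AF = {af:.1f}: roughly {test_cycles:.0f} chamber cycles cover {mission_cycles} mission cycles")
```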

  12. Environmental risk assessment of a genetically-engineered microorganism: Erwinia carotovora

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orvos, D.R.

    1989-01-01

    Environmental use of genetically-engineered microorganisms (GEMs) has raised concerns over potential ecological impact. Development of microcosm systems useful in preliminary testing for risk assessment will provide useful information for predicting potential structural, functional, and genetic effects of GEM release. This study was executed to develop techniques that may be useful in risk assessment and microbial ecology, to ascertain which parameters are useful in determining risk and to predict risk from releasing an engineered strain of Erwinia carotovora. A terrestrial microcosm system for use in GEM risk assessment studies was developed for use in assessing alterations of microbial structure and function that may be caused by introducing the engineered strain of E. carotovora. This strain is being developed for use as a biological control agent for plant soft rot. Parameters that were monitored included survival and intraspecific competition of E. carotovora, structural effects upon both total bacterial populations and numbers of selected bacterial genera, effects upon activities of dehydrogenase and alkaline phosphatase, effects upon soil nutrients, and potential for gene transfer into or out of the engineered strain.

  13. Continuous Risk Management: A NASA Program Initiative

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore F.; Rosenberg, Linda

    1999-01-01

    NPG 7120.5A, "NASA Program and Project Management Processes and Requirements" enacted in April, 1998, requires that "The program or project manager shall apply risk management principles..." The Software Assurance Technology Center (SATC) at NASA GSFC has been tasked with the responsibility for developing and teaching a systems level course for risk management that provides information on how to comply with this edict. The course was developed in conjunction with the Software Engineering Institute at Carnegie Mellon University, then tailored to the NASA systems community. This presentation will briefly discuss the six functions for risk management: (1) Identify the risks in a specific format; (2) Analyze the risk probability, impact/severity, and timeframe; (3) Plan the approach; (4) Track the risk through data compilation and analysis; (5) Control and monitor the risk; (6) Communicate and document the process and decisions.
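
    The six CRM functions map naturally onto a small risk-register data structure. The sketch below is a toy illustration of the identify/analyze/plan/track flow; the fields and the 5x5 probability-impact scoring are assumptions made for illustration, not the SATC course material.

```python
# Toy risk register following the identify/analyze/plan/track/control steps
# described above (fields and 1-5 scoring scheme are illustrative only).
from dataclasses import dataclass, field

@dataclass
class Risk:
    statement: str            # identified risk, written in a consistent format
    probability: int          # 1 (remote) .. 5 (near certain)
    impact: int               # 1 (minimal) .. 5 (catastrophic)
    timeframe: str            # e.g. "near", "mid", "far"
    mitigation: str = ""      # planned approach
    status_log: list = field(default_factory=list)   # tracking entries over time

    @property
    def score(self) -> int:
        return self.probability * self.impact

register = [
    Risk("Battery cell supplier slips delivery", 4, 3, "near", "qualify second source"),
    Risk("Thermal margin inadequate at perihelion", 2, 5, "mid", "add radiator area"),
]

# "Analyze": rank by score so the highest-exposure risks get attention first.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:2d}  {r.statement} [{r.timeframe}] -> {r.mitigation}")
```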

  14. Quantitative risk analysis of oil storage facilities in seismic areas.

    PubMed

    Fabbrocino, Giovanni; Iervolino, Iunio; Orlando, Francesca; Salzano, Ernesto

    2005-08-31

    Quantitative risk analysis (QRA) of industrial facilities has to take into account multiple hazards threatening critical equipment. Nevertheless, engineering procedures able to evaluate quantitatively the effect of seismic action are not well established. Indeed, relevant industrial accidents may be triggered by loss of containment following ground shaking or other relevant natural hazards, either directly or through cascade effects ('domino effects'). The issue of integrating structural seismic risk into quantitative probabilistic seismic risk analysis (QpsRA) is addressed in this paper by a representative study case regarding an oil storage plant with a number of atmospheric steel tanks containing flammable substances. Empirical seismic fragility curves and probit functions, properly defined both for building-like and non building-like industrial components, have been crossed with outcomes of probabilistic seismic hazard analysis (PSHA) for a test site located in south Italy. Once the seismic failure probabilities have been quantified, consequence analysis has been performed for those events which may be triggered by the loss of containment following seismic action. Results are combined by means of a specific developed code in terms of local risk contour plots, i.e. the contour line for the probability of fatal injures at any point (x, y) in the analysed area. Finally, a comparison with QRA obtained by considering only process-related top events is reported for reference.
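
    The central quantitative step described above, crossing fragility curves with PSHA output, amounts to integrating the conditional failure probability over the occurrence rate of ground-motion levels. The sketch below shows that convolution with a hypothetical power-law hazard curve and a hypothetical lognormal tank fragility; none of the parameter values come from the paper.

```python
# Hedged sketch: combine a seismic hazard curve with a lognormal fragility curve
# to obtain an annual failure rate. All parameter values are hypothetical.
import numpy as np
from scipy.stats import norm

pga = np.linspace(0.01, 2.0, 400)                    # peak ground acceleration, g

# Hypothetical hazard curve: annual rate of exceeding each PGA level.
exceedance_rate = 1e-2 * (pga / 0.1) ** -2.5

# Hypothetical lognormal fragility for an atmospheric steel tank.
median_capacity_g, beta = 0.45, 0.5
p_fail_given_pga = norm.cdf(np.log(pga / median_capacity_g) / beta)

# Annual failure rate = integral of P(fail | pga) * |d(rate)/d(pga)| d(pga)
occurrence_density = -np.gradient(exceedance_rate, pga)
integrand = p_fail_given_pga * occurrence_density
annual_failure_rate = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(pga)))
print(f"Annual failure rate: {annual_failure_rate:.2e} per year")
```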

  15. Long-term health experience of jet engine manufacturing workers: V. Issues with the analysis of non-malignant central nervous system neoplasms.

    PubMed

    Buchanich, Jeanine M; Youk, Ada O; Marsh, Gary M; Kennedy, Kathleen J; Lacey, Steven E; Hancock, Roger P; Esmen, Nurtan A; Cunningham, Michael A; Leiberman, Frank S; Fleissner, Mary Lou

    2011-01-01

    We attempted to examine non-malignant central nervous system (CNS) neoplasms incidence rates for workers at 8 jet engine manufacturing facilities in Connecticut. The objective of this manuscript is to describe difficulties encountered regarding these analyses to aid future studies. We traced the cohort for incident cases of CNS neoplasms in states where 95% of deaths in the total cohort occurred. We used external and internal analyses in an attempt to obtain the true risk of non-malignant CNS in the cohort. Because these analyses were limited by data constraints, we conducted sensitivity analyses, including using state driver's license data to adjust person-year stop dates to help minimize underascertainment and more accurately determine cohort risk estimates. We identified 3 unanticipated challenges: case identification, determination of population-based cancer incidence rates, and handling of case underascertainment. These factors precluded an accurate assessment of non-malignant CNS neoplasm incidence risks in this occupational epidemiology study. The relatively recent (2004) mandate of capturing non-malignant CNS tumor data at the state level means that, in time, it may be possible to conduct external analyses of these data. Meanwhile, similar occupational epidemiology studies may be limited to descriptive analysis of the non-malignant CNS case characteristics.

  16. An economic analysis of adherence engineering to improve use of best practices during central line maintenance procedures.

    PubMed

    Nelson, Richard E; Angelovic, Aaron W; Nelson, Scott D; Gleed, Jeremy R; Drews, Frank A

    2015-05-01

    Adherence engineering applies human factors principles to examine non-adherence within a specific task and to guide the development of materials or equipment to increase protocol adherence and reduce human error. Central line maintenance (CLM) for intensive care unit (ICU) patients is a task through which error or non-adherence to protocols can cause central line-associated bloodstream infections (CLABSIs). We conducted an economic analysis of an adherence engineering CLM kit designed to improve the CLM task and reduce the risk of CLABSI. We constructed a Markov model to compare the cost-effectiveness of the CLM kit, which contains each of the 27 items necessary for performing the CLM procedure, compared with the standard care procedure for CLM, in which each item for dressing maintenance is gathered separately. We estimated the model using the cost of CLABSI overall ($45,685) as well as the excess LOS (6.9 excess ICU days, 3.5 excess general ward days). Assuming the CLM kit reduces the risk of CLABSI by 100% and 50%, this strategy was less costly (cost savings between $306 and $860) and more effective (between 0.05 and 0.13 more quality-adjusted life-years) compared with not using the pre-packaged kit. We identified threshold values for the effectiveness of the kit in reducing CLABSI for which the kit strategy was no longer less costly. An adherence engineering-based intervention to streamline the CLM process can improve patient outcomes and lower costs. Patient safety can be improved by adopting new approaches that are based on human factors principles.
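
    A stripped-down expected-value version of the comparison described above is sketched here. The CLABSI cost is the figure quoted in the abstract; the baseline infection risk, kit cost, and QALY loss per infection are hypothetical placeholders rather than the authors' Markov-model inputs, so the output only illustrates why the kit can be cost-saving when it meaningfully reduces risk.

```python
# Simple expected-cost comparison in the spirit of the analysis above. Only the
# CLABSI cost comes from the abstract; the other inputs are hypothetical.
CLABSI_COST = 45_685.0         # cost per CLABSI (from the abstract)
BASELINE_RISK = 0.02           # hypothetical CLABSI probability per line, standard care
KIT_COST = 15.0                # hypothetical incremental cost of the pre-packaged kit
QALY_LOSS_PER_CLABSI = 0.1     # hypothetical quality-adjusted life-year loss

def compare(risk_reduction):
    risk_with_kit = BASELINE_RISK * (1.0 - risk_reduction)
    cost_standard = BASELINE_RISK * CLABSI_COST
    cost_kit = KIT_COST + risk_with_kit * CLABSI_COST
    qalys_gained = (BASELINE_RISK - risk_with_kit) * QALY_LOSS_PER_CLABSI
    return cost_standard - cost_kit, qalys_gained

for rr in (1.0, 0.5, 0.25):
    savings, qalys = compare(rr)
    verdict = "dominant (cheaper and better)" if savings > 0 else "costs more"
    print(f"risk reduction {rr:.0%}: saves ${savings:,.2f}, gains {qalys:.4f} QALYs -> {verdict}")
```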

  17. Tacit Knowledge Capture and the Brain-Drain at Electrical Utilities

    NASA Astrophysics Data System (ADS)

    Perjanik, Nicholas Steven

    As a consequence of an aging workforce, electric utilities are at risk of losing their most experienced and knowledgeable electrical engineers. In this research, the problem was a lack of understanding of what electric utilities were doing to capture the tacit knowledge or know-how of these engineers. The purpose of this qualitative research study was to explore the tacit knowledge capture strategies currently used in the industry by conducting a case study of 7 U.S. electrical utilities that have demonstrated an industry commitment to improving operational standards. The research question addressed the implemented strategies to capture the tacit knowledge of retiring electrical engineers and technical personnel. The research methodology involved a qualitative embedded case study. The theories used in this study included knowledge creation theory, resource-based theory, and organizational learning theory. Data were collected through one time interviews of a senior electrical engineer or technician within each utility and a workforce planning or training professional within 2 of the 7 utilities. The analysis included the use of triangulation and content analysis strategies. Ten tacit knowledge capture strategies were identified: (a) formal and informal on-boarding mentorship and apprenticeship programs, (b) formal and informal off-boarding mentorship programs, (c) formal and informal training programs, (d) using lessons learned during training sessions, (e) communities of practice, (f) technology enabled tools, (g) storytelling, (h) exit interviews, (i) rehiring of retirees as consultants, and (j) knowledge risk assessments. This research contributes to social change by offering strategies to capture the know-how needed to ensure operational continuity in the delivery of safe, reliable, and sustainable power.

  18. Pressurization, Pneumatic, and Vent Subsystems of the X-34 Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Steadman, T. E.; Brown, T. M.; Knight, K. C.; White, C. E., Jr.; Champion, R. H., Jr.

    1998-01-01

    In pressurization systems, regulators and orifices are used to control the flow of the pressurant. For the X-34 Main Propulsion System, three pressurization subsystem design configuration options were considered. In the first option, regulators were used, while in the other options, orifices were considered. In each design option, the vent/relief system must be capable of relieving the pressurant flow without allowing the tank pressure to rise above proof; therefore, impacts on the propellant tank vent system were investigated and a trade study of the pressurization system was conducted. The analysis indicated that the design option using regulators poses the least risk. A detailed transient thermal/fluid analysis of the recommended pressurization system was then performed. Helium usage, thermodynamic conditions, and overpressurization of each propellant tank were evaluated. The pneumatic and purge subsystem is used for pneumatic valve actuation, Inter-Propellant Seal purges, Engine Spin Start, and engine purges at the required interface pressures. A transient analysis of the pneumatic and purge subsystem provided helium usage and flow rates to Inter-Propellant Seal and engine interfaces. Fill analysis of the helium bottles of the pressurization and pneumatic subsystems during ground operation was performed. The required fill time and the stored
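
    Part of the regulator-versus-orifice trade above comes down to how much pressurant a fixed orifice passes while the upstream bottle pressure keeps the flow choked. The sketch below applies the standard choked compressible-flow relation to helium; the orifice size, discharge coefficient, and bottle conditions are hypothetical, not X-34 values.

```python
# Choked (sonic) orifice flow sketch for helium pressurant using the standard
# compressible-flow relation; geometry and conditions are hypothetical.
import math

GAMMA_HE = 1.667      # ratio of specific heats for helium
R_HE = 2077.0         # specific gas constant for helium, J/(kg*K)

def choked_mdot(cd, area_m2, p0_pa, t0_k, gamma=GAMMA_HE, r=R_HE):
    """mdot = Cd*A*P0*sqrt(gamma/(R*T0)) * (2/(gamma+1))**((gamma+1)/(2*(gamma-1)))"""
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area_m2 * p0_pa * math.sqrt(gamma / (r * t0_k)) * crit

diameter_m = 1.5e-3                                   # hypothetical orifice diameter
area = math.pi * diameter_m ** 2 / 4.0
mdot = choked_mdot(cd=0.8, area_m2=area, p0_pa=20.7e6, t0_k=290.0)  # ~3000 psia bottle
print(f"Helium flow through the orifice: {mdot * 1000:.1f} g/s")
```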

  19. The ecological risks of genetically engineered organisms

    NASA Astrophysics Data System (ADS)

    Wolfenbarger, Lareesa

    2001-03-01

    Highly publicized studies have suggested environmental risks of releasing genetically engineered organisms (GEOs) and have renewed concerns over the evaluation and regulation of these products in domestic and international arenas. I present an overview of the risks of GEOs and the available evidence addressing these and discuss the challenges for risk assessment. Main categories of risk include non-target effects from GEOs, emergence of new viral diseases, and the spread of invasive (weedy) characteristics. Studies have detected non-target effects in some cases but not all; however, much less information exists on other risks, in part due to a lack of conceptual knowledge. For example, general models for predicting invasiveness are not well developed for any introduced organism. The risks of GEOs appear comparable to those for any introduced species or organism, but the magnitude of the risk or the pathway of exposure to the risk can differ among introduced organisms. Therefore, assessing the risks requires a case-by-case analysis so that any differences can be identified. Challenges to assessing risks to valued ecosystems include variability in effects and ecosystem complexity. Ecosystems are a dynamic and complex network of biological and physical interactions. Introducing a new biological entity, such as a GEO, may potentially alter any of these interactions, but evaluating all of these is unrealistic. Effects on a valued ecosystem could vary greatly depending on the geographical location of the experimental site, the GEO used, the plot size of the experiment (scaling effects), and the biological and physical parameters used in the experiment. Experiments that address these sources of variability will provide the most useful information for risk assessments.

  20. Risk Management Implementation Tool

    NASA Technical Reports Server (NTRS)

    Wright, Shayla L.

    2004-01-01

    Continuous Risk Management (CRM) is a software engineering practice with processes, methods, and tools for managing risk in a project. It provides a controlled environment for practical decision making: continually assessing what could go wrong, determining which risks are important to deal with, implementing strategies to deal with those risks, and measuring the effectiveness of the implemented strategies. Continuous Risk Management provides many training workshops and courses to teach staff how to apply risk management to their various experiments and projects. The steps of the CRM process are identification, analysis, planning, tracking, and control; together with the methods and tools that go along with them, these steps make identifying and dealing with risk straightforward. The office that I worked in was the Risk Management Office (RMO). The RMO at NASA works to uphold NASA's mission of exploration and advancement of scientific knowledge and technology by defining and reducing program risk. The RMO is one of the divisions that fall under the Safety and Assurance Directorate (SAAD). I worked under Cynthia Calhoun, Flight Software Systems Engineer. My task was to develop a help screen for the Continuous Risk Management Implementation Tool (RMIT). The RMIT will be used by many NASA managers to identify, analyze, track, control, and communicate risks in their programs and projects, and will provide a means for NASA to continuously assess risks. The goals and purposes of this tool are to provide a simple means to manage risks, to be used by program and project managers throughout NASA for managing risk, and to take an aggressive approach to advertising and advocating the use of RMIT at each NASA center.

  1. Risk-Based Probabilistic Approach to Aeropropulsion System Assessment

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.

    2002-01-01

    In an era of shrinking development budgets and resources, where there is also an emphasis on reducing the product development cycle, the role of system assessment, performed in the early stages of an engine development program, becomes very critical to the successful development of new aeropropulsion systems. A reliable system assessment not only helps to identify the best propulsion system concept among several candidates, it can also identify which technologies are worth pursuing. This is particularly important for advanced aeropropulsion technology development programs, which require an enormous amount of resources. In the current practice of deterministic, or point-design, approaches, the uncertainties of design variables are either unaccounted for or accounted for by safety factors. This could often result in an assessment with unknown and unquantifiable reliability. Consequently, it would fail to provide additional insight into the risks associated with the new technologies, which are often needed by decision makers to determine the feasibility and return-on-investment of a new aircraft engine. In this work, an alternative approach based on the probabilistic method was described for a comprehensive assessment of an aeropropulsion system. The statistical approach quantifies the design uncertainties inherent in a new aeropropulsion system and their influences on engine performance. Because of this, it enhances the reliability of a system assessment. A technical assessment of a wave-rotor-enhanced gas turbine engine was performed to demonstrate the methodology. The assessment used probability distributions to account for the uncertainties that occur in component efficiencies and flows and in mechanical design variables. The approach taken in this effort was to integrate the thermodynamic cycle analysis embedded in the computer code NEPP (NASA Engine Performance Program) and the engine weight analysis embedded in the computer code WATE (Weight Analysis of Turbine Engines) with the fast probability integration technique (FPI). FPI was developed by Southwest Research Institute under contract with the NASA Glenn Research Center. The results were plotted in the form of cumulative distribution functions and sensitivity analyses and were compared with results from the traditional deterministic approach. The comparison showed that the probabilistic approach provides a more realistic and systematic way to assess an aeropropulsion system. The current work addressed the application of the probabilistic approach to assess specific fuel consumption, engine thrust, and weight. Similarly, the approach can be used to assess other aspects of aeropropulsion system performance, such as cost, acoustic noise, and emissions. Additional information is included in the original extended abstract.
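
    NEPP, WATE, and FPI are NASA and SwRI tools that are not reproduced here, but the underlying idea, propagating component-level uncertainty into a distribution of a system metric, can be sketched with plain Monte Carlo sampling through a stand-in performance function. Everything below (the toy SFC relation and the efficiency spreads) is an assumption used only to show how the probabilistic result differs from a single point-design number.

```python
# Hedged stand-in for the probabilistic assessment idea: sample uncertain component
# efficiencies, push them through a toy performance function, and look at the
# resulting distribution. This is NOT the NEPP/WATE/FPI toolchain.
import random

def toy_sfc(eta_compressor, eta_turbine):
    """Placeholder performance relation: lower component efficiency -> higher SFC."""
    return 0.55 / (eta_compressor * eta_turbine)

samples = sorted(
    toy_sfc(random.gauss(0.88, 0.01), random.gauss(0.90, 0.01))
    for _ in range(20_000)
)
mean = sum(samples) / len(samples)
p95 = samples[int(0.95 * len(samples))]

print(f"Deterministic point value: {toy_sfc(0.88, 0.90):.4f}")
print(f"Monte Carlo mean:          {mean:.4f}")
print(f"95th-percentile SFC:       {p95:.4f}")
```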

  2. Causes and risk factors for fatal accidents in non-commercial twin engine piston general aviation aircraft.

    PubMed

    Boyd, Douglas D

    2015-04-01

    Accidents in twin-engine aircraft carry a higher risk of fatality compared with single engine aircraft and constitute 9% of all general aviation accidents. The different flight profile (higher airspeed, service ceiling, increased fuel load, and aircraft yaw in engine failure) may make comparable studies on single-engine aircraft accident causes less relevant. The objective of this study was to identify the accident causes for non-commercial operations in twin engine aircraft. A NTSB accident database query for accidents in twin piston engine airplanes of 4-8 seat capacity with a maximum certified weight of 3000-8000lbs. operating under 14CFR Part 91 for the period spanning 2002 and 2012 returned 376 accidents. Accident causes and contributing factors were as per the NTSB final report categories. Total annual flight hour data for the twin engine piston aircraft fleet were obtained from the FAA. Statistical analyses employed Chi Square, Fisher's Exact and logistic regression analysis. Neither the combined fatal/non-fatal accident nor the fatal accident rate declined over the period spanning 2002-2012. Under visual weather conditions, the largest number, n=27, (27%) of fatal accidents was attributed to malfunction with a failure to follow single engine procedures representing the most common contributing factor. In degraded visibility, poor instrument approach procedures resulted in the greatest proportion of fatal crashes. Encountering thunderstorms was the most lethal of all accident causes with all occupants sustaining fatal injuries. At night, a failure to maintain obstacle/terrain clearance was the most common accident cause leading to 36% of fatal crashes. The results of logistic regression showed that operations at night (OR 3.7), off airport landings (OR 14.8) and post-impact fire (OR 7.2) all carried an excess risk of a fatal flight. This study indicates training areas that should receive increased emphasis for twin-engine training/recency. First, increased training should be provided on single engine procedures in the event of an engine failure. Second, more focus should be placed on instrument approaches and recovery from unusual aircraft attitude where visibility is degraded. Third, pilots should be made aware of appropriate speed selection for inadvertent flights in convective weather. Finally, emphasizing the importance of conducting night operations under instrument flight rules with its altitude restrictions should lead to a diminished proportion of accidents attributed to failure to maintain obstacle/terrain clearance. Copyright © 2015 Elsevier Ltd. All rights reserved.
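
    The odds ratios reported above (for example, OR 3.7 for night operations) come from a multivariable logistic regression on the NTSB accident set. As a simpler, hedged illustration of the metric itself, the sketch below computes an unadjusted odds ratio with a 95% Wald interval from a 2x2 table of hypothetical counts.

```python
# Unadjusted odds ratio with a 95% Wald interval from a 2x2 table. The counts are
# hypothetical; the article's ORs come from a multivariable logistic regression.
import math

# rows: exposure (night vs. day); columns: outcome (fatal vs. non-fatal)
a, b = 40, 60      # night:  fatal, non-fatal   (hypothetical)
c, d = 30, 170     # day:    fatal, non-fatal   (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```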

  3. Different Approaches for Ensuring Performance/Reliability of Plastic Encapsulated Microcircuits (PEMs) in Space Applications

    NASA Technical Reports Server (NTRS)

    Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.

    2000-01-01

    Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecrafts due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art devices, has become a significant portion of the job for the parts engineer. Assembling a reliable high performance electronic system, which includes COTS components, requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk involved to the total system. Currently, there are no industry standard procedures for accomplishing this risk mitigation. This paper will present the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent Laboratories. The JPL procedure involves primarily a tailored screening with accelerated stress philosophy while the APL procedure is primarily, a lot qualification procedure. Both Laboratories successfully have reduced the risk of using the particular devices for their respective systems and mission requirements.

  4. Astronaut Risk Levels During Crew Module (CM) Land Landing

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Carney, Kelly S.; Littell, Justin

    2007-01-01

    The NASA Engineering Safety Center (NESC) is investigating the merits of water and land landings for the crew exploration vehicle (CEV). The merits of these two options are being studied in terms of cost and risk to the astronauts, vehicle, support personnel, and general public. The objective of the present work is to determine the astronaut dynamic response index (DRI), which measures injury risks. Risks are determined for a range of vertical and horizontal landing velocities. A structural model of the crew module (CM) is developed and computational simulations are performed using a transient dynamic simulation analysis code (LS-DYNA) to determine acceleration profiles. Landing acceleration profiles are input in a human factors model that determines astronaut risk levels. Details of the modeling approach, the resulting accelerations, and astronaut risk levels are provided.
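
    The dynamic response index referred to above is conventionally computed by driving a single-degree-of-freedom lumped spinal model with the seat acceleration history and scaling the peak spring deflection. The sketch below integrates that oscillator for a hypothetical half-sine landing pulse using commonly cited DRI parameters (natural frequency of about 52.9 rad/s, damping ratio of about 0.224); it is not the LS-DYNA crew module model or the human-factors code used in the study.

```python
# Hedged DRI sketch: integrate x'' + 2*zeta*wn*x' + wn^2*x = a(t) and report
# DRI = wn^2 * max(x) / g for a hypothetical half-sine landing pulse.
import math

WN = 52.9       # natural frequency, rad/s (commonly cited DRI value)
ZETA = 0.224    # damping ratio (commonly cited DRI value)
G = 9.81        # m/s^2

def half_sine(t, peak_g=15.0, duration=0.05):
    """Hypothetical landing pulse: 15 g peak over a 50 ms half-sine."""
    return peak_g * G * math.sin(math.pi * t / duration) if 0.0 <= t <= duration else 0.0

def dri(accel, t_end=0.3, dt=1e-5):
    x = v = x_max = t = 0.0
    while t < t_end:
        a = accel(t) - 2.0 * ZETA * WN * v - WN ** 2 * x   # relative-deflection dynamics
        v += a * dt
        x += v * dt
        x_max = max(x_max, x)
        t += dt
    return WN ** 2 * x_max / G

print(f"DRI for the assumed pulse: {dri(half_sine):.1f}")
```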

  5. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell

    2016-11-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility.

  6. Advanced uncertainty modelling for container port risk analysis.

    PubMed

    Alyami, Hani; Yang, Zaili; Riahi, Ramin; Bonsall, Stephen; Wang, Jin

    2016-08-13

    Globalization has led to a rapid increase of container movements in seaports. Risks in seaports need to be appropriately addressed to ensure economic wealth, operational efficiency, and personnel safety. As a result, the safety performance of a Container Terminal Operational System (CTOS) plays a growing role in improving the efficiency of international trade. This paper proposes a novel method to facilitate the application of Failure Mode and Effects Analysis (FMEA) in assessing the safety performance of CTOS. The new approach is developed through incorporating a Fuzzy Rule-Based Bayesian Network (FRBN) with Evidential Reasoning (ER) in a complementary manner. The former provides a realistic and flexible method to describe input failure information for risk estimates of individual hazardous events (HEs) at the bottom level of a risk analysis hierarchy. The latter is used to aggregate HEs safety estimates collectively, allowing dynamic risk-based decision support in CTOS from a systematic perspective. The novel feature of the proposed method, compared to those in traditional port risk analysis lies in a dynamic model capable of dealing with continually changing operational conditions in ports. More importantly, a new sensitivity analysis method is developed and carried out to rank the HEs by taking into account their specific risk estimations (locally) and their Risk Influence (RI) to a port's safety system (globally). Due to its generality, the new approach can be tailored for a wide range of applications in different safety and reliability engineering and management systems, particularly when real time risk ranking is required to measure, predict, and improve the associated system safety performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul

    2013-12-01

    The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules such as, Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update have been described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment. ORMS can support in the decision making process of operators and managers in Nuclear Power Plants.

  8. Frequency and associated risk factors for neck pain among software engineers in Karachi, Pakistan.

    PubMed

    Rasim Ul Hasanat, Mohammad; Ali, Syed Shahzad; Rasheed, Abdur; Khan, Muhammad

    2017-07-01

    To determine the frequency of neck pain and its association with risk factors among software engineers. This descriptive, cross-sectional study was conducted at the Dow University of Health Sciences, Karachi, from February to March 2016, and comprised software engineers from 19 different locations. A non-probability purposive sampling technique was used to select individuals spending at least 6 hours in front of computer screens every day and having a work experience of at least 6 months. Data were collected using a self-administrable questionnaire. SPSS 21 was used for data analysis. Of the 185 participants, 49(26.5%) had neck pain at the time of data-gathering, while 136(73.5%) reported no pain. However, 119(64.32%) participants had a previous history of neck pain. Other factors like smoking, physical inactivity, history of any muscular pain and neck pain, uncomfortable workstation, work-related mental stress, and insufficient sleep at night were found to be significantly associated with current neck pain (p<0.05 each). Intensive computer users are likely to experience at least one episode of computer-associated neck pain.

  9. Human Factors Process Task Analysis Liquid Oxygen Pump Acceptance Test Procedure for the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.

    2002-01-01

    A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.
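
    The Risk Priority Number scoring referenced above is the product of severity, occurrence, and detection ratings; a minimal sketch of computing and ranking RPNs is shown below (the rows and scores are hypothetical, not the actual appendix A data).

        # RPN = severity x occurrence x detection, each typically rated 1-10
        rows = [
            {"requirement": "Verify LOX valve lineup", "severity": 9, "occurrence": 3, "detection": 4},
            {"requirement": "Read back test pressure", "severity": 7, "occurrence": 5, "detection": 2},
            {"requirement": "Confirm voice comm channel", "severity": 5, "occurrence": 4, "detection": 6},
        ]
        for r in rows:
            r["rpn"] = r["severity"] * r["occurrence"] * r["detection"]

        # Items with the highest RPN warrant the greatest attention
        for r in sorted(rows, key=lambda r: r["rpn"], reverse=True):
            print(f'{r["rpn"]:4d}  {r["requirement"]}')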

  10. Risk communication strategy development using the aerospace systems engineering process

    NASA Technical Reports Server (NTRS)

    Dawson, S.; Sklar, M.

    2004-01-01

    This paper explains the goals and challenges of NASA's risk communication efforts and how the Aerospace Systems Engineering Process (ASEP) was used to map the risk communication strategy used at the Jet Propulsion Laboratory to achieve these goals.

  11. Rocket Engine Nozzle Side Load Transient Analysis Methodology: A Practical Approach

    NASA Technical Reports Server (NTRS)

    Shi, John J.

    2005-01-01

    At sea level, a phenomenon common to all rocket engines, especially those with highly over-expanded nozzles, is flow separation during ignition and shutdown as the plume fills and empties the nozzle. Because the flow separates randomly, it generates side loads, i.e., non-axial forces. Since rocket engines are designed to produce axial thrust to power the vehicle, it is undesirable for them to be excited by non-axial forcing functions; in the past, several engine failures were attributed to side loads. During the development stage, in order to design and size the rocket engine components and to reduce risk, the local dynamic environments as well as the dynamic interface loads have to be defined. The methodology developed here provides a way to determine the peak loads and shock environments for new engine components. Previously it was not feasible to predict the shock environments, e.g. shock response spectra, from one engine to another, because they do not scale. This methodology resolves that problem, and the shock environments can now be defined in the early stage of new engine development. Additional information is included in the original extended abstract.
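
    For context, a shock response spectrum of the kind mentioned above can be computed by running a base-excitation time history through a bank of single-degree-of-freedom oscillators and recording the peak absolute acceleration at each natural frequency. The sketch below (half-sine pulse, 5% damping, all values assumed) illustrates that generic computation only; it is not the side-load methodology of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def shock_response_spectrum(t, accel, freqs, damping=0.05):
            """Maximax absolute-acceleration SRS of a base acceleration time history."""
            srs = []
            for fn in freqs:
                wn = 2.0 * np.pi * fn
                def rhs(ti, y):
                    a_base = np.interp(ti, t, accel)
                    z, zdot = y                      # relative displacement and velocity
                    zddot = -a_base - 2.0 * damping * wn * zdot - wn ** 2 * z
                    return [zdot, zddot]
                sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=t[1] - t[0])
                z, zdot = sol.y
                abs_accel = -(2.0 * damping * wn * zdot + wn ** 2 * z)   # = zddot + a_base
                srs.append(np.max(np.abs(abs_accel)))
            return np.array(srs)

        # Example input: 50 g, 10 ms half-sine pulse
        t = np.linspace(0.0, 0.1, 20001)
        pulse = 50.0 * 9.81 * np.sin(np.pi * t / 0.01) * (t <= 0.01)
        freqs = np.logspace(1, 3, 20)            # 10 Hz to 1 kHz
        print(shock_response_spectrum(t, pulse, freqs))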

  12. A theoretical treatment of technical risk in modern propulsion system design

    NASA Astrophysics Data System (ADS)

    Roth, Bryce Alexander

    2000-09-01

    A prevalent trend in modern aerospace systems is increasing complexity and cost, which in turn drives increased risk. Consequently, there is a clear and present need for the development of formalized methods to analyze the impact of risk on the design of aerospace vehicles. The objective of this work is to develop such a method that enables analysis of risk via a consistent, comprehensive treatment of aerothermodynamic and mass properties aspects of vehicle design. The key elements enabling the creation of this methodology are recent developments in the analytical estimation of work potential based on the second law of thermodynamics. This dissertation develops the theoretical foundation of a vehicle analysis method based on work potential and validates it using the Northrop F-5E with GE J85-GE-21 engines as a case study. Although the method is broadly applicable, emphasis is given to aircraft propulsion applications. Three work potential figures of merit are applied using this method: exergy, available energy, and thrust work potential. It is shown that each possesses unique properties making them useful for specific vehicle analysis tasks, though the latter two are actually special cases of exergy. All three are demonstrated on the analysis of the J85-GE-21 propulsion system, resulting in a comprehensive description of propulsion system thermodynamic loss. This "loss management" method is used to analyze aerodynamic drag loss of the F-5E and is then used in conjunction with the propulsive loss model to analyze the usage of fuel work potential throughout the F-5E design mission. The results clearly show how and where work potential is used during flight and yield considerable insight as to where the greatest opportunity for design improvement is. Next, usage of work potential is translated into fuel weight so that the aerothermodynamic performance of the F-5E can be expressed entirely in terms of vehicle gross weight. This technique is then applied as a means to quantify the impact of engine cycle technologies on the F-5E airframe. Finally, loss management methods are used in conjunction with probabilistic analysis methods to quantify the impact of risk on F-5E aerothermodynamic performance.
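
    As a point of reference for the work-potential figures of merit discussed above, the standard steady-flow specific exergy (flow availability) used in second-law analyses can be written as

        \psi = (h - h_0) - T_0 (s - s_0) + \frac{V^2}{2} + g z

    where h and s are specific enthalpy and entropy, the subscript 0 denotes the dead (ambient reference) state, V is velocity, and z is elevation; the notation here is the usual textbook one and is an assumption, not taken from the dissertation. The abstract notes that available energy and thrust work potential are special cases of this quantity.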

  13. Use of evidential reasoning and AHP to assess regional industrial safety

    PubMed Central

    Chen, Zhichao; Chen, Tao; Qu, Zhuohua; Ji, Xuewei; Zhou, Yi; Zhang, Hui

    2018-01-01

    China’s fast economic growth contributes to the rapid development of its urbanization process, and also gives rise to a series of industrial accidents, which often cause loss of life, damage to property and environment, thus requiring the associated risk analysis and safety control measures to be implemented in advance. However, incompleteness of historical failure data before the occurrence of accidents makes it difficult to use traditional risk analysis approaches such as probabilistic risk analysis in many cases. This paper aims to develop a new methodology capable of assessing regional industrial safety (RIS) in an uncertain environment. A hierarchical structure for modelling the risks influencing RIS is first constructed. The hybrid of evidential reasoning (ER) and Analytical Hierarchy Process (AHP) is then used to assess the risks in a complementary way, in which AHP is used to evaluate the weight of each risk factor and ER is employed to synthesise the safety evaluations of the investigated region(s) against the risk factors from the bottom to the top level in the hierarchy. The successful application of the hybrid approach in a real case analysis of RIS in several major districts of Beijing (capital of China) demonstrates its feasibility as well as provides risk analysts and safety engineers with useful insights on effective solutions to comprehensive risk assessment of RIS in metropolitan cities. The contribution of this paper lies in the findings on the comparison of risk levels of RIS at different regions against various risk factors so that best practices from the good performer(s) can be used to improve the safety of the others. PMID:29795593

  14. Use of evidential reasoning and AHP to assess regional industrial safety.

    PubMed

    Chen, Zhichao; Chen, Tao; Qu, Zhuohua; Yang, Zaili; Ji, Xuewei; Zhou, Yi; Zhang, Hui

    2018-01-01

    China's fast economic growth contributes to the rapid development of its urbanization process, and also gives rise to a series of industrial accidents, which often cause loss of life, damage to property and environment, thus requiring the associated risk analysis and safety control measures to be implemented in advance. However, incompleteness of historical failure data before the occurrence of accidents makes it difficult to use traditional risk analysis approaches such as probabilistic risk analysis in many cases. This paper aims to develop a new methodology capable of assessing regional industrial safety (RIS) in an uncertain environment. A hierarchical structure for modelling the risks influencing RIS is first constructed. The hybrid of evidential reasoning (ER) and Analytical Hierarchy Process (AHP) is then used to assess the risks in a complementary way, in which AHP is used to evaluate the weight of each risk factor and ER is employed to synthesise the safety evaluations of the investigated region(s) against the risk factors from the bottom to the top level in the hierarchy. The successful application of the hybrid approach in a real case analysis of RIS in several major districts of Beijing (capital of China) demonstrates its feasibility as well as provides risk analysts and safety engineers with useful insights on effective solutions to comprehensive risk assessment of RIS in metropolitan cities. The contribution of this paper lies in the findings on the comparison of risk levels of RIS at different regions against various risk factors so that best practices from the good performer(s) can be used to improve the safety of the others.
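
    The AHP weighting step of this hybrid approach can be sketched generically as the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio as a sanity check; the comparison values below are hypothetical and the code is not the authors' implementation.

        import numpy as np

        def ahp_weights(pairwise):
            """Principal-eigenvector weights and consistency ratio for an AHP pairwise matrix."""
            A = np.asarray(pairwise, dtype=float)
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (eigvals[k].real - n) / (n - 1)                             # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.45)  # Saaty random index
            return w, ci / ri

        # Hypothetical pairwise comparisons of three risk factors
        A = [[1.0, 3.0, 5.0],
             [1.0 / 3.0, 1.0, 2.0],
             [1.0 / 5.0, 1.0 / 2.0, 1.0]]
        weights, cr = ahp_weights(A)
        print(weights, cr)        # a CR below 0.1 is conventionally considered acceptable

    The ER step would then combine the belief-based safety evaluations against the risk factors using these weights to produce the overall assessment for each region.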

  15. Assuring quality in high-consequence engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoover, Marcey L.; Kolb, Rachel R.

    2014-03-01

    In high-consequence engineering organizations, such as Sandia, quality assurance may be heavily dependent on staff competency. Competency-dependent quality assurance models are at risk when the environment changes, as it has with increasing attrition rates, budget and schedule cuts, and competing program priorities. Risks in Sandia's competency-dependent culture can be mitigated through changes to hiring, training, and customer engagement approaches to manage people, partners, and products. Sandia's technical quality engineering organization has been able to mitigate corporate-level risks by driving changes that benefit all departments, and in doing so has assured Sandia's commitment to excellence in high-consequence engineering and national service.

  16. A Risk Management Architecture for Emergency Integrated Aircraft Control

    NASA Technical Reports Server (NTRS)

    McGlynn, Gregory E.; Litt, Jonathan S.; Lemon, Kimberly A.; Csank, Jeffrey T.

    2011-01-01

    Enhanced engine operation--operation that is beyond normal limits--has the potential to improve the adaptability and safety of aircraft in emergency situations. Intelligent use of enhanced engine operation to improve the handling qualities of the aircraft requires sophisticated risk estimation techniques and a risk management system that spans the flight and propulsion controllers. In this paper, an architecture that weighs the risks of the emergency and of possible engine performance enhancements to reduce overall risk to the aircraft is described. Two examples of emergency situations are presented to demonstrate the interaction between the flight and propulsion controllers to facilitate the enhanced operation.

  17. Incident-response monitoring technologies for aircraft cabin air quality

    NASA Astrophysics Data System (ADS)

    Magoha, Paul W.

    Poor air quality in commercial aircraft cabins can be caused by volatile organophosphorus (OP) compounds emitted from the jet engine bleed air system during smoke/fume incidents. Tri-cresyl phosphate (TCP), a common anti-wear additive in turbine engine oils, is an important component in today's global aircraft operations. However, exposure to TCP increases risks of certain adverse health effects. This research analyzed used aircraft cabin air filters for jet engine oil contaminants and designed a jet engine bleed air simulator (BAS) to replicate smoke/fume incidents caused by pyrolysis of jet engine oil. Field emission scanning electron microscopy (FESEM) with X-ray energy dispersive spectroscopy (EDS) and neutron activation analysis (NAA) were used for elemental analysis of filters, and gas chromatography interfaced with mass spectrometry (GC/MS) was used to analyze used filters to determine TCP isomers. The filter analysis study involved 110 used and 90 incident filters. Clean air filter samples exposed to different bleed air conditions simulating cabin air contamination incidents were also analyzed by FESEM/EDS, NAA, and GC/MS. Experiments were conducted on a BAS at various bleed air conditions typical of an operating jet engine so that the effects of temperature and pressure variations on jet engine oil aerosol formation could be determined. The GC/MS analysis of both used and incident filters characterized tri-m-cresyl phosphate (TmCP) and tri-p-cresyl phosphate (TpCP) by a base peak at m/z = 368, with corresponding retention times of 21.9 and 23.4 minutes. The hydrocarbons in jet oil were characterized in the filters by a base peak pattern at m/z = 85 and 113. Using retention times and the hydrocarbon thermal conductivity peak (TCP) pattern obtained from jet engine oil standards, five out of 110 used filters tested had oil markers. Meanwhile, 22 out of 77 incident filters tested positive for oil fingerprints. Probit analysis of jet engine oil aerosols obtained from BAS tests by optical particle counter (OPC) revealed lognormal distributions with the mean (range) of geometric mass mean diameter (GMMD) = 0.41 (0.39, 0.45) µm and geometric standard deviation (GSD), σg = 1.92 (1.87, 1.98). FESEM/EDS and NAA techniques found a wide range of elements on filters, and further investigations of used filters are recommended using these techniques. The protocols for air and filter sampling and GC/MS analysis used in this study will increase the options available for detecting jet engine oil on cabin air filters. Such criteria could support policy development for compliance with cabin air quality standards during incidents.

  18. Examining Cybersecurity of Cyberphysical Systems for Critical Infrastructures Through Work Domain Analysis.

    PubMed

    Wang, Hao; Lau, Nathan; Gerdes, Ryan M

    2018-04-01

    The aim of this study was to apply work domain analysis for cybersecurity assessment and design of supervisory control and data acquisition (SCADA) systems. Adoption of information and communication technology in cyberphysical systems (CPSs) for critical infrastructures enables automated and distributed control but introduces cybersecurity risk. Many CPSs employ SCADA industrial control systems that have become the target of cyberattacks, which inflict physical damage without use of force. Given that absolute security is not feasible for complex systems, cyberintrusions that introduce unanticipated events will occur; a proper response will in turn require human adaptive ability. Therefore, analysis techniques that can support security assessment and human factors engineering are invaluable for defending CPSs. We conducted work domain analysis using the abstraction hierarchy (AH) to model a generic SCADA implementation to identify the functional structures and means-ends relations. We then adopted a case study approach examining the Stuxnet cyberattack by developing and integrating AHs for the uranium enrichment process, SCADA implementation, and malware to investigate the interactions between the three aspects of cybersecurity in CPSs. The AHs for modeling a generic SCADA implementation and studying the Stuxnet cyberattack are useful for mapping attack vectors, identifying deficiencies in security processes and features, and evaluating proposed security solutions with respect to system objectives. Work domain analysis is an effective analytical method for studying cybersecurity of CPSs for critical infrastructures in a psychologically relevant manner. Work domain analysis should be applied to assess cybersecurity risk and inform engineering and user interface design.

  19. Implementation of Enhanced Propulsion Control Modes for Emergency Flight Operation

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Chin, Jeffrey C.; May, Ryan D.; Litt, Jonathan S.; Guo, Ten-Huei

    2011-01-01

    Aircraft engines can be effective actuators to help pilots avert or recover from emergency situations. Emergency control modes are being developed to enhance the engine's performance to increase the probability of recovery under these circumstances. This paper discusses a proposed implementation of an architecture that requests emergency propulsion control modes, allowing the engines to deliver additional performance in emergency situations while still ensuring a specified safety level. In order to determine the appropriate level of engine performance enhancement, information regarding the current emergency scenario (including severity) and current engine health must be known. This enables the engine to operate beyond its nominal range while minimizing overall risk to the aircraft. In this architecture, the flight controller is responsible for determining the severity of the event and the level of engine risk that is acceptable, while the engine controller is responsible for delivering the desired performance within the specified risk range. A control mode selector specifies an appropriate situation-specific enhanced mode, which the engine controller then implements. The enhanced control modes described in this paper provide additional engine thrust or response capabilities through the modification of gains, limits, and the control algorithm, but increase the risk of engine failure. The modifications made to the engine controller to enable the use of the enhanced control modes are described, as are the interactions between the various subsystems and, importantly, the interaction between the flight controller/pilot and the propulsion control system. Simulation results demonstrate how the system responds to requests for enhanced operation and the corresponding increase in performance.
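
    A toy sketch of the mode-selection idea is given below: the flight controller supplies an emergency severity and an acceptable engine-risk budget, and the selector picks the most capable enhanced mode whose health-adjusted risk fits that budget. The mode names, risk numbers, and selection rule are illustrative assumptions, not the NASA architecture described in the paper.

        from dataclasses import dataclass

        @dataclass
        class EnhancedMode:
            name: str
            thrust_gain: float     # fraction above nominal maximum thrust
            engine_risk: float     # assumed probability of engine damage per use

        # Hypothetical candidate modes
        MODES = [
            EnhancedMode("nominal", 0.00, 0.000),
            EnhancedMode("overthrust_low", 0.05, 0.002),
            EnhancedMode("overthrust_high", 0.10, 0.010),
        ]

        def select_mode(emergency_severity, acceptable_engine_risk, engine_health):
            """Pick the most capable mode whose health-adjusted risk stays within the budget
            set by the flight controller; fall back to nominal for low-severity events."""
            feasible = [m for m in MODES
                        if m.engine_risk / max(engine_health, 1e-3) <= acceptable_engine_risk]
            feasible.sort(key=lambda m: m.thrust_gain)
            return feasible[-1] if emergency_severity > 0.5 else feasible[0]

        mode = select_mode(emergency_severity=0.8, acceptable_engine_risk=0.005, engine_health=0.9)
        print(mode.name)     # -> overthrust_low for these illustrative numbers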

  20. Purity and the dangers of regenerative medicine: regulatory innovation of human tissue-engineered technology.

    PubMed

    Faulkner, Alex; Kent, Julie; Geesink, Ingrid; FitzPatrick, David

    2006-11-01

    This paper examines the development of innovation in human tissue technologies as a form of regenerative medicine, firstly by applying 'pollution ideas' to contemporary trends in its risk regulation and to the processes of regulatory policy formation, and secondly by analysing the classificatory processes deployed in regulatory policy. The analysis draws upon data from fieldwork and documentary materials with a focus on the UK and EU (2002-05) and explores four arenas: governance and regulatory policy; commercialisation and the market; 'evidentiality' manifest in evidence-based policy; and publics' and technology users' values and ethics. The analysis suggests that there is a trend toward 'purification' across these arenas, both material and socio-political. A common process of partitioning is found in stakeholders' attempts to define a clear terrain, which the field of tissue-engineered technology might occupy. We conclude that pollution ideas and partitioning processes are useful in understanding regulatory ordering and innovation in the emerging technological zone of human tissue engineering.

  1. The 25 kWe solar thermal Stirling hydraulic engine system: Conceptual design

    NASA Technical Reports Server (NTRS)

    White, Maurice; Emigh, Grant; Noble, Jack; Riggle, Peter; Sorenson, Torvald

    1988-01-01

    The conceptual design and analysis of a solar thermal free-piston Stirling hydraulic engine system designed to deliver 25 kWe when coupled to an 11-meter test bed concentrator is documented. A manufacturing cost assessment for 10,000 units per year was made. The design meets all program objectives including a 60,000 hr design life, dynamic balancing, fully automated control, more than 33.3 percent overall system efficiency, properly conditioned power, maximum utilization of annualized insolation, and projected production costs. The system incorporates a simple, rugged, reliable pool boiler reflux heat pipe to transfer heat from the solar receiver to the Stirling engine. The free-piston engine produces high pressure hydraulic flow which powers a commercial hydraulic motor that, in turn, drives a commercial rotary induction generator. The Stirling hydraulic engine uses hermetic bellows seals to separate helium working gas from hydraulic fluid which provides hydrodynamic lubrication to all moving parts. Maximum utilization of highly refined, field proven commercial components for electric power generation minimizes development cost and risk.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.

    This presentation will examine process systems engineering R&D needs for application to advanced fossil energy (FE) systems and highlight ongoing research activities at the National Energy Technology Laboratory (NETL) under the auspices of a recently launched Collaboratory for Process & Dynamic Systems Research. The three current technology focus areas include: 1) High-fidelity systems with NETL's award-winning Advanced Process Engineering Co-Simulator (APECS) technology for integrating process simulation with computational fluid dynamics (CFD) and virtual engineering concepts, 2) Dynamic systems with R&D on plant-wide IGCC dynamic simulation, control, and real-time training applications, and 3) Systems optimization including large-scale process optimization, stochastic simulation for risk/uncertainty analysis, and cost estimation. Continued R&D aimed at these and other key process systems engineering models, methods, and tools will accelerate the development of advanced gasification-based FE systems and produce increasingly valuable outcomes for DOE and the Nation.

  3. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically, the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
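
    The layered probabilistic cost-driver structure can be illustrated with a simple Monte Carlo roll-up of triangular cost-driver distributions into a total-cost risk curve; the drivers and dollar values below are hypothetical and the sketch is not the NASA GRC/Boeing CP model itself.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical lower-level cost drivers as triangular (low, mode, high) estimates, $M
        drivers = {
            "combustor_fabrication": (4.0, 5.5, 9.0),
            "inlet_structure": (2.5, 3.0, 5.0),
            "test_campaign": (6.0, 8.0, 14.0),
        }

        n = 100_000
        total = sum(rng.triangular(lo, mode, hi, n) for lo, mode, hi in drivers.values())
        p50, p80 = np.percentile(total, [50, 80])
        print(f"median cost ${p50:.1f}M, 80th-percentile cost-risk ${p80:.1f}M")

    Narrowing the spread of each driver as the design matures tightens the resulting cost-risk distribution, which is the progressive refinement the abstract describes.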

  4. Life-Cycle Cost Analysis for Small Unmanned Aircraft Systems Deployed Aboard Coast Guard Cutters

    DTIC Science & Technology

    2013-12-09

    safest way to survey the NBC disaster site. "Swarms could scan high-risk buildings and sites (think Fukushima ... in 1997 with a degree in mechanical engineering. After graduation, he served onboard USCGC Thetis (WMEC-910) as the student engineer and damage...the surrounding area with his family. After graduation in December 2013, he will report to the In-Service Vessel Sustainment Project as assistant

  5. Hyperbolic Rendezvous at Mars: Risk Assessments and Mitigation Strategies

    NASA Technical Reports Server (NTRS)

    Jedrey, Ricky; Landau, Damon; Whitley, Ryan

    2015-01-01

    Given the current interest in the use of flyby trajectories for human Mars exploration, a key requirement is the capability to execute hyperbolic rendezvous. Hyperbolic rendezvous is used to transport crew from a Mars centered orbit, to a transiting Earth bound habitat that does a flyby. Representative cases are taken from future potential missions of this type, and a thorough sensitivity analysis of the hyperbolic rendezvous phase is performed. This includes early engine cutoff, missed burn times, and burn misalignment. A finite burn engine model is applied that assumes the hyperbolic rendezvous phase is done with at least two burns.
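
    A first-order feel for the early-engine-cutoff sensitivity can be obtained from the ideal rocket equation: cutting the burn short leaves a delta-v shortfall that the rendezvous must then absorb. All masses, flow rates, and durations below are assumed for illustration and are not taken from the study.

        import numpy as np

        g0 = 9.80665            # m/s^2
        isp = 360.0             # s, assumed engine specific impulse
        m0 = 40_000.0           # kg, assumed stack mass at burn start
        mdot = 30.0             # kg/s, assumed propellant mass flow rate

        def delta_v(burn_time):
            """Ideal delta-v delivered after burning for burn_time seconds (Tsiolkovsky)."""
            mf = m0 - mdot * burn_time
            return isp * g0 * np.log(m0 / mf)

        nominal_burn = 300.0                      # s, assumed nominal burn duration
        for cutoff in (300.0, 295.0, 290.0):      # nominal and two early-cutoff cases
            shortfall = delta_v(nominal_burn) - delta_v(cutoff)
            print(f"cutoff at {cutoff:5.1f} s -> delta-v shortfall {shortfall:6.1f} m/s")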

  6. Managing Analysis Models in the Design Process

    NASA Technical Reports Server (NTRS)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  7. Risk watershed analysis: a new approach to manage torrent control structures

    NASA Astrophysics Data System (ADS)

    Quefféléan, Yann; Carladous, Simon; Deymier, Christian; Marco, Olivier

    2017-04-01

    Torrential check dams have been built in French public forests since the 19th century, applying the Restoration and conservation of Mountainous Areas (RTM) laws (1860, 1864, 1882). The RTM department of the National Forestry Office (ONF) helps the government to decide on protective actions to implement within these areas. While more than 100 000 structures were registered in 1964, more than 14 000 check dams are currently registered and maintained within approximately 380 000 ha of RTM public forests. The RTM department officers thus have long experience in using check dams for soil restoration, but also in implementing other kinds of torrential protective structures such as sediment traps, embankments, bank protection, and so forth. As part of the ONF, they are also experienced in forestry engineering. Nevertheless, some limits in torrent control management have been highlighted: as existing protective structures are ageing, their effectiveness in protecting elements at risk must be assessed, which is a difficult task; and as the available budget for maintenance is continuously decreasing, priorities have to be set, but decisions are difficult: what are the existing check dams' functions? what is their expected effect on torrential hazard? is the maintenance cost too high given this expected effect in protecting elements at risk? Given these questions, a new policy has been pursued by the RTM department since 2012. A technical overview at the torrential watershed scale is now needed to support better maintenance decisions: it has been called a Risk Watershed Analysis (Etude de Bassin de Risque in French, EBR) and is funded by the government. Its objectives are to: recall the initial objectives of the protective structures (for which a detailed archive analysis is made); describe the current elements at risk to protect; describe natural hazards at the torrential watershed scale and their evolution since the protective structures were implemented; describe the civil engineering and forestry works that have been implemented within the watershed, including their cost; and decide on the current protective works to implement (maintenance and new investment). For each EBR, a multidisciplinary team is involved, with specialists in geomorphology, hydrology, hydraulics, geology, civil engineering and forestry. Approximately 1 100 EBRs should be implemented at the national scale, including other natural phenomena such as snow avalanches and rock falls. Since 2012, approximately 10 % have been realized in areas with the most significant elements at risk. From a practical point of view, these studies have supported a better understanding of torrential watershed conditions and of the expected effect of torrent control over the years. An analysis of these studies will be performed soon to provide a first overview of torrent control effects. We claim that these EBRs could be a significant source of information to support a comprehensive evaluation of the long-term effectiveness of torrent control.

  8. PREFACE: International Conference on Applied Sciences 2015 (ICAS2015)

    NASA Astrophysics Data System (ADS)

    Lemle, Ludovic Dan; Jiang, Yiwen

    2016-02-01

    The International Conference on Applied Sciences ICAS2015 took place in Wuhan, China on June 3-5, 2015 at the Military Economics Academy of Wuhan. The conference is regularly organized, alternatively in Romania and in P.R. China, by Politehnica University of Timişoara, Romania, and Military Economics Academy of Wuhan, P.R. China, with the joint aims to serve as a platform for exchange of information between various areas of applied sciences, and to promote the communication between the scientists of different nations, countries and continents. The topics of the conference cover a comprehensive spectrum of issues from: >Economical Sciences and Defense: Management Sciences, Business Management, Financial Management, Logistics, Human Resources, Crisis Management, Risk Management, Quality Control, Analysis and Prediction, Government Expenditure, Computational Methods in Economics, Military Sciences, National Security, and others... >Fundamental Sciences and Engineering: Interdisciplinary applications of physics, Numerical approximation and analysis, Computational Methods in Engineering, Metallic Materials, Composite Materials, Metal Alloys, Metallurgy, Heat Transfer, Mechanical Engineering, Mechatronics, Reliability, Electrical Engineering, Circuits and Systems, Signal Processing, Software Engineering, Data Bases, Modeling and Simulation, and others... The conference gathered qualified researchers whose expertise can be used to develop new engineering knowledge that has applicability potential in Engineering, Economics, Defense, etc. The number of participants was 120 from 11 countries (China, Romania, Taiwan, Korea, Denmark, France, Italy, Spain, USA, Jamaica, and Bosnia and Herzegovina). During the three days of the conference four invited and 67 oral talks were delivered. Based on the work presented at the conference, 38 selected papers have been included in this volume of IOP Conference Series: Materials Science and Engineering. These papers present new research in the various fields of Materials Engineering, Mechanical Engineering, Computers Engineering, and Electrical Engineering. It's our great pleasure to present this volume of IOP Conference Series: Materials Science and Engineering to the scientific community to promote further research in these areas. We sincerely hope that the papers published in this volume will contribute to the advancement of knowledge in the respective fields.

  9. Recommendations for the design of laboratory studies on non-target arthropods for risk assessment of genetically engineered plants

    PubMed Central

    Hellmich, Richard L.; Candolfi, Marco P.; Carstens, Keri; De Schrijver, Adinda; Gatehouse, Angharad M. R.; Herman, Rod A.; Huesing, Joseph E.; McLean, Morven A.; Raybould, Alan; Shelton, Anthony M.; Waggoner, Annabel

    2010-01-01

    This paper provides recommendations on experimental design for early-tier laboratory studies used in risk assessments to evaluate potential adverse impacts of arthropod-resistant genetically engineered (GE) plants on non-target arthropods (NTAs). While we rely heavily on the currently used proteins from Bacillus thuringiensis (Bt) in this discussion, the concepts apply to other arthropod-active proteins. A risk may exist if the newly acquired trait of the GE plant has adverse effects on NTAs when they are exposed to the arthropod-active protein. Typically, the risk assessment follows a tiered approach that starts with laboratory studies under worst-case exposure conditions; such studies have a high ability to detect adverse effects on non-target species. Clear guidance on how such data are produced in laboratory studies assists the product developers and risk assessors. The studies should be reproducible and test clearly defined risk hypotheses. These properties contribute to the robustness of, and confidence in, environmental risk assessments for GE plants. Data from NTA studies, collected during the analysis phase of an environmental risk assessment, are critical to the outcome of the assessment and ultimately the decision taken by regulatory authorities on the release of a GE plant. Confidence in the results of early-tier laboratory studies is a precondition for the acceptance of data across regulatory jurisdictions and should encourage agencies to share useful information and thus avoid redundant testing. PMID:20938806

  10. Can a Boxer Engine Reduce Leg Injuries Among Motorcyclists? Analysis of Injury Distributions in Crashes Involving Different Motorcycles Fitted with Antilock Brakes (ABS).

    PubMed

    Rizzi, Matteo

    2015-01-01

    Several studies have shown that motorcycle antilock braking systems (ABS) reduce crashes and injuries. However, it has been suggested that the improved stability provided by ABS would make upright crashes more frequent, thus changing the injury distributions among motorcyclists and increasing the risk of leg injuries. The overall motorcycle design can vary across different categories and manufacturers. For instance, some motorcycles are equipped with boxer-twin engines; that is, with protruding cylinder heads. A previous study based on limited material has suggested that these could provide some leg protection; therefore, the aim of this research was to analyze injury distributions in crashes involving ABS-equipped motorcycles with boxer-twin engines compared to similar ABS-equipped motorcycles with other engine configurations. Swedish hospital and police records from 2003-2014 were used. Crashes involving ABS-equipped motorcycles with boxer-twin engines (n = 55) were compared with similar ABS-equipped motorcycles with other engine configurations (n = 127). The distributions of Abbreviated Injury Scale (AIS) 1+ and AIS 2+ were compared. Each subject's injury scores were also converted to the risk for permanent medical impairment (RPMI), which shows the risk of different levels of permanent medical impairment given the severity and location of injuries. To compare injury severity, the mean RPMI 1+ and RPMI 10+ were analyzed for each body region and overall for each group of motorcyclists. It was found that AIS 1+, AIS 2+, and PMI 1+ leg injuries were reduced by approximately 50% among riders with boxer engines. These results were statistically significant. The number of injuries to the upper body did not increase; the mean RPMI to the head and upper body were similar across the 2 groups, suggesting that the severity of injuries did not increase either. Indications were found suggesting that the overall mean RPMI 1+ was lower among riders with boxer engines, although this result was not statistically significant. The mean values of the overall RPMI 10+ were similar. Boxer-twin engines were not originally developed to improve motorcycle crashworthiness. However, the present article indicates that these engines can reduce leg injuries among riders of motorcycles fitted with ABS. Though it is recommended that future research should look deeper into this particular aspect, the present findings suggest that the concept of integrated leg protection is indeed feasible and that further engineering efforts in this area are likely to yield significant savings in health losses among motorcyclists.

  11. Earth Sciences Data and Information System (ESDIS) program planning and evaluation methodology development

    NASA Technical Reports Server (NTRS)

    Dickinson, William B.

    1995-01-01

    An Earth Sciences Data and Information System (ESDIS) Project Management Plan (PMP) is prepared. An ESDIS Project Systems Engineering Management Plan (SEMP) consistent with the developed PMP is also prepared. ESDIS and related EOS program requirements developments, management and analysis processes are evaluated. Opportunities to improve the effectiveness of these processes and program/project responsiveness to requirements are identified. Overall ESDIS cost estimation processes are evaluated, and recommendations to improve cost estimating and modeling techniques are developed. ESDIS schedules and scheduling tools are evaluated. Risk assessment, risk mitigation strategies and approaches, and use of risk information in management decision-making are addressed.

  12. Smokeless tobacco use among operating engineers.

    PubMed

    Noonan, Devon; Duffy, Sonia A

    2012-05-01

    Workers in blue collar occupations have been shown to have higher rates of smokeless tobacco (ST) use compared to other occupational groups. Guided by the Health Promotion Model, the purpose of this study was to understand various factors that predict ST use in Operating Engineers. A cross-sectional design was used to determine variables related to ST use among Operating Engineers. Engineers (N = 498) were recruited during their 3-day apprentice certification course to participate in the study. Logistic regression was used to assess the associations between personal, psychological and behavioral characteristics associated with ST use. Past month ST use was reported among 13% of operating engineers surveyed. Multivariate analysis showed that younger age and lower rates of past month cigarette use were significantly associated with ST use, while higher rates of problem drinking were marginally associated with ST use. Operating Engineers are at high risk for using ST products with rates in this sample well over the national average. Work site interventions, which have shown promise in other studies, may be useful in decreasing ST use among this population.

  13. Orbit transfer vehicle engine study, phase A extension. Volume 2A: Study results

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Engine trade studies and systems analyses leading to a baseline engine selection for advanced expander cycle engine are discussed with emphasis on: (1) performance optimization of advanced expander cycle engines in the 10 to 20K pound thrust range; (2) selection of a recommended advanced expander engine configuration based on maximized performance and minimized mission risk, and definition of the components for this configuration; (3) characterization of the low thrust adaptation requirements and performance for the staged combustion engine; (4) generation of a suggested safety and reliability approach for OTV engines independent of engine cycle; (5) definition of program risk relationships between expander and staged combustion cycle engines; and (6) development of schedules and costs for the DDT&E, production, and operation phases of the 10K pound thrust expander engine program.

  14. Using Enterprise Architecture for Analysis of a Complex Adaptive Organization's Risk Inducing Characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salguero, Laura Marie; Huff, Johnathon; Matta, Anthony R.

    Sandia National Laboratories is an organization with a wide range of research and development activities that include nuclear, explosives, and chemical hazards. In addition, Sandia has over 2000 labs and over 40 major test facilities, such as the Thermal Test Complex, the Lightning Test Facility, and the Rocket Sled Track. In order to support safe operations, Sandia has a diverse Environment, Safety, and Health (ES&H) organization that provides expertise to support engineers and scientists in performing work safely. With such a diverse organization to support, the ES&H program continuously seeks opportunities to improve the services provided for Sandia by using various methods as part of their risk management strategy. One of the methods being investigated is using enterprise architecture analysis to mitigate risk inducing characteristics such as normalization of deviance, organizational drift, and problems in information flow. This paper is a case study for how a Department of Defense Architecture Framework (DoDAF) model of the ES&H enterprise, including information technology applications, can be analyzed to understand the level of risk associated with the risk inducing characteristics discussed above. While the analysis is not complete, we provide proposed analysis methods that will be used for future research as the project progresses.

  15. Derailment-based Fault Tree Analysis on Risk Management of Railway Turnout Systems

    NASA Astrophysics Data System (ADS)

    Dindar, Serdar; Kaewunruen, Sakdirat; An, Min; Gigante-Barrera, Ángel

    2017-10-01

    Railway turnouts are fundamental mechanical infrastructures that allow rolling stock to divert from one direction to another. Because a turnout comprises a large number of engineering sub-systems, e.g. track, signalling and earthworks, these sub-systems are expected to carry a high hazard potential through various kinds of failure mechanisms, which could be the cause of a catastrophic event. A derailment, one of the undesirable events in railway operation, although rare, often results in damage to rolling stock and railway infrastructure and in disrupted service, and has the potential to cause casualties and even loss of life. It is therefore important that a well-designed risk analysis is performed to create awareness of hazards and to identify which parts of the system may be at risk. This study focuses on all types of environment-based failures arising from the numerous contributing factors officially noted in accident reports. The risk analysis is designed to help industry minimise the occurrence of accidents at railway turnouts. The methodology relies on accurate assessment of derailment likelihood and is based on a statistical, multiple-factor integrated accident rate analysis. The study establishes the product risks and faults and shows the impact of the potential failure processes by means of Boolean algebra.
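
    For independent basic events, the Boolean-algebra evaluation of such a fault tree reduces to multiplying probabilities through AND gates and complement-products through OR gates; the miniature tree below, with hypothetical events and probabilities, sketches the idea.

        # Minimal fault-tree evaluation assuming independent basic events
        def or_gate(*p):
            """P(at least one of the inputs occurs)."""
            q = 1.0
            for pi in p:
                q *= (1.0 - pi)
            return 1.0 - q

        def and_gate(*p):
            """P(all inputs occur)."""
            q = 1.0
            for pi in p:
                q *= pi
            return q

        # Hypothetical environment-related contributors to derailment at a turnout
        ice_on_switch_blade = 0.02
        point_heater_failure = 0.05
        debris_obstruction = 0.01
        detection_failure = 0.10

        blocked_switch = or_gate(and_gate(ice_on_switch_blade, point_heater_failure), debris_obstruction)
        derailment_top_event = and_gate(blocked_switch, detection_failure)
        print(f"P(top event) = {derailment_top_event:.5f}")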

  16. WEC Design Response Toolbox v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Ryan; Michelen, Carlos; Eckert-Gallup, Aubrey

    2016-03-30

    The WEC Design Response Toolbox (WDRT) is a numerical toolbox for design-response analysis of wave energy converters (WECs). The WDRT was developed during a series of efforts to better understand WEC survival design. The WDRT has been designed as a tool for researchers and developers, enabling the straightforward application of statistical and engineering methods. The toolbox includes methods for short-term extreme response, environmental characterization, long-term extreme response and risk analysis, fatigue, and design wave composition.
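
    As a generic illustration of the environmental-characterization and long-term extreme-response methods such a toolbox provides (and explicitly not a demonstration of the WDRT's own API), one can fit a generalized extreme value distribution to annual maxima and read off a return level; the wave-height data below are hypothetical.

        import numpy as np
        from scipy import stats

        # Hypothetical annual maxima of significant wave height (m) at a deployment site
        annual_max_hs = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.8, 5.5, 8.1, 6.1, 7.5,
                                  5.7, 6.6, 7.0, 5.3, 6.9, 7.8, 6.2, 5.8, 6.4, 7.1])

        shape, loc, scale = stats.genextreme.fit(annual_max_hs)
        return_period = 50.0                                  # years
        hs_50 = stats.genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
        print(f"estimated 50-year significant wave height: {hs_50:.1f} m")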

  17. Cardiovascular risk assessment in type 2 diabetes mellitus: comparison of the World Health Organization/International Society of Hypertension risk prediction charts versus UK Prospective Diabetes Study risk engine.

    PubMed

    Herath, Herath M Meththananda; Weerarathna, Thilak Priyantha; Umesha, Dilini

    2015-01-01

    Patients with type 2 diabetes mellitus (T2DM) are at higher risk of developing cardiovascular diseases, and assessment of their cardiac risk is important for preventive strategies. The Ministry of Health of Sri Lanka has recommended World Health Organization/International Society of Hypertension (WHO/ISH) charts for cardiac risk assessment in individuals with T2DM. However, the most suitable cardiac risk assessment tool for Sri Lankans with T2DM has not been studied. This study was designed to evaluate the performance of two cardiac risk assessment tools: the WHO/ISH charts and the UK Prospective Diabetes Study (UKPDS) risk engine. Cardiac risk assessments were done in 2,432 patients with T2DM attending a diabetes clinic in Southern Sri Lanka using the two risk assessment tools. The validity of the two assessment tools was further assessed by their ability to recognize individuals with raised low-density lipoprotein (LDL) and raised diastolic blood pressure in a cohort of newly diagnosed T2DM patients (n=332). WHO/ISH charts identified 78.4% of subjects as low cardiac risk whereas the UKPDS risk engine categorized 52.3% as low cardiac risk (P<0.001). In the risk categories of 10%-<20%, the UKPDS risk engine identified higher proportions of patients (28%) compared to WHO/ISH charts (7%). Approximately 6% of subjects were classified as low cardiac risk (<10%) by WHO/ISH whereas UKPDS recognized them as having a cardiac risk of >20%. Agreement between the two tools was poor (κ value = 0.144, P<0.01). Approximately 82% of individuals categorized as low cardiac risk by WHO/ISH had higher LDL cholesterol than the therapeutic target of 100 mg/dL. There is a significant discrepancy between the two assessment tools, with the WHO/ISH risk charts recognizing a higher proportion of patients as having low cardiac risk than the UKPDS risk engine. Risk assessment by both assessment tools demonstrated poor sensitivity in identifying those with treatable levels of LDL cholesterol and diastolic blood pressure.
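
    The agreement statistic quoted above (Cohen's kappa) can be reproduced for any pair of categorical risk classifications as sketched below; the category labels and assignments in the example are hypothetical.

        import numpy as np

        def cohens_kappa(labels_a, labels_b):
            """Cohen's kappa for agreement between two categorical classifications."""
            cats = sorted(set(labels_a) | set(labels_b))
            idx = {c: i for i, c in enumerate(cats)}
            n = len(labels_a)
            table = np.zeros((len(cats), len(cats)))
            for a, b in zip(labels_a, labels_b):
                table[idx[a], idx[b]] += 1
            p_observed = np.trace(table) / n
            p_expected = np.sum(table.sum(axis=1) * table.sum(axis=0)) / n ** 2
            return (p_observed - p_expected) / (1.0 - p_expected)

        # Hypothetical risk categories assigned by the two tools to the same patients
        who_ish = ["<10%", "<10%", "10-20%", "<10%", ">20%", "10-20%"]
        ukpds = ["<10%", "10-20%", "10-20%", ">20%", ">20%", "<10%"]
        print(cohens_kappa(who_ish, ukpds))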

  18. Impact of UKPDS risk estimation added to a first subjective risk estimation on management of coronary disease risk in type 2 diabetes - An observational study.

    PubMed

    Wind, Anne E; Gorter, Kees J; van den Donk, Maureen; Rutten, Guy E H M

    2016-02-01

    To investigate the impact of the UKPDS risk engine on management of CHD risk in T2DM patients. Observational study among 139 GPs. Data from 933 consecutive patients treated with a maximum of two oral glucose lowering drugs, collected at baseline and after twelve months. GPs estimated the CHD risk themselves and afterwards they calculated this with the UKPDS risk engine. Under- and overestimation were defined as a difference of >5 percentage points between the two calculations. The impact of the UKPDS risk engine was assessed by measuring differences in medication adjustments between the over-, under- and accurately estimated groups. In 42.0% the GP accurately estimated the CHD risk, in 32.4% the risk was underestimated and in 25.6% overestimated. Mean difference between the estimated (18.7%) and calculated (19.1%) 10-year CHD risk was -0.36% (95% CI -1.24 to 0.52). Male gender, current smoking and total cholesterol level were associated with underestimation. Patients with a subjectively underestimated CHD risk received significantly more medication adjustments. Their UKPDS 10-year CHD risk did not increase during the follow-up period, contrary to the other two groups of patients. The UKPDS risk engine may be of added value for risk management in T2DM.

  19. Failure Modes and Effects Analysis (FMEA): A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Failure modes and effects analysis (FMEA) is a bottom-up analytical process that identifies process hazards, which helps managers understand vulnerabilities of systems, as well as assess and mitigate risk. It is one of several engineering tools and techniques available to program and project managers aimed at increasing the likelihood of safe and successful NASA programs and missions. This bibliography references 465 documents in the NASA STI Database that contain the major concepts, failure modes or failure analysis, in either the basic index or the major subject terms.

  20. Framework for Comparative Risk Analysis of Dredged Material Disposal Options.

    DTIC Science & Technology

    1986-10-01

    [Garbled OCR front matter; recoverable information:] Report prepared for the Puget Sound Dredged Disposal Analysis, c/o U.S. Army Corps of Engineers, Seattle District, by Tetra Tech, Inc., October 1986 (Washington State). Listed exhibits include priority pollutants and a hypothetical example of total or bulk contaminant concentrations in four Puget Sound sediments.

  1. A Dynamic Model for the Evaluation of Aircraft Engine Icing Detection and Control-Based Mitigation Strategies

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.; Jones, Scott M.

    2017-01-01

    Aircraft flying in regions of high ice crystal concentrations are susceptible to the buildup of ice within the compression system of their gas turbine engines. This ice buildup can restrict engine airflow and cause an uncommanded loss of thrust, also known as engine rollback, which poses a potential safety hazard. The aviation community is conducting research to understand this phenomenon, and to identify avoidance and mitigation strategies to address the concern. To support this research, a dynamic turbofan engine model has been created to enable the development and evaluation of engine icing detection and control-based mitigation strategies. This model captures the dynamic engine response due to high ice water ingestion and the buildup of ice blockage in the engine's low pressure compressor. It includes a fuel control system allowing engine closed-loop control effects during engine icing events to be emulated. The model also includes bleed air valve and horsepower extraction actuators that, when modulated, change overall engine operating performance. This system-level model has been developed and compared against test data acquired from an aircraft turbofan engine undergoing engine icing studies in an altitude test facility and also against outputs from the manufacturer's customer deck. This paper will describe the model and show results of its dynamic response under open-loop and closed-loop control operating scenarios in the presence of ice blockage buildup compared against engine test cell data. Planned follow-on use of the model for the development and evaluation of icing detection and control-based mitigation strategies will also be discussed. The intent is to combine the model and control mitigation logic with an engine icing risk calculation tool capable of predicting the risk of engine icing based on current operating conditions. Upon detection of an operating region of risk for engine icing events, the control mitigation logic will seek to change the engine's operating point to a region of lower risk through the modulation of available control actuators while maintaining the desired engine thrust output. Follow-on work will assess the feasibility and effectiveness of such control-based mitigation strategies.

  2. Making the Hubble Space Telescope servicing mission safe

    NASA Technical Reports Server (NTRS)

    Bahr, N. J.; Depalo, S. V.

    1992-01-01

    The implementation of the HST system safety program is detailed. Numerous safety analyses are conducted through various phases of design, test, and fabrication, and results are presented to NASA management for discussion during dedicated safety reviews. Attention is given to the system safety assessment and risk analysis methodologies used, i.e., hazard analysis, fault tree analysis, and failure modes and effects analysis, and to how they are coupled with engineering and test analysis for a 'synergistic picture' of the system. Some preliminary safety analysis results, showing the relationship between hazard identification, control or abatement, and finally control verification, are presented as examples of this safety process.

  3. Towards the Integration of APECS with VE-Suite to Create a Comprehensive Virtual Engineering Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCorkle, D.; Yang, C.; Jordan, T.

    2007-06-01

    Modeling and simulation tools are becoming pervasive in the process engineering practice of designing advanced power generation facilities. These tools enable engineers to explore many what-if scenarios before cutting metal or constructing a pilot scale facility. While such tools enable investigation of crucial plant design aspects, typical commercial process simulation tools such as Aspen Plus®, gPROMS®, and HYSYS® still do not explore some plant design information, including computational fluid dynamics (CFD) models for complex thermal and fluid flow phenomena, economics models for policy decisions, operational data after the plant is constructed, and as-built information for use in as-designed models. Software tools must be created that allow disparate sources of information to be integrated if environments are to be constructed where process simulation information can be accessed. At the Department of Energy’s (DOE) National Energy Technology Laboratory (NETL), the Advanced Process Engineering Co-Simulator (APECS) has been developed as an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulation (e.g., Fluent® CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper, we discuss the initial phases of integrating APECS with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite utilizes the ActiveX (OLE Automation) controls in Aspen Plus wrapped by the CASI library developed by Reaction Engineering International to run the process simulation and query for unit operation results. This integration permits any application that uses the VE-Open interface to integrate with APECS co-simulations, enabling construction of the comprehensive virtual engineering environment needed for the rapid engineering of advanced power generation facilities.

  4. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    NASA Technical Reports Server (NTRS)

    Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick

    2009-01-01

    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decisionmakers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques. The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts as well as performing major probabilistic assessments used to support flight rationale and help establish program requirements. During 2008, the Analysis Group performed more than 70 assessments. Although all these assessments were important, some were instrumental in the decisionmaking processes for the Shuttle and Constellation Programs. Two of the more significant tasks were the Space Transportation System (STS)-122 Low Level Cutoff PRA for the SSP and the Orion Pad Abort One (PA-1) PRA for the CxP. These two activities, along with the numerous other tasks the Analysis Group performed in 2008, are summarized in this report. This report also highlights several ongoing and upcoming efforts to provide crucial statistical and probabilistic assessments, such as the Extravehicular Activity (EVA) PRA for the Hubble Space Telescope service mission and the first fully integrated PRAs for the CxP's Lunar Sortie and ISS missions.

  5. On "black swans" and "perfect storms": risk analysis and management when statistics are not enough.

    PubMed

    Paté-Cornell, Elisabeth

    2012-11-01

    Two images, "black swans" and "perfect storms," have struck the public's imagination and are used--at times indiscriminately--to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure--Bayesian probability--and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. These measures involve monitoring of signals, precursors, and near-misses, as well as reinforcement of the system and a thoughtful response strategy. It also involves careful examination of organizational factors such as the incentive system, which shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow "prediction" of accidents and catastrophes. Instead, it is meant to support effective risk management rather than simply reacting to the latest events and headlines. © 2012 Society for Risk Analysis.

  6. Bioastronautics Roadmap: A Risk Reduction Strategy for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Bioastronautics Critical Path Roadmap is the framework used to identify and assess the risks to crews exposed to the hazardous environments of space. It guides the implementation of research strategies to prevent or reduce those risks. Although the BCPR identifies steps that must be taken to reduce the risks to health and performance that are associated with human space flight, the BCPR is not a "critical path" analysis in the strict engineering sense. The BCPR will evolve to accommodate new information and technology development and will enable NASA to conduct a formal critical path analysis in the future. As a management tool, the BCPR provides information for making informed decisions about research priorities and resource allocation. The outcome-driven nature of the BCPR makes it amenable for assessing the focus, progress and success of the Bioastronautics research and technology program. The BCPR is also a tool for communicating program priorities and progress to the research community and NASA management.

  7. Why is Probabilistic Seismic Hazard Analysis (PSHA) still used?

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Stark, Philip B.; Geller, Robert J.

    2017-03-01

    Even though it has never been validated by objective testing, Probabilistic Seismic Hazard Analysis (PSHA) has been widely used for almost 50 years by governments and industry in applications with lives and property hanging in the balance, such as deciding safety criteria for nuclear power plants, making official national hazard maps, developing building code requirements, and determining earthquake insurance rates. PSHA rests on assumptions now known to conflict with earthquake physics; many damaging earthquakes, including the 1988 Spitak, Armenia, event and the 2011 Tohoku, Japan, event, have occurred in regions rated as relatively low-risk by PSHA hazard maps. No extant method, including PSHA, produces reliable estimates of seismic hazard. Earthquake hazard mitigation should be recognized to be inherently political, involving a tradeoff between uncertain costs and uncertain risks. Earthquake scientists, engineers, and risk managers can make important contributions to the hard problem of allocating limited resources wisely, but government officials and stakeholders must take responsibility for the risks of accidents due to natural events that exceed the adopted safety criteria.

  8. Risk Informed Design as Part of the Systems Engineering Process

    NASA Technical Reports Server (NTRS)

    Deckert, George

    2010-01-01

    This slide presentation reviews the importance of Risk Informed Design (RID) as an important feature of the systems engineering process. RID is based on the principle that risk is a design commodity such as mass, volume, cost or power. It also reviews Probabilistic Risk Assessment (PRA) as it is used in the product life cycle in the development of NASA's Constellation Program.

  9. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 2; Unsteady Analyses and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Allgood, Daniel

    2008-01-01

    Simulation technology can play an important role in rocket engine test facility design and development by assessing risks, providing analysis of dynamic pressure and thermal loads, identifying failure modes and predicting anomalous behavior of critical systems. This is especially true for facilities such as the proposed A-3 facility at NASA SSC because of a challenging operating envelope linked to variable throttle conditions at relatively low chamber pressures. Design support for establishing the feasibility of operating conditions and procedures is critical in such cases due to the possibility of startup/shutdown transients, moving shock structures, unsteady shock-boundary layer interactions, and engine and diffuser unstart modes that can result in catastrophic failure. Analysis of such systems is difficult due to the resolution requirements needed to accurately capture moving shock structures, shock-boundary layer interactions, two-phase flow regimes and engine unstart modes. In a companion paper, we demonstrate an advanced steady-state CFD analysis capability to evaluate supersonic diffuser and steam ejector performance in the sub-scale A-3 facility. In this paper we address transient issues with the operation of the facility, especially at startup and shutdown, and assess risks related to afterburning due to the interaction of a fuel-rich plume with oxygen that is a by-product of the steam ejectors. The primary areas addressed in this paper are: (1) analysis of unstart modes due to flow transients, especially during startup/ignition, (2) engine safety during the shutdown process, and (3) interaction of the steam ejectors with the primary plume, i.e., flow transients as well as the probability of afterburning. In this abstract we discuss unsteady analyses of the engine shutdown process; the final paper will also include analyses of a staged startup, drawdown of the engine test cell pressure, and risk assessment of potential afterburning in the facility. Unsteady simulations have been carried out to study the engine shutdown process in the facility and understand the physics behind the interactions between the steam ejectors, the test cell and the supersonic diffuser. As a first approximation, to understand the dominant unsteady mechanisms in the engine test cell and the supersonic diffuser, the turning duct in the facility was removed. As the engine loses power, a rarefaction wave travels downstream and disrupts the shock cell structure in the supersonic diffuser. Flow from the test cell is seen to expand into the supersonic diffuser section and re-pressurizes the area around the nozzle, along with an upstream-traveling compression wave that emanates from near the first-stage ejectors. Flow from the first-stage ejector expands to the center of the duct and a new shock train is formed between the first and second stage ejectors. Both ejector stages keep the facility pressurized and prevent any large-amplitude pressure fluctuations from affecting the engine nozzle. The resultant pressure loads experienced by the nozzle during the shutdown process are small.

  10. Annual Systems Engineering Conference: Focusing on Improving Performance of Defense Systems Programs (10th). Volume 3. Thursday Presentations

    DTIC Science & Technology

    2007-10-25

    [Slide fragments, from a NAWCWD briefing] Safe escape analysis requirements: calculate Phit, Pkill, and Pdet and check whether Phit <= .0001 for all launch conditions. If the Phit < .0001 requirement restricts tactical delivery conditions, the probability of a fragment hit may be further qualified by considering only ... Pkill (the UK uses a "self damage" metric). Risk analysis: "If the above procedures (Phit or Pkill < .0001) still result in restricting tactical delivery..."

  11. Safety analysis report for the Waste Storage Facility. Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bengston, S.J.

    1994-05-01

    This safety analysis report outlines the safety concerns associated with the Waste Storage Facility located in the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory. The three main objectives of the report are: define and document a safety basis for the Waste Storage Facility activities; demonstrate how the activities will be carried out to adequately protect the workers, public, and environment; and provide a basis for review and acceptance of the identified risk that the managers, operators, and owners will assume.

  12. Towards Coupling of Macroseismic Intensity with Structural Damage Indicators

    NASA Astrophysics Data System (ADS)

    Kouteva, Mihaela; Boshnakov, Krasimir

    2016-04-01

    Knowledge of ground motion acceleration time histories during earthquakes is essential to understanding the earthquake-resistant behaviour of structures. Peak and integral ground motion parameters such as peak ground motion values (acceleration, velocity and displacement), measures of the frequency content of ground motion, duration of strong shaking and various intensity measures play important roles in the seismic evaluation of existing facilities and the design of new systems. Macroseismic intensity is an earthquake measure related to seismic hazard and seismic risk description. A detailed understanding of the correlations between earthquake damage potential and macroseismic intensity is an important issue in engineering seismology and earthquake engineering. Reliable earthquake hazard estimation is the major prerequisite to successful disaster risk management. The use of advanced earthquake engineering approaches for structural response modelling is essential for reliable evaluation of the damage accumulated in existing buildings and structures due to the history of seismic actions occurring during their lifetime. Full nonlinear analysis taking into account a single event or a series of earthquakes, together with the large set of elaborated damage indices, are suitable contemporary tools to cope with this demanding task. This paper presents some results on the correlation between observational damage states, ground motion parameters and selected analytical damage indices. Damage indices are computed on the basis of nonlinear time history analysis of a test reinforced structure characterising the building stock of the Mediterranean region, designed according to the earthquake-resistant requirements of the mid-20th century.

  13. Quality Interaction Between Mission Assurance and Project Team Members

    NASA Technical Reports Server (NTRS)

    Kwong-Fu, Helenann H.; Wilson, Robert K.

    2006-01-01

    Mission Assurance independent assessments started during the development cycle and continued through post-launch operations. In operations, the health and safety of the Observatory is of utmost importance. Therefore, Mission Assurance must ensure requirements compliance and focus on process improvements required across the operational systems, including new/modified products, tools, and procedures. The deployment of the interactive model involves three objectives: Team Member Interaction, Good Root Cause Analysis Practices, and Risk Assessment to avoid recurrences. In applying this model, we used a metric-based measurement process, which was found to have the most significant effect. This points to the importance of focusing on a combination of root cause analysis and risk approaches, allowing the engineers to prioritize and quantify their corrective actions based on a well-defined set of root cause definitions (i.e., closure criteria for problem reports), success criteria, and risk rating definitions.

  14. ESMD Risk Management Workshop: Systems Engineering and Integration Risks

    NASA Technical Reports Server (NTRS)

    Thomas, L. Dale

    2005-01-01

    This report has been developed by the National Aeronautics and Space Administration (NASA) Exploration Systems Mission Directorate (ESMD) Risk Management team in close coordination with the Systems Engineering Team. This document provides a point-in-time, cumulative, summary of key lessons learned derived from the SE RFP Development process. Lessons learned invariably address challenges and risks and the way in which these areas have been addressed. Accordingly the risk management thread is woven throughout the document.

  15. Design and Demonstration of Emergency Control Modes for Enhanced Engine Performance

    NASA Technical Reports Server (NTRS)

    Liu, Yuan; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A design concept is presented for developing control modes that enhance aircraft engine performance during emergency flight scenarios. The benefits of increased engine performance to overall vehicle survivability during these situations may outweigh the accompanying elevated risk of engine failure. The objective involves building control logic that can consistently increase engine performance beyond designed maximum levels based on an allowable heightened probability of failure. This concept is applied to two previously developed control modes: an overthrust mode that increases maximum engine thrust output and a faster response mode that improves thrust response to dynamic throttle commands. This paper describes the redesign of these control modes and presents simulation results demonstrating both enhanced engine performance and robust maintenance of the desired elevated risk level.
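    To make the stated principle concrete, here is a minimal sketch of how an allowable failure probability could be translated into a thrust-limit relaxation; the exponential risk model, its constants, and the function name are hypothetical stand-ins for illustration, not the control logic developed in the paper.

    ```python
    # Notional sketch (not the NASA control logic): choose an overthrust limit so that an
    # assumed failure-risk model evaluated at that limit equals the allowable probability.
    import math

    def allowed_overthrust(p_allowed, p_baseline=1e-5, k=25.0):
        """Return a thrust-limit multiplier (>= 1.0) such that a hypothetical exponential
        risk model p(m) = p_baseline * exp(k * (m - 1)) equals p_allowed."""
        if p_allowed <= p_baseline:
            return 1.0  # no relaxation if the allowed risk is not above baseline
        return 1.0 + math.log(p_allowed / p_baseline) / k

    # Example: accepting a 1e-3 failure probability under these assumed numbers
    print(allowed_overthrust(1e-3))  # ~1.18, i.e., roughly 18% overthrust in this toy model
    ```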

  16. Gaseous emissions from a heavy-duty engine equipped with SCR aftertreatment system and fuelled with diesel and biodiesel: assessment of pollutant dispersion and health risk.

    PubMed

    Tadano, Yara S; Borillo, Guilherme C; Godoi, Ana Flávia L; Cichon, Amanda; Silva, Thiago O B; Valebona, Fábio B; Errera, Marcelo R; Penteado Neto, Renato A; Rempel, Dennis; Martin, Lucas; Yamamoto, Carlos I; Godoi, Ricardo H M

    2014-12-01

    The changes in the composition of fuels in combination with selective catalytic reduction (SCR) emission control systems bring new insights into the emission of gaseous and particulate pollutants. The major goal of our study was to quantify NOx, NO, NO2, NH3 and N2O emissions from a four-cylinder diesel engine operated with diesel and a blend of 20% soybean biodiesel. Exhaust fume samples were collected from bench dynamometer tests using a heavy-duty diesel engine equipped with SCR. The target gases were quantified by means of Fourier transform infrared spectrometry (FTIR). The use of the biodiesel blend resulted in lower concentrations in the exhaust fumes than the use of ultra-low sulfur diesel. NOx and NO concentrations were 68% to 93% lower in all experiments using SCR, when compared to no exhaust aftertreatment. All fuels increased NH3 and N2O emissions due to SCR, a secondary aerosol precursor and a major greenhouse gas, respectively. An AERMOD dispersion model analysis was performed on the results for each compound for the City of Curitiba, assumed to have a bus fleet equipped with diesel engines and SCR systems, in the winter and summer seasons. The health risks of the target gases were assessed using the Risk Assessment Information System. For 1-h exposure to NH3, considering the use of low sulfur diesel in buses equipped with SCR, the results indicated a low risk of developing a chronic non-cancer disease. The NOx and NO emissions were the lowest when SCR was used; however, it yielded the highest NH3 concentration. The current results are of paramount importance, mainly for countries that have not yet adopted the Euro V emission standards, such as China, India, Australia, or Russia, as well as those already adopting them. These findings are equally important for government agencies in highlighting the need for improvements in aftertreatment technologies to reduce pollutant emissions. Copyright © 2014. Published by Elsevier B.V.

  17. 2nd Generation RLV Risk Definition Program

    NASA Technical Reports Server (NTRS)

    Davis, Robert M.; Stucker, Mark (Technical Monitor)

    2000-01-01

    The 2nd Generation RLV Risk Reduction Mid-Term Report summarizes the status of Kelly Space & Technology's activities during the first two and one-half months of the program. This report was presented to the cognizant Contracting Officer's Technical Representative (COTR) and selected Marshall Space Flight Center staff members on 26 September 2000. The report has been approved and is distributed on CD-ROM (as a PowerPoint file) in accordance with the terms of the subject contract, and contains information and data addressing the following: (1) Launch services demand and requirements; (2) Architecture, alternatives, and requirements; (3) Costs, pricing, and business case analysis; (4) Commercial financing requirements, plans, and strategy; (5) System engineering processes and derived requirements; and (6) RLV system trade studies and design analysis.

  18. Application and Evaluation of Control Modes for Risk-Based Engine Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Liu, Yuan; Litt, Jonathan S.; Sowers, T. Shane; Owen, A. Karl (Compiler); Guo, Ten-Huei

    2014-01-01

    The engine control system for civil transport aircraft imposes operational limits on the propulsion system to ensure compliance with safety standards. However, during certain emergency situations, aircraft survivability may benefit from engine performance beyond its normal limits despite the increased risk of failure. Accordingly, control modes were developed to improve the maximum thrust output and responsiveness of a generic high-bypass turbofan engine. The algorithms were designed such that the enhanced performance would always constitute an elevation in failure risk to a consistent predefined likelihood. This paper presents an application of these risk-based control modes to a combined engine/aircraft model. Through computer and piloted simulation tests, the aim is to present a notional implementation of these modes, evaluate their effects on a generic airframe, and demonstrate their usefulness during emergency flight situations. Results show that minimal control effort is required to compensate for the changes in flight dynamics due to control mode activation. The benefits gained from enhanced engine performance for various runway incursion scenarios are investigated. Finally, the control modes are shown to protect against potential instabilities during propulsion-only flight where all aircraft control surfaces are inoperable.

  19. Application and Evaluation of Control Modes for Risk-Based Engine Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Liu, Yuan; Litt, Jonathan S.; Sowers, T. Shane; Owen, A. Karl; Guo, Ten-Huei

    2015-01-01

    The engine control system for civil transport aircraft imposes operational limits on the propulsion system to ensure compliance with safety standards. However, during certain emergency situations, aircraft survivability may benefit from engine performance beyond its normal limits despite the increased risk of failure. Accordingly, control modes were developed to improve the maximum thrust output and responsiveness of a generic high-bypass turbofan engine. The algorithms were designed such that the enhanced performance would always constitute an elevation in failure risk to a consistent predefined likelihood. This paper presents an application of these risk-based control modes to a combined engine/aircraft model. Through computer and piloted simulation tests, the aim is to present a notional implementation of these modes, evaluate their effects on a generic airframe, and demonstrate their usefulness during emergency flight situations. Results show that minimal control effort is required to compensate for the changes in flight dynamics due to control mode activation. The benefits gained from enhanced engine performance for various runway incursion scenarios are investigated. Finally, the control modes are shown to protect against potential instabilities during propulsion-only flight where all aircraft control surfaces are inoperable.

  20. Probabilistic risk assessment of the Space Shuttle. Phase 3: A study of the potential of losing the vehicle during nominal operation. Volume 4: System models and data analysis

    NASA Technical Reports Server (NTRS)

    Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.

    1995-01-01

    In this volume, volume 4 (of five volumes), the discussion is focussed on the system models and related data references and has the following subsections: space shuttle main engine, integrated solid rocket booster, orbiter auxiliary power units/hydraulics, and electrical power system.

  1. Environmental (Saprozoic) Pathogens of Engineered Water Systems: Understanding Their Ecology for Risk Assessment and Management

    PubMed Central

    Ashbolt, Nicholas J.

    2015-01-01

    Major waterborne (enteric) pathogens are relatively well understood and treatment controls are effective when well managed. However, water-based, saprozoic pathogens that grow within engineered water systems (primarily within biofilms/sediments) cannot be controlled by water treatment alone prior to entry into water distribution and other engineered water systems. Growth within biofilms or as in the case of Legionella pneumophila, primarily within free-living protozoa feeding on biofilms, results from competitive advantage. Meaning, to understand how to manage water-based pathogen diseases (a sub-set of saprozoses) we need to understand the microbial ecology of biofilms; with key factors including biofilm bacterial diversity that influence amoebae hosts and members antagonistic to water-based pathogens, along with impacts from biofilm substratum, water temperature, flow conditions and disinfectant residual—all control variables. Major saprozoic pathogens covering viruses, bacteria, fungi and free-living protozoa are listed, yet today most of the recognized health burden from drinking waters is driven by legionellae, non-tuberculous mycobacteria (NTM) and, to a lesser extent, Pseudomonas aeruginosa. In developing best management practices for engineered water systems based on hazard analysis critical control point (HACCP) or water safety plan (WSP) approaches, multi-factor control strategies, based on quantitative microbial risk assessments need to be developed, to reduce disease from largely opportunistic, water-based pathogens. PMID:26102291

  2. Case Study of Engineering Risk in Automotive Industry

    NASA Astrophysics Data System (ADS)

    Popa, Dan Mihai

    2018-03-01

    The primary objective of this paper is to show where the engineering of risk management is placed and how its implementation has been attempted in multinational companies in the automotive industry in Romania. A large number of companies do not use a strategy to avoid engineering risk in their product designs. The main reason is not that these companies have not heard about standards for risk management such as ISO 31000; the problem is that the business units examined had merely set up a risk list at the beginning of the project, without any follow-up. The purpose of this article is to establish implementation risk tracking in automotive industry companies in Romania, driven by customer change requests to supplier companies within the quality process, in the research and development phase.

  3. Rockfall risk evaluation using geotechnical survey, remote sensing data, and GIS: a case study from western Greece

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos; Depountis, Nikolaos; Vagenas, Nikolaos; Kavoura, Katerina; Vlaxaki, Eleni; Kelasidis, George; Sabatakakis, Nikolaos

    2015-06-01

    In this paper a specific example of the synergistic use of geotechnical survey, remote sensing data and GIS for rockfall risk evaluation is presented. The study area is located in Western Greece. Extensive rockfalls have been recorded along the Patras - Ioannina highway just after the cable-stayed bridge of Rio-Antirrio, at the Klokova site. The rockfalls involve medium-sized limestone boulders with volumes up to 1.5 m3. A detailed engineering geological survey was conducted, including rockmass characterization, laboratory testing and geological-geotechnical mapping. Numerous rockfall trajectory simulations were performed. Rockfall risk along the road was estimated using spatial analysis in a GIS environment.

  4. Value-driven ERM: making ERM an engine for simultaneous value creation and value protection.

    PubMed

    Celona, John; Driver, Jeffrey; Hall, Edward

    2011-01-01

    Enterprise risk management (ERM) began as an effort to integrate the historically disparate silos of risk management in organizations. More recently, as recognition has grown of the need to cover the upside risks in value creation (financial and otherwise), organizations and practitioners have been searching for the means to do this. Existing tools such as heat maps and risk registers are not adequate for this task. Instead, a conceptually new value-driven framework is needed to realize the promise of enterprise-wide coverage of all risks, for both value protection and value creation. The methodology of decision analysis provides the means of capturing systemic, correlated, and value-creation risks on the same basis as value protection risks and has been integrated into the value-driven approach to ERM described in this article. Stanford Hospital and Clinics Risk Consulting and Strategic Decisions Group have been working to apply this value-driven ERM at Stanford University Medical Center. © 2011 American Society for Healthcare Risk Management of the American Hospital Association.

  5. Coal gasification systems engineering and analysis. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Feasibility analyses and systems engineering studies for a 20,000 tons per day medium Btu (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.

  6. Vulnerability survival analysis: a novel approach to vulnerability management

    NASA Astrophysics Data System (ADS)

    Farris, Katheryn A.; Sullivan, John; Cybenko, George

    2017-05-01

    Computer security vulnerabilities span across large, enterprise networks and have to be mitigated by security engineers on a routine basis. Presently, security engineers will assess their "risk posture" through quantifying the number of vulnerabilities with a high Common Vulnerability Scoring System (CVSS) score. Yet, little to no attention is given to the length of time by which vulnerabilities persist and survive on the network. In this paper, we review a novel approach to quantifying the length of time a vulnerability persists on the network, its time-to-death, and predictors of lower vulnerability survival rates. Our contribution is unique in that we apply the Cox proportional hazards regression model to real data from an operational IT environment. This paper provides a mathematical overview of the theory behind survival analysis methods, a description of our vulnerability data, and an interpretation of the results.
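    A minimal sketch of the kind of fit involved, assuming Python's lifelines package and toy data (the study itself uses real operational IT records and a richer covariate set):

    ```python
    # Illustrative sketch only (synthetic data): relating CVSS severity to how long a
    # vulnerability survives before remediation, via a Cox proportional hazards fit.
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical records: days a vulnerability persisted, whether it was remediated
    # (event=1) or still open at the end of observation (event=0), and its CVSS score.
    df = pd.DataFrame({
        "duration_days": [12, 45, 90, 30, 120, 60, 15, 75],
        "remediated":    [1,  1,  0,  1,  0,   1,  1,  0],
        "cvss":          [9.8, 7.5, 4.3, 8.1, 3.1, 6.5, 9.1, 5.0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration_days", event_col="remediated")
    cph.print_summary()  # a hazard ratio > 1 for cvss would mean severe flaws are closed sooner
    ```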

  7. A Model to Assess the Risk of Ice Accretion Due to Ice Crystal Ingestion in a Turbofan Engine and its Effects on Performance

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.

    2013-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that were attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was one or more of the following anomalies: degraded engine performance, engine rollback, compressor surge and stall, and flameout of the combustor. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool has an engine system thermodynamic cycle code, coupled with a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Assumptions are made to predict the complex physics involved in engine icing. Specifically, the code does not directly estimate ice accretion and does not have models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature must be near freezing or below, and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine, including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and time spent at that operating point in the flight trajectory. The computational tool can be used to assess specific turbine engines for their susceptibility to ice accretion in an ice crystal environment.
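    A minimal sketch of the two screening criteria quoted above (near-freezing wet-bulb temperature and melt ratio above 10%); the "near freezing" margin, the station list, and the function name are assumed for illustration, while the full NASA tool couples cycle, compressor-flow, and particle-melt codes not reproduced here:

    ```python
    # Minimal sketch of the two published screening criteria only.
    FREEZING_K = 273.15

    def ice_accretion_possible(wet_bulb_K, melt_ratio, margin_K=2.0):
        """Flag a blade-row location as at risk when the local wet-bulb temperature is near
        or below freezing and the local melt ratio exceeds 10% (criteria from the paper).
        `margin_K` ("near freezing") is an assumed illustration value."""
        return (wet_bulb_K <= FREEZING_K + margin_K) and (melt_ratio > 0.10)

    # Example: screen hypothetical compressor stations at one flight operating point
    stations = [("LPC inlet", 271.0, 0.05), ("LPC stage 2", 272.5, 0.15), ("HPC inlet", 290.0, 0.30)]
    for name, t_wb, mr in stations:
        print(name, "at risk" if ice_accretion_possible(t_wb, mr) else "ok")
    ```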

  8. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although the single failure mode issue can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain the fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error in reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definition of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
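    A crisp (non-fuzzy) weighted geometric mean RPN is easy to sketch and shows the aggregation step the method builds on; the ratings and weights below are invented for illustration, and the paper's fuzzy numbers, minimum cut sets, and Copula correlation modeling are not reproduced:

    ```python
    # Minimal sketch of a crisp weighted geometric mean risk priority number (RPN).
    def wgm_rpn(severity, occurrence, detection, weights=(1/3, 1/3, 1/3)):
        """Weighted geometric mean RPN on 1-10 rating scales. Equal weights reduce to the
        geometric mean of the classic S, O, D factors."""
        ws, wo, wd = weights
        return (severity ** ws) * (occurrence ** wo) * (detection ** wd)

    # Example: compare two hypothetical blade failure modes
    print(wgm_rpn(9, 4, 6))                    # high-severity fatigue crack, equal weights -> 6.0
    print(wgm_rpn(5, 7, 3, (0.5, 0.3, 0.2)))   # wear mode with a severity-weighted scheme
    ```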

  9. Possible health impacts of Bt toxins and residues from spraying with complementary herbicides in genetically engineered soybeans and risk assessment as performed by the European Food Safety Authority EFSA.

    PubMed

    Then, Christoph; Bauer-Panskus, Andreas

    2017-01-01

    MON89788 was the first genetically engineered soybean worldwide to express a Bt toxin. Under the brand name Intacta, Monsanto subsequently engineered a stacked trait soybean using MON89788 and MON87701-this stacked soybean expresses an insecticidal toxin and is, in addition, tolerant to glyphosate. After undergoing risk assessment by the European Food Safety Authority (EFSA), the stacked event was authorised for import into the EU in June 2012, including for use in food and feed. This review discusses the health risks associated with Bt toxins present in these genetically engineered plants and the residues left from spraying with the complementary herbicide. We have compared the opinion published by EFSA [1] with findings from other publications in the scientific literature. It is evident that there are several issues that EFSA did not consider in detail and which will need further assessment: (1) There are potential combinatorial effects between plant components and other impact factors that might enhance toxicity. (2) It is known that Bt toxins have immunogenic properties; since soybeans naturally contain many allergens, these immunogenic properties raise specific questions. (3) Fully evaluated and reliable protocols for measuring the Bt concentration in the plants are needed, in addition to a comprehensive set of data on gene expression under varying environmental conditions. (4) Specific attention should be paid to the herbicide residues and their interaction with Bt toxins. The case of the Intacta soybeans highlights several regulatory problems with Bt soybean plants in the EU. Moreover, many of the issues raised also concern other genetically engineered plants that express insecticidal proteins, or are engineered to be resistant to herbicides, or have those two types of traits combined in stacked events. It remains a matter of debate whether the standards currently applied by the risk assessor, EFSA, and the risk manager, the EU Commission, meet the standards for risk analysis defined in EU regulations such as 1829/2003 and Directive 2001/18. While this publication cannot provide a final conclusion, it allows the development of some robust hypotheses that should be investigated further before such plants can be considered to be safe for health and the environment. In general, the concept of comparative risk assessment needs some major revision. Priority should be given to developing more targeted approaches. As shown in the case of Intacta, these approaches should include: (i) systematic investigation of interactions between the plant genome and environmental stressors as well as their impact on gene expression and plant composition; (ii) detailed investigations of the toxicity of Bt toxins; (iii) assessment of combinatorial effects taking into account long-term effects and the residues from spraying with complementary herbicides; (iv) investigation into the impact on the immune and hormonal systems and (v) investigation of the impact on the intestinal microbiome after consumption. Further and in general, stacked events displaying a high degree of complexity due to possible interactions should not undergo a lower level of risk assessment than the parental plants.

  10. Occupational exposures to engine exhausts and other PAHs and breast cancer risk: A population-based case-control study.

    PubMed

    Rai, Rajni; Glass, Deborah C; Heyworth, Jane S; Saunders, Christobel; Fritschi, Lin

    2016-06-01

    Some previous studies have suggested that exposure to engine exhausts may increase risk of breast cancer. In a population-based case-control study of breast cancer in Western Australia we assessed occupational exposure to engine exhausts using questionnaires and telephone interviews. Odds Ratios (OR) and 95% Confidence Intervals (CI) were calculated using logistic regression. We found no association between risk of breast cancer and occupational exposure to diesel exhaust (OR 1.07, 95%CI: 0.81-1.41), gasoline exhaust (OR 0.98, 95%CI: 0.74-1.28), or other exhausts (OR 1.08, 95%CI: 0.29-4.08). There were also no significant dose- or duration-response relationships. This study did not find evidence supporting the association between occupational exposures to engine exhausts and breast cancer risk. Am. J. Ind. Med. 59:437-444, 2016. © 2016 Wiley Periodicals, Inc.
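    For readers unfamiliar with how odds ratios and confidence intervals fall out of a logistic regression fit, here is an illustrative sketch on synthetic data (not the study's data), assuming the statsmodels package:

    ```python
    # Illustrative only: estimating an odds ratio and its 95% CI for a binary exposure.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    exposed = rng.integers(0, 2, n)                     # hypothetical exhaust-exposure flag
    logit_p = -1.0 + 0.07 * exposed                     # assumed true log-odds effect
    case = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # case/control outcome

    X = sm.add_constant(exposed.astype(float))
    res = sm.Logit(case, X).fit(disp=False)
    or_est = np.exp(res.params[1])                      # odds ratio for exposure
    or_ci = np.exp(res.conf_int()[1])                   # 95% confidence interval
    print(f"OR = {or_est:.2f}, 95% CI = ({or_ci[0]:.2f}, {or_ci[1]:.2f})")
    ```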

  11. JPL Contamination Control Engineering

    NASA Technical Reports Server (NTRS)

    Blakkolb, Brian

    2013-01-01

    JPL has extensive expertise fielding contamination-sensitive missions, in house and with our NASA/industry/academic partners, including the development and implementation of performance-driven cleanliness requirements for a wide range of missions and payloads (UV-Vis-IR: GALEX, Dawn, Juno, WFPC-II, AIRS, TES, et al.; propulsion; thermal control; robotic sample acquisition systems). Contamination control engineering spans the mission life cycle: system and payload requirements derivation, analysis, and contamination control implementation plans; hardware design, risk trades, and requirements V&V; assembly, integration, and test planning and implementation; launch site operations and launch vehicle/payload integration; and flight operations. Personnel on staff have expertise with space materials development and flight experiments. JPL has the capabilities and expertise to successfully address contamination issues presented by space and habitable environments, has extensive experience fielding and managing contamination-sensitive missions, and maintains an excellent working relationship with the aerospace contamination control engineering community.

  12. Rapid Modeling and Analysis Tools: Evolution, Status, Needs and Directions

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Stone, Thomas J.; Ransom, Jonathan B. (Technical Monitor)

    2002-01-01

    Advanced aerospace systems are becoming increasingly more complex, and customers are demanding lower cost, higher performance, and high reliability. Increased demands are placed on the design engineers to collaborate and integrate design needs and objectives early in the design process to minimize risks that may occur later in the design development stage. High performance systems require better understanding of system sensitivities much earlier in the design process to meet these goals. The knowledge, skills, intuition, and experience of an individual design engineer will need to be extended significantly for the next generation of aerospace system designs. Then a collaborative effort involving the designer, rapid and reliable analysis tools and virtual experts will result in advanced aerospace systems that are safe, reliable, and efficient. This paper discusses the evolution, status, needs and directions for rapid modeling and analysis tools for structural analysis. First, the evolution of computerized design and analysis tools is briefly described. Next, the status of representative design and analysis tools is described along with a brief statement on their functionality. Then technology advancements to achieve rapid modeling and analysis are identified. Finally, potential future directions including possible prototype configurations are proposed.

  13. Interaction of engineered nanoparticles with various components of the environment and possible strategies for their risk assessment.

    PubMed

    Bhatt, Indu; Tripathi, Bhumi Nath

    2011-01-01

    Nanoparticles are materials with at least two dimensions between 1 and 100 nm. Most of these nanoparticles are natural products, but their tremendous commercial use has boosted the artificial synthesis of these particles (engineered nanoparticles). Accelerated production and use of these engineered nanoparticles may cause their release into the environment and facilitate frequent interactions with biotic and abiotic components of ecosystems. Despite remarkable commercial benefits, their presence in nature may cause hazardous biological effects. Therefore, a detailed understanding of their sources, release, interaction with the environment, and possible risk assessment would provide a basis for the safer use of engineered nanoparticles with minimal or no hazardous impact on the environment. Keeping all these points in mind, the present review provides updated information on various aspects, e.g. sources, different types, synthesis, interaction with the environment, and possible strategies for risk management of engineered nanoparticles. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Space systems engineering and risk management - joined at the hip

    NASA Technical Reports Server (NTRS)

    Rose, James R.

    2004-01-01

    This paper explores the separate skills and capabilities practiced until now, and the powerful coupling to be achieved, practically and effectively, in implementing a space mission, from inception (pre-phase A) to the end of Operations (phase E). The use of risk assessment techniques in balancing cost risk against performance risk, and the application of the systems engineering team in these trades, is the key to achieving this new implementation paradigm.

  15. Stakeholder Perceptions of Risk in Construction.

    PubMed

    Zhao, Dong; McCoy, Andrew P; Kleiner, Brian M; Mills, Thomas H; Lingard, Helen

    2016-02-01

    Safety management in construction is an integral effort and its success requires inputs from all stakeholders across design and construction phases. Effective risk mitigation relies on the concordance of all stakeholders' risk perceptions. Many researchers have noticed the discordance of risk perceptions among critical stakeholders in safe construction work; however, few have provided quantifiable evidence describing it. In an effort to fill this perception gap, this research performs an experiment that investigates stakeholder perceptions of risk in construction. Data analysis confirms the existence of such discordance and indicates a trend in risk likelihood estimation: with risk perceptions ordered from low to high, the stakeholders are architects, contractors/safety professionals, and engineers. Taking prior studies into account, results also suggest that designers have improved their knowledge of building construction safety, but compared to builders they have more difficulty in reaching a consensus of perception. Findings of this research are intended to be used by risk management and decision makers to reassess stakeholders' varying judgments when considering injury prevention and hazard assessment.

  16. Stakeholder Perceptions of Risk in Construction

    PubMed Central

    Zhao, Dong; McCoy, Andrew P.; Kleiner, Brian M.; Mills, Thomas H.; Lingard, Helen

    2015-01-01

    Safety management in construction is an integral effort and its success requires inputs from all stakeholders across design and construction phases. Effective risk mitigation relies on the concordance of all stakeholders' risk perceptions. Many researchers have noticed the discordance of risk perceptions among critical stakeholders in safe construction work; however, few have provided quantifiable evidence describing it. In an effort to fill this perception gap, this research performs an experiment that investigates stakeholder perceptions of risk in construction. Data analysis confirms the existence of such discordance and indicates a trend in risk likelihood estimation: with risk perceptions ordered from low to high, the stakeholders are architects, contractors/safety professionals, and engineers. Taking prior studies into account, results also suggest that designers have improved their knowledge of building construction safety, but compared to builders they have more difficulty in reaching a consensus of perception. Findings of this research are intended to be used by risk management and decision makers to reassess stakeholders' varying judgments when considering injury prevention and hazard assessment. PMID:26441481

  17. Philosophy of ATHEANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, D.C.; Cooper, S.E.; Forester, J.A.

    ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.

  18. Execution of a self-directed risk assessment methodology to address HIPAA data security requirements

    NASA Astrophysics Data System (ADS)

    Coleman, Johnathan

    2003-05-01

    This paper analyzes the method and training of a self directed risk assessment methodology entitled OCTAVE (Operationally Critical Threat Asset and Vulnerability Evaluation) at over 170 DOD medical treatment facilities. It focuses specifically on how OCTAVE built interdisciplinary, inter-hierarchical consensus and enhanced local capabilities to perform Health Information Assurance. The Risk Assessment Methodology was developed by the Software Engineering Institute at Carnegie Mellon University as part of the Defense Health Information Assurance Program (DHIAP). The basis for its success is the combination of analysis of organizational practices and technological vulnerabilities. Together, these areas address the core implications behind the HIPAA Security Rule and can be used to develop Organizational Protection Strategies and Technological Mitigation Plans. A key component of OCTAVE is the inter-disciplinary composition of the analysis team (Patient Administration, IT staff and Clinician). It is this unique composition of analysis team members, along with organizational and technical analysis of business practices, assets and threats, which enables facilities to create sound and effective security policies. The Risk Assessment is conducted in-house, and therefore the process, results and knowledge remain within the organization, helping to build consensus in an environment of differing organizational and disciplinary perspectives on Health Information Assurance.

  19. Challenges Facing Design and Analysis Tools

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Broduer, Steve (Technical Monitor)

    2001-01-01

    The design and analysis of future aerospace systems will strongly rely on advanced engineering analysis tools used in combination with risk mitigation procedures. The implications of such a trend place increased demands on these tools to assess off-nominal conditions, residual strength, damage propagation, and extreme loading conditions in order to understand and quantify these effects as they affect mission success. Advances in computer hardware such as CPU processing speed, memory, secondary storage, and visualization provide significant resources for the engineer to exploit in engineering design. The challenges facing design and analysis tools fall into three primary areas. The first area involves mechanics needs such as constitutive modeling, contact and penetration simulation, crack growth prediction, damage initiation and progression prediction, transient dynamics and deployment simulations, and solution algorithms. The second area involves computational needs such as fast, robust solvers, adaptivity for model and solution strategies, control processes for concurrent, distributed computing for uncertainty assessments, and immersive technology. Traditional finite element codes still require fast direct solvers, which, when coupled with current CPU power, enable new insight as a result of high-fidelity modeling. The third area involves decision making by the analyst. This area involves the integration and interrogation of vast amounts of information, some global in character, while local details are critical and often drive the design. The proposed presentation will describe and illustrate these areas using composite structures, energy-absorbing structures, and inflatable space structures. While certain engineering approximations within the finite element model may be adequate for global response prediction, they generally are inadequate in a design setting or when local response prediction is critical. Pitfalls to be avoided and trends for emerging analysis tools will be described.

  20. An Integrated Low-Speed Performance and Noise Prediction Methodology for Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Olson, E. D.; Mavris, D. N.

    2000-01-01

    An integrated methodology has been assembled to compute the engine performance, takeoff and landing trajectories, and community noise levels for a subsonic commercial aircraft. Where feasible, physics-based noise analysis methods have been used to make the results more applicable to newer, revolutionary designs and to allow for a more direct evaluation of new technologies. The methodology is intended to be used with approximation methods and risk analysis techniques to allow for the analysis of a greater number of variable combinations while retaining the advantages of physics-based analysis. Details of the methodology are described and limited results are presented for a representative subsonic commercial aircraft.

  1. The High Stability Engine Control (HISTEC) Program: Flight Demonstration Phase

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Southwick, Robert D.; Gallops, George W.; Orme, John S.

    1998-01-01

    Future aircraft turbine engines, both commercial and military, must be able to accommodate expected increased levels of steady-state and dynamic engine-face distortion. The current approach of incorporating sufficient design stall margin to tolerate these increased levels of distortion would significantly reduce performance. The objective of the High Stability Engine Control (HISTEC) program is to design, develop, and flight-demonstrate an advanced, integrated engine control system that uses measurement-based estimates of distortion to enhance engine stability. The resulting distortion tolerant control reduces the required design stall margin, with a corresponding increase in performance and decrease in fuel burn. The HISTEC concept has been developed and was successfully flight demonstrated on the F-15 ACTIVE aircraft during the summer of 1997. The flight demonstration was planned and carried out in two phases, the first to show distortion estimation, and the second to show distortion accommodation. Post-flight analysis shows that the HISTEC technologies are able to successfully estimate and accommodate distortion, transiently setting the stall margin requirement on-line and in real-time. This allows the design stall margin requirement to be reduced, which in turn can be traded for significantly increased performance and/or decreased weight. Flight demonstration of the HISTEC technologies has significantly reduced the risk of transitioning the technology to tactical and commercial engines.

  2. A Methodology to Support Decision Making in Flood Plan Mitigation

    NASA Astrophysics Data System (ADS)

    Biscarini, C.; di Francesco, S.; Manciola, P.

    2009-04-01

    The focus of the present document is on specific decision-making aspects of flood risk analysis. A flood is the result of runoff from rainfall in quantities too great to be confined in the low-water channels of streams. Little can be done to prevent a major flood, but we may be able to minimize damage within the flood plain of the river. This broad definition encompasses many possible mitigation measures. Floodplain management considers the integrated view of all engineering, nonstructural, and administrative measures for managing (minimizing) losses due to flooding on a comprehensive scale. The structural measures are the flood-control facilities designed according to flood characteristics and they include reservoirs, diversions, levees or dikes, and channel modifications. Flood-control measures that modify the damage susceptibility of floodplains are usually referred to as nonstructural measures and may require minor engineering works. On the other hand, those measures designed to modify the damage potential of permanent facilities are called non-structural and allow reducing potential damage during a flood event. Technical information is required to support the tasks of problem definition, plan formulation, and plan evaluation. The specific information needed and the related level of detail are dependent on the nature of the problem, the potential solutions, and the sensitivity of the findings to the basic information. Actions performed to set up and lay out the study are preliminary to the detailed analysis. They include: defining the study scope and detail, the field data collection, a review of previous studies and reports, and the assembly of needed maps and surveys. Risk analysis can be viewed as having many components: risk assessment, risk communication and risk management. Risk assessment comprises an analysis of the technical aspects of the problem, risk communication deals with conveying the information and risk management involves the decision process. In the present paper we propose a novel methodology for supporting the priority setting in the assessment of such issues, beyond the typical "expected value" approach. Scientific contribution and management aspects are merged to create a simplified method for plan basin implementation, based on risk and economic analyses. However, the economic evaluation is not the sole criterion for flood-damage reduction plan selection. Among the different criteria that are relevant to the decision process, safety and quality of human life, economic damage, expenses related with the chosen measures and environmental issues should play a fundamental role on the decisions made by the authorities. Some numerical indices, taking in account administrative, technical, economical and risk aspects, are defined and are combined together in a mathematical formula that defines a Priority Index (PI). In particular, the priority index defines a ranking of priority interventions, thus allowing the formulation of the investment plan. The research is mainly focused on the technical factors of risk assessment, providing quantitative and qualitative estimates of possible alternatives, containing measures of the risk associated with those alternatives. Moreover, the issues of risk management are analyzed, in particular with respect to the role of decision making in the presence of risk information. However, a great effort is devoted to make this index easy to be formulated and effective to allow a clear and transparent comparison between the alternatives. 
Summarizing, this document describes the major steps for incorporating risk analysis into the decision-making process: framing the problem in terms of risk analysis, applying appropriate tools and techniques to obtain quantified results, and using the quantified results in the choice of structural and non-structural measures. In order to demonstrate the reliability of the proposed methodology and to show how risk-based information can be incorporated into a flood analysis process, its application to several river basins in central Italy is presented. The methodology is assessed by comparing different scenarios and showing that the optimal decision stems from a feasibility evaluation.
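
    The abstract does not give the actual formula for the Priority Index, so the following is only a minimal sketch of how such an index might combine normalized administrative, technical, economic, and risk indices into a single ranking score; the weights, index names, and candidate interventions below are hypothetical.

```python
# Minimal sketch of a Priority Index (PI) of the kind described above.
# The abstract does not give the actual formula; here we assume (hypothetically)
# that each intervention is scored by normalized indices for administrative,
# technical, economic, and risk aspects, combined as a weighted sum.

def priority_index(indices, weights):
    """Combine normalized [0, 1] indices into a single PI score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[k] * indices[k] for k in weights)

# Hypothetical weights and candidate flood-mitigation interventions.
weights = {"administrative": 0.15, "technical": 0.25, "economic": 0.30, "risk": 0.30}

candidates = {
    "levee upgrade":       {"administrative": 0.6, "technical": 0.8, "economic": 0.5, "risk": 0.9},
    "detention reservoir": {"administrative": 0.4, "technical": 0.7, "economic": 0.7, "risk": 0.8},
    "floodplain rezoning": {"administrative": 0.9, "technical": 0.5, "economic": 0.9, "risk": 0.6},
}

# Rank interventions by PI to obtain the investment plan ordering.
ranking = sorted(candidates, key=lambda c: priority_index(candidates[c], weights), reverse=True)
for name in ranking:
    print(f"{name}: PI = {priority_index(candidates[name], weights):.2f}")
```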

  3. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
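
    As a rough illustration of the estimation idea behind MBEC, the sketch below runs a generic discrete-time Kalman filter in which an unmeasured state is inferred from a measured one through an assumed linear coupling. It is not the C-MAPSS40k optimal tuner Kalman filter; all matrices and noise levels are illustrative assumptions.

```python
import numpy as np

# Generic discrete-time Kalman filter illustrating the idea behind MBEC: an
# unmeasured quantity (state x2) is estimated from a measured one (x1) through
# a model of their coupling. This is NOT the C-MAPSS40k optimal tuner Kalman
# filter; the matrices below are arbitrary illustrative values.

A = np.array([[0.95, 0.10],    # state transition (assumed coupling from x2 to x1)
              [0.00, 0.98]])
H = np.array([[1.0, 0.0]])     # only the first state is measured
Q = np.diag([1e-4, 1e-4])      # process noise covariance (assumed)
R = np.array([[1e-2]])         # measurement noise covariance (assumed)

x_hat = np.zeros(2)            # state estimate
P = np.eye(2)                  # estimate covariance

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.5])

for k in range(200):
    # Simulate the "true" plant (for illustration only).
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

    # Predict.
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update with the single available measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated unmeasured state: {x_hat[1]:.3f}, true value: {x_true[1]:.3f}")
```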

  4. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2015-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.

  5. Introducing a Transdisciplinary Approach in Studies regarding Risk Assessment and Management in Educational Programs for Environmental Engineers and Planners

    ERIC Educational Resources Information Center

    Menoni, Scira

    2006-01-01

    Purpose: The purpose of this paper is to discuss how long term risk prevention and civil protection may enter university programs for environmental engineers and urban and regional planners. Design/methodology/approach: First the distinction between long term risk prevention and emergency preparedness is made, showing that while the first has…

  6. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    NASA Astrophysics Data System (ADS)

    Qi, Wei

    2017-11-01

    Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation has relied on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. The approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on the expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost, and design life period. Two basins, with 54-year and 104-year flood data respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods over different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect the increase in cost required to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore only one design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
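
    The following sketch illustrates, under strong simplifying assumptions, the kind of calculation the abstract describes: the expected total cost of a design flood is the construction cost plus the expected damage accumulated over the design life, with non-stationarity entering through a drifting Gumbel location parameter. The distribution, trend, damage, and cost figures are invented for illustration and are not from the paper.

```python
import numpy as np

# Hedged sketch of the cost-benefit idea: total expected cost over a design life
# = construction cost (rising with the design flood) + expected flood damage
# (falling with it). Non-stationarity enters through a Gumbel location parameter
# mu_t that drifts linearly in time. All numbers are illustrative, not the paper's.

def exceedance_prob(q, mu, beta=50.0):
    """P(annual maximum flow > q) under a Gumbel distribution."""
    return 1.0 - np.exp(-np.exp(-(q - mu) / beta))

def expected_total_cost(q_design, design_life, mu0=300.0, trend=1.5,
                        damage_if_exceeded=5e6, unit_construction_cost=2e3):
    construction = unit_construction_cost * q_design
    # Expected damage summed over the design life (damage approximated as a
    # fixed loss whenever the design flood is exceeded in a given year).
    damage = sum(exceedance_prob(q_design, mu0 + trend * t) * damage_if_exceeded
                 for t in range(design_life))
    return construction + damage

# Compare design flood choices for a 50-year design life.
candidates = np.arange(300, 901, 50)
costs = [expected_total_cost(q, design_life=50) for q in candidates]
best = candidates[int(np.argmin(costs))]
print(f"least-cost design flood (illustrative): {best} m^3/s")
```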

  7. Review On Feasibility of Using Satellite Imaging for Risk Management of Derailment Related Turnout Component Failures

    NASA Astrophysics Data System (ADS)

    Dindar, Serdar; Kaewunruen, Sakdirat; Osman, Mohd H.

    2017-10-01

    One of the significant emerging advances in engineering, satellite imaging (SI) is becoming very common in all kinds of civil engineering projects, e.g., bridges, canals, dams, earthworks, power plants, and waterworks, as it provides an accurate, economical, and expeditious means of acquiring a rapid assessment. Satellite imaging services in general utilise combinations of high quality satellite imagery, image processing and interpretation to obtain specific required information, e.g. surface movement analysis. To extract, manipulate and provide such precise knowledge, several systems, including geographic information systems (GIS) and the global positioning system (GPS), are generally used for orthorectification. Although such systems are useful for mitigating project risk, their productiveness is arguable and the operational risk after application is open to discussion. Since the applicability of any novel technique to the railway industry is often measured in terms of whether or not in-depth knowledge has been gained, and to what degree, errors during its operation mean that such an application also generates risk in ongoing projects. This study reviews what is achievable for risk management of railway turnouts through satellite imaging. The methodology is established on the basis of other published articles in this area and the results of applications, in order to understand how applicable such an imaging process is to railway turnouts, and how sub-systems in turnouts can be effectively traced and operated with less risk than at present. The aim of this review study is that the railway sector better understand risk mitigation in such applications.

  8. Work stress and subsequent risk of internet addiction among information technology engineers in Taiwan.

    PubMed

    Chen, Sung-Wei; Gau, Susan Shur-Fen; Pikhart, Hynek; Peasey, Anne; Chen, Shih-Tse; Tsai, Ming-Chen

    2014-08-01

    Work stress, as defined by the Demand-Control-Support (DCS) model and the Effort-Reward Imbalance (ERI) model, has been found to predict risks for depression, anxiety, and substance addictions, but little research is available on work stress and Internet addiction. The aims of this study are to assess whether the DCS and ERI models predict subsequent risks of Internet addiction, and to examine whether these associations might be mediated by depression and anxiety. A longitudinal study was conducted in a sample (N=2,550) of 21-55 year old information technology engineers without Internet addiction. Data collection included questionnaires covering work stress, demographic factors, psychosocial factors, substance addictions, Internet-related factors, depression and anxiety at wave 1, and the Internet Addiction Test (IAT) at wave 2. Ordinal logistic regression was used to assess the associations between work stress and IAT; path analysis was adopted to evaluate potentially mediating roles of depression and anxiety. After 6.2 months of follow-up, 14.0% of subjects became problematic Internet users (IAT 40-69) and 4.1% pathological Internet users (IAT 70-100). Job strain was associated with an increased risk of Internet addiction (odds ratio [OR] of having a higher IAT outcome vs. a lower outcome was 1.53); high work social support reduced the risk of Internet addiction (OR=0.62). High ER ratio (OR=1.61) and high overcommitment (OR=1.68) were associated with increased risks of Internet addiction. Work stress defined by the DCS and ERI models predicted subsequent risks of Internet addiction.

  9. Investigating incidents of EHR failures in China: analysis of search engine reports.

    PubMed

    Lei, Jianbo; Guan, Pengcheng; Gao, Kaihua; Lu, Xueqing; Sittig, Dean

    2013-01-01

    As the healthcare industry becomes increasingly dependent on information technology (IT), the failure of computerized systems could have catastrophic effects on patient safety. We conducted an empirical study to analyze news articles available on the internet using Baidu and Google. 116 distinct EHR outage news reports were identified. We examined their characteristics, potential causes, and possible preventive strategies. Risk management strategies based on these findings are discussed.

  10. Preliminary candidate advanced avionics system for general aviation

    NASA Technical Reports Server (NTRS)

    Mccalla, T. M.; Grismore, F. L.; Greatline, S. E.; Birkhead, L. M.

    1977-01-01

    An integrated avionics system design was carried out to the level which indicates subsystem function and the methods of overall system integration. Sufficient detail was included to allow identification of possible system component technologies, and to perform reliability, modularity, maintainability, cost, and risk analysis on the system design. Retrofit to older aircraft and the availability of this system for single-engine, two-place aircraft were considered.

  11. Using trading zones and life cycle analysis to understand nanotechnology regulation.

    PubMed

    Wardak, Ahson; Gorman, Michael E

    2006-01-01

    This article reviews the public health and environmental regulations applicable to nanotechnology using a life cycle model from basic research through end-of-life for products. Given nanotechnology's immense promise and public investment, regulations are important, balancing risk with the public good. Trading zones and earth systems engineering management assist in explaining potential solutions to gaps in an otherwise complex, overlapping regulatory system.

  12. A Risk Assessment Architecture for Enhanced Engine Operation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Sharp, Lauren M.; Guo, Ten-Huei

    2010-01-01

    On very rare occasions, in-flight emergencies have occurred that required the pilot to utilize the aircraft's capabilities to the fullest extent possible, sometimes using actuators in ways for which they were not intended. For instance, when flight control has been lost due to damage to the hydraulic systems, pilots have had to use engine thrust to maneuver the plane to the ground and in for a landing. To assist the pilot in these situations, research is being performed to enhance the engine operation by making it more responsive or able to generate more thrust. Enabled by modification of the propulsion control, enhanced engine operation can increase the probability of a safe landing during an inflight emergency. However, enhanced engine operation introduces risk as the nominal control limits, such as those on shaft speed, temperature, and acceleration, are exceeded. Therefore, an on-line tool for quantifying this risk must be developed to ensure that the use of an enhanced control mode does not actually increase the overall danger to the aircraft. This paper describes an architecture for the implementation of this tool. It describes the type of data and algorithms required and the information flow, and how the risk based on engine component lifing and operability for enhanced operation is determined.
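
    The paper's lifing and operability models are not reproduced here; the toy sketch below only illustrates how per-component failure probabilities that grow with over-thrust might be aggregated into an overall engine risk, assuming independent failure modes. The component names, baseline probabilities, and growth law are hypothetical.

```python
from math import prod

# Toy aggregation of the kind of risk quantification described above: each
# engine component contributes a failure probability that grows with the
# requested thrust above the nominal limit, and the overall risk assumes
# independent failure modes. The functional forms are hypothetical, not the
# paper's lifing/operability models.

def component_failure_prob(base_prob, sensitivity, overthrust_frac):
    """Failure probability for one component at a given over-thrust fraction."""
    return min(1.0, base_prob * (1.0 + sensitivity * overthrust_frac) ** 3)

components = {            # (baseline probability, sensitivity to over-thrust)
    "HP turbine blades": (1e-5, 40.0),
    "HP shaft":          (5e-6, 25.0),
    "combustor liner":   (2e-6, 15.0),
}

def engine_failure_prob(overthrust_frac):
    p_survive = prod(1.0 - component_failure_prob(p0, s, overthrust_frac)
                     for p0, s in components.values())
    return 1.0 - p_survive

for over in (0.0, 0.05, 0.10):   # 0%, 5%, 10% above the nominal thrust limit
    print(f"over-thrust {over:.0%}: P(engine failure) ~ {engine_failure_prob(over):.2e}")
```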

  13. 14 CFR 23.903 - Engines.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... engine compartment) of any system that can affect an engine (other than a fuel tank if only one fuel tank...) Starting and stopping (piston engine). (1) The design of the installation must be such that risk of fire or...

  14. 14 CFR 23.903 - Engines.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... engine compartment) of any system that can affect an engine (other than a fuel tank if only one fuel tank...) Starting and stopping (piston engine). (1) The design of the installation must be such that risk of fire or...

  15. Risk assessment techniques with applicability in marine engineering

    NASA Astrophysics Data System (ADS)

    Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.

    2015-11-01

    Nowadays risk management is a carefully planned process. The task of risk management is organically woven into the general problem of increasing the efficiency of business. A passive attitude to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since in order to manage risk it is necessary first to analyze and evaluate it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and their quantitative assessment; i.e., risk analysis methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: In this paper we consider Fault Tree Analysis (FTA) as a risk assessment technique. The objectives are: to understand the purpose of FTA, to understand and apply the rules of Boolean algebra, to analyse a simple system using FTA, and to review the advantages and disadvantages of FTA. Research and methodology: The main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the Top event. The steps of this analysis are: examination of the system from the top down, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the cause of the Top event. Results: At the end of the study the following are obtained: critical areas, fault tree logic diagrams, and the probability of the Top event. These results can be used for risk assessment analyses.
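
    A minimal numerical illustration of the Boolean-algebra step is given below: with independent basic events, an AND gate multiplies probabilities and an OR gate combines them as one minus the product of the complements, yielding the probability of the Top event. The example tree and probabilities are hypothetical.

```python
from math import prod

# Minimal fault tree evaluation, assuming independent basic events.
# AND gate: all inputs must fail -> product of probabilities.
# OR gate: any input failing suffices -> 1 - product of (1 - p).

def and_gate(probs):
    return prod(probs)

def or_gate(probs):
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical example: top event "loss of cooling" occurs if the pump fails
# OR both power sources fail.
p_pump = 1e-3
p_power_a = 5e-3
p_power_b = 5e-3

p_both_power = and_gate([p_power_a, p_power_b])   # 2.5e-05
p_top = or_gate([p_pump, p_both_power])

print(f"P(top event) = {p_top:.6f}")
```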

  16. Pahoa geothermal industrial park. Engineering and economic analysis for direct applications of geothermal energy in an industrial park at Pahoa, Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreau, J.W.

    1980-12-01

    This engineering and economic study evaluated the potential for developing a geothermal industrial park in the Puna District near Pahoa on the Island of Hawaii. Direct heat industrial applications were analyzed from a marketing, engineering, economic, environmental, and sociological standpoint to determine the most viable industries for the park. An extensive literature search produced 31 existing processes currently using geothermal heat. An additional list was compiled indicating industrial processes that require heat that could be provided by geothermal energy. From this information, 17 possible processes were selected for consideration. Careful scrutiny and analysis of these 17 processes revealed three that justified detailed economic workups. The three processes chosen for detailed analysis were: an ethanol plant using bagasse and wood as feedstock; a cattle feed mill using sugar cane leaf trash as feedstock; and a papaya processing facility providing both fresh and processed fruit. In addition, a research facility to assess and develop other processes was treated as a concept. Consideration was given to the impediments to development, the engineering process requirements and the governmental support for each process. The study describes the geothermal well site chosen, the pipeline to transmit the hydrothermal fluid, and the infrastructure required for the industrial park. A conceptual development plan for the ethanol plant, the feedmill and the papaya processing facility was prepared. The study concluded that a direct heat industrial park in Pahoa, Hawaii, involves considerable risks.

  17. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  18. Intertwining Risk Insights and Design Decisions

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Jenkins, J. Steven

    2006-01-01

    The state of systems engineering is such that a form of early and continued use of risk assessments is conducted (as evidenced by NASA's adoption and use of the 'Continuous Risk Management' paradigm developed by SEI). ... However, these practices fall short of the ideal: (1) Integration between risk assessment techniques and other systems engineering tools is weak. (2) Risk assessment techniques and the insights they yield are only informally coupled to design decisions. (3) Individual risk assessment techniques lack the mix of breadth, fidelity and agility required to span the gamut of the design space. In this paper we present an approach that addresses these shortcomings. The hallmark of our approach is a simple representation comprising objectives (what the system is to do), risks (whose occurrence would detract from attainment of objectives) and activities (a.k.a. 'mitigations') that, if performed, will decrease those risks. These are linked to indicate by how much a risk would detract from attainment of an objective, and by how much an activity would reduce a risk. The simplicity of our representational framework gives it the breadth to encompass the gamut of the design space concerns, the agility to be utilized in even the earliest phases of designs, and the capability to connect to system engineering models and higher-fidelity risk tools. It is through this integration that we address the shortcomings listed above, and so achieve the intertwining between risk insights and design decisions needed to guide systems engineering towards superior final designs while avoiding costly rework to achieve them. The paper will use an example, constructed to be representative of space mission design, to illustrate our approach.
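
    The sketch below is one possible encoding of the simple representation described in the abstract: objectives, risks that detract from them by an impact fraction, and mitigations that reduce risk likelihoods. The arithmetic (expected attainment after mitigation) and all names and numbers are illustrative assumptions, not the authors' actual tool.

```python
# Sketch of the objectives / risks / mitigations representation described above.
# Numbers and names are hypothetical. Each risk detracts from objectives by an
# impact fraction; each selected mitigation reduces a risk by a given factor.

objectives = {"science return": 1.0, "mission lifetime": 1.0}   # full attainment = 1.0

risks = {
    "detector radiation damage": {"likelihood": 0.3,
                                  "impact": {"science return": 0.6, "mission lifetime": 0.2}},
    "thruster valve leak":       {"likelihood": 0.1,
                                  "impact": {"mission lifetime": 0.8}},
}

mitigations = {
    "add shielding":       {"detector radiation damage": 0.7},  # reduces likelihood by 70%
    "valve qualification": {"thruster valve leak": 0.5},
}

def expected_attainment(selected):
    """Expected objective attainment after applying the selected mitigations."""
    result = dict(objectives)
    for risk, info in risks.items():
        likelihood = info["likelihood"]
        for m in selected:
            likelihood *= 1.0 - mitigations[m].get(risk, 0.0)
        for obj, impact in info["impact"].items():
            result[obj] -= likelihood * impact
    return result

print("no mitigations:  ", expected_attainment([]))
print("both mitigations:", expected_attainment(["add shielding", "valve qualification"]))
```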

  19. NURail Research Experience for Undergraduates (REU) Summer Program in Multimodal Freight Transportation Risk.

    DOT National Transportation Integrated Search

    2013-08-01

    NURail hosted an REU Summer Program in Multimodal Freight Transportation Risk at the Rail Transportation and Engineering Center (RailTEC) in the Department of Civil and Environmental Engineering at the University of Illinois at Urbana-Champaign (UIUC...

  20. An Integrated Approach for Physical and Cyber Security Risk Assessment: The U.S. Army Corps of Engineers Common Risk Model for Dams

    DTIC Science & Technology

    2016-07-01

    Common Risk Model for Dams (CRM-D) Methodology,” for the Director, Cost Assessment and Program Evaluation, Office of Secretary of Defense and the...for Dams (CRM-D), developed by the U.S. Army Corps of Engineers (USACE) in collaboration with the Institute for Defense Analyses (IDA) and the U.S...and cyber security risks across a portfolio of dams, and informing decisions on how to mitigate those risks. The CRM-D can effectively quantify the

  1. Gaming, texting, learning? Teaching engineering ethics through students' lived experiences with technology.

    PubMed

    Voss, Georgina

    2013-09-01

    This paper examines how young people's lived experiences with personal technologies can be used to teach engineering ethics in a way which facilitates greater engagement with the subject. Engineering ethics can be challenging to teach: as a form of practical ethics, it is framed around future workplace experience in a professional setting which students are assumed to have no prior experience of. Yet the current generations of engineering students, who have been described as 'digital natives', do however have immersive personal experience with digital technologies; and experiential learning theory describes how students learn ethics more successfully when they can draw on personal experience which gives context and meaning to abstract theories. This paper reviews current teaching practices in engineering ethics; and examines young people's engagement with technologies including cell phones, social networking sites, digital music and computer games to identify social and ethical elements of these practices which have relevance for the engineering ethics curricula. From this analysis three case studies are developed to illustrate how facets of the use of these technologies can be drawn on to teach topics including group work and communication; risk and safety; and engineering as social experimentation. Means for bridging personal experience and professional ethics when teaching these cases are discussed. The paper contributes to research and curriculum development in engineering ethics education, and to wider education research about methods of teaching 'the net generation'.

  2. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    NASA Astrophysics Data System (ADS)

    Phillips, Dewanne Marie

    Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority, addressed early in life cycle development, to contribute to a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation will identify knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, various threats, defects, and vulnerabilities that impact space systems from hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrate that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, which is a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time".
To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.

  3. Fuel-injector/air-swirl characterization

    NASA Technical Reports Server (NTRS)

    Mcvey, J. B.; Kennedy, J. B.; Bennett, J. C.

    1985-01-01

    The objectives of this program are to establish an experimental data base documenting the behavior of gas turbine engine fuel injector sprays as the spray interacts with the swirling gas flow existing in the combustor dome, and to conduct an assessment of the validity of current analytical techniques for predicting fuel spray behavior. Emphasis is placed on the acquisition of data using injector/swirler components which closely resemble components currently in use in advanced aircraft gas turbine engines, conducting tests under conditions that closely simulate or closely approximate those developed in actual combustors, and conducting a well-controlled experimental effort which will comprise using a combination of low-risk experiments and experiments requiring the use of state-of-the-art diagnostic instrumentation. Analysis of the data is to be conducted using an existing, TEACH-type code which employs a stochastic analysis of the motion of the dispersed phase in the turbulent continuum flow field.

  4. Systems Engineering and Integration (SE and I)

    NASA Technical Reports Server (NTRS)

    Chevers, ED; Haley, Sam

    1990-01-01

    The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system slightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project data bases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.

  5. Climatic and psychosocial risks of heat illness incidents on construction site.

    PubMed

    Jia, Yunyan Andrea; Rowlinson, Steve; Ciccarelli, Marina

    2016-03-01

    The study presented in this paper aims to identify prominent risks leading to heat illness in summer among construction workers that can be prioritised for developing effective interventions. The samples are 216 construction worker cases at the individual level and 26 construction project cases at the organisation level. A grounded theory is generated to define the climatic heat and psychosocial risks and the relationships between risks, timing and effectiveness of interventions. The theoretical framework is then used to guide content analysis of 36 individual onsite heat illness cases to identify prominent risks. The results suggest that heat stress risks on construction sites are socially constructed and can be effectively managed through elimination at the supply chain level, effective engineering control, proactive control of the risks through individual interventions, and reactive control through mindful recognition of and response to early symptoms. The role of management infrastructure as a base for effective interventions is discussed. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Preliminary Analysis of Aircraft Loss of Control Accidents: Worst Case Precursor Combinations and Temporal Sequencing

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Groff, Loren; Newman, Richard L.; Foster, John V.; Crider, Dennis H.; Klyde, David H.; Huston, A. McCall

    2014-01-01

    Aircraft loss of control (LOC) is a leading cause of fatal accidents across all transport airplane and operational classes, and can result from a wide spectrum of hazards, often occurring in combination. Technologies developed for LOC prevention and recovery must therefore be effective under a wide variety of conditions and uncertainties, including multiple hazards, and their validation must provide a means of assessing system effectiveness and coverage of these hazards. This requires the definition of a comprehensive set of LOC test scenarios based on accident and incident data as well as future risks. This paper defines a comprehensive set of accidents and incidents over a recent 15 year period, and presents preliminary analysis results to identify worst-case combinations of causal and contributing factors (i.e., accident precursors) and how they sequence in time. Such analyses can provide insight in developing effective solutions for LOC, and form the basis for developing test scenarios that can be used in evaluating them. Preliminary findings based on the results of this paper indicate that system failures or malfunctions, crew actions or inactions, vehicle impairment conditions, and vehicle upsets contributed the most to accidents and fatalities, followed by inclement weather or atmospheric disturbances and poor visibility. Follow-on research will include finalizing the analysis through a team consensus process, defining future risks, and developing a comprehensive set of test scenarios with correlation to the accidents, incidents, and future risks. Since enhanced engineering simulations are required for batch and piloted evaluations under realistic LOC precursor conditions, these test scenarios can also serve as a high-level requirement for defining the engineering simulation enhancements needed for generating them.

  7. Advanced Gas Turbine (AGT) powertrain system development for automotive applications report

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This report describes progress and work performed during January through June 1984 to develop technology for an Advanced Gas Turbine (AGT) engine for automotive applications. Work performed during the first eight periods initiated design and analysis, ceramic development, component testing, and test bed evaluation. Project effort conducted under this contract is part of the DOE Gas Turbine Highway Vehicle System Program. This program is aimed at providing the United States automotive industry the high-risk, long-range technology necessary to produce gas turbine engines for automobiles with reduced fuel consumption and reduced environmental impact. Technology resulting from this program is intended to reach the marketplace by the early 1990s.

  8. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiencies across the assurance functions. The MBSE environment supports not only system architecture development, but also Systems Safety, Reliability and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for supporting a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).

  9. 49 CFR Appendix B to Part 222 - Alternative Safety Measures

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-Engineering ASMs, and Engineering ASMs. Modified SSMs are SSMs that do not fully comply with the provisions... reduction credit for pre-existing modified SSMs under the final rule. Non-engineering ASMs consist of... reduce risk within a quiet zone. Engineering ASMs consist of engineering improvements that address...

  10. 49 CFR Appendix B to Part 222 - Alternative Safety Measures

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Engineering ASMs, and Engineering ASMs. Modified SSMs are SSMs that do not fully comply with the provisions... reduction credit for pre-existing modified SSMs under the final rule. Non-engineering ASMs consist of... reduce risk within a quiet zone. Engineering ASMs consist of engineering improvements that address...

  11. Conceptual Launch Vehicle and Spacecraft Design for Risk Assessment

    NASA Technical Reports Server (NTRS)

    Motiwala, Samira A.; Mathias, Donovan L.; Mattenberger, Christopher J.

    2014-01-01

    One of the most challenging aspects of developing human space launch and exploration systems is minimizing and mitigating the many potential risk factors to ensure the safest possible design while also meeting the required cost, weight, and performance criteria. In order to accomplish this, effective risk analyses and trade studies are needed to identify key risk drivers, dependencies, and sensitivities as the design evolves. The Engineering Risk Assessment (ERA) team at NASA Ames Research Center (ARC) develops advanced risk analysis approaches, models, and tools to provide such meaningful risk and reliability data throughout vehicle development. The goal of the project presented in this memorandum is to design a generic launch vehicle and spacecraft architecture that can be used to develop and demonstrate these new risk analysis techniques without relying on other proprietary or sensitive vehicle designs. To accomplish this, initial spacecraft and launch vehicle (LV) designs were established using historical sizing relationships for a mission delivering four crewmembers and equipment to the International Space Station (ISS). Mass-estimating relationships (MERs) were used to size the crew capsule and launch vehicle, and a combination of optimization techniques and iterative design processes were employed to determine a possible two-stage-to-orbit (TSTO) launch trajectory into a 350-kilometer orbit. Primary subsystems were also designed for the crewed capsule architecture, based on a 24-hour on-orbit mission with a 7-day contingency. Safety analysis was also performed to identify major risks to crew survivability and assess the system's overall reliability. These procedures and analyses validate that the architecture's basic design and performance are reasonable to be used for risk trade studies. While the vehicle designs presented are not intended to represent a viable architecture, they will provide a valuable initial platform for developing and demonstrating innovative risk assessment capabilities.
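
    As a rough illustration of sizing with mass-estimating relationships, the sketch below couples a hypothetical power-law MER to the ideal rocket equation and iterates to find the propellant load of a single stage. The MER coefficients, delta-v, and Isp are invented and do not reflect the memorandum's models.

```python
import math

# Toy illustration of sizing with a mass-estimating relationship (MER) and the
# ideal rocket equation. The MER form and coefficients are hypothetical, not
# those used in the memorandum.

G0 = 9.80665  # standard gravity, m/s^2

def stage_dry_mass(propellant_mass, a=0.12, b=0.95):
    """Hypothetical MER: dry mass as a power law of propellant mass (kg)."""
    return a * propellant_mass ** b

def size_stage(payload_mass, delta_v, isp, iterations=50):
    """Iteratively find the propellant mass giving the required delta-v."""
    ve = isp * G0
    mp = 1000.0  # initial guess, kg
    for _ in range(iterations):
        m_dry = stage_dry_mass(mp)
        m_final = payload_mass + m_dry
        # Rocket equation: dv = ve * ln(m0 / mf)  ->  solve for propellant.
        mp = m_final * (math.exp(delta_v / ve) - 1.0)
    return mp, stage_dry_mass(mp)

# Example: a stage carrying a 10 t capsule through 4.5 km/s at Isp = 450 s.
mp, md = size_stage(payload_mass=10_000.0, delta_v=4500.0, isp=450.0)
print(f"propellant ~{mp/1000:.1f} t, stage dry mass ~{md/1000:.1f} t")
```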

  12. Risk, Robustness and Water Resources Planning Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Borgomeo, Edoardo; Mortazavi-Naeini, Mohammad; Hall, Jim W.; Guillod, Benoit P.

    2018-03-01

    Risk-based water resources planning is based on the premise that water managers should invest up to the point where the marginal benefit of risk reduction equals the marginal cost of achieving that benefit. However, this cost-benefit approach may not guarantee robustness under uncertain future conditions, for instance under climatic changes. In this paper, we expand risk-based decision analysis to explore possible ways of enhancing robustness in engineered water resources systems under different risk attitudes. Risk is measured as the expected annual cost of water use restrictions, while robustness is interpreted in the decision-theoretic sense as the ability of a water resource system to maintain performance—expressed as a tolerable risk of water use restrictions—under a wide range of possible future conditions. Linking risk attitudes with robustness allows stakeholders to explicitly trade off incremental increases in robustness with investment costs for a given level of risk. We illustrate the framework through a case study of London's water supply system using state-of-the-art regional climate simulations to inform the estimation of risk and robustness.
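
    A hedged sketch of the risk and robustness measures described above: risk as the expected annual cost of water use restrictions across an ensemble of plausible futures, and robustness as the fraction of futures in which that cost stays within a tolerable level. The shortfall probabilities, costs, and capacity effects are invented for illustration.

```python
import numpy as np

# Risk = expected annual cost of water-use restrictions across an ensemble of
# plausible futures; robustness = fraction of futures in which that cost stays
# below a tolerable level. All numbers are illustrative assumptions.

rng = np.random.default_rng(42)
n_futures, n_years = 500, 30

def simulate_annual_restriction_cost(extra_capacity):
    """Mean annual restriction cost (arbitrary money units) per simulated future."""
    # Hypothetical: shortfall events become rarer as capacity is added.
    event_prob = np.clip(0.10 - 0.015 * extra_capacity, 0.005, None)
    events = rng.random((n_futures, n_years)) < event_prob
    cost_per_event = rng.gamma(shape=2.0, scale=20.0, size=(n_futures, n_years))
    return (events * cost_per_event).mean(axis=1)

tolerable = 2.0  # assumed risk tolerance (money units per year)
for capacity in [0, 2, 4]:
    annual_cost = simulate_annual_restriction_cost(capacity)
    risk = annual_cost.mean()                       # expected annual cost
    robustness = (annual_cost <= tolerable).mean()  # share of futures within tolerance
    print(f"capacity +{capacity}: risk = {risk:.2f}/yr, robustness = {robustness:.0%}")
```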

  13. Continuous Risk Management at NASA

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore F.; Rosenberg, Linda

    1999-01-01

    NPG 7120.5A, "NASA Program and Project Management Processes and Requirements" enacted in April, 1998, requires that "The program or project manager shall apply risk management principles..." The Software Assurance Technology Center (SATC) at NASA GSFC has been tasked with the responsibility for developing and teaching a systems level course for risk management that provides information on how to comply with this edict. The course was developed in conjunction with the Software Engineering Institute at Carnegie Mellon University, then tailored to the NASA systems community. This presentation will briefly discuss the six functions for risk management: (1) Identify the risks in a specific format; (2) Analyze the risk probability, impact/severity, and timeframe; (3) Plan the approach; (4) Track the risk through data compilation and analysis; (5) Control and monitor the risk; (6) Communicate and document the process and decisions. This risk management structure of functions has been taught to projects at all NASA Centers and is being successfully implemented on many projects. This presentation will give project managers the information they need to understand if risk management is to be effectively implemented on their projects at a cost they can afford.
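
    As an illustration of how the six CRM functions might be captured in practice, the sketch below defines a minimal risk-register record with a probability-times-impact score and a history of tracking notes. The fields and the 5x5 scoring convention are assumptions for illustration, not text from NPG 7120.5A or the SATC course.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical risk-register record reflecting the six CRM functions
# listed above (identify, analyze, plan, track, control, communicate). The
# fields and 5x5 scoring are an illustrative convention only.

@dataclass
class Risk:
    statement: str                     # identify: condition -> possible consequence
    probability: int                   # analyze: 1 (low) .. 5 (high)
    impact: int                        # analyze: 1 (low) .. 5 (high)
    timeframe: str                     # analyze: near / mid / far
    approach: str = "watch"            # plan: accept / mitigate / watch / research
    history: list = field(default_factory=list)   # track & communicate: dated notes

    @property
    def score(self) -> int:
        return self.probability * self.impact

    def update(self, note: str, probability=None, impact=None):
        """Control: record a status note and any rescoring."""
        if probability is not None:
            self.probability = probability
        if impact is not None:
            self.impact = impact
        self.history.append(note)

r = Risk("Given late delivery of the star tracker, integration may slip 2 months",
         probability=3, impact=4, timeframe="near", approach="mitigate")
r.update("Vendor added second shift; slip now unlikely", probability=2)
print(r.statement, "| score:", r.score)
```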

  14. Effective standards and regulatory tools for respiratory gas monitors and pulse oximeters: the role of the engineer and clinician.

    PubMed

    Weininger, Sandy

    2007-12-01

    Developing safe and effective medical devices involves understanding the hazardous situations that can arise in clinical practice and implementing appropriate risk control measures. The hazardous situations may have their roots in the design or in the use of the device. Risk control measures may be engineering or clinically based. A multidisciplinary team of engineers and clinicians is needed to fully identify and assess the risks and implement and evaluate the effectiveness of the control measures. In this paper, I use three issues, calibration/accuracy, response time, and protective measures/alarms, to highlight the contributions of these groups. This important information is captured in standards and regulatory tools to control risk for respiratory gas monitors and pulse oximeters. This paper begins with a discussion of the framework of safety, explaining how voluntary standards and regulatory tools work. The discussion is followed by an examination of how engineering and clinical knowledge are used to support the assurance of safety.

  15. Satellite-instrument system engineering best practices and lessons

    NASA Astrophysics Data System (ADS)

    Schueler, Carl F.

    2009-08-01

    This paper focuses on system engineering development issues driving satellite remote sensing instrumentation cost and schedule. A key best practice is early assessment of mission and instrumentation requirements priorities driving performance trades among major instrumentation measurements: Radiometry, spatial field of view and image quality, and spectral performance. Key lessons include attention to technology availability and applicability to prioritized requirements, care in applying heritage, approaching fixed-price and cost-plus contracts with appropriate attention to risk, and assessing design options with attention to customer preference as well as design performance, and development cost and schedule. A key element of success either in contract competition or execution is team experience. Perhaps the most crucial aspect of success, however, is thorough requirements analysis and flowdown to specifications driving design performance with sufficient parameter margin to allow for mistakes or oversights - the province of system engineering from design inception to development, test and delivery.

  16. Numerical Simulation of the RTA Combustion Rig

    NASA Technical Reports Server (NTRS)

    Davoudzadeh, Farhad; Buehrle, Robert; Liu, Nan-Suey; Winslow, Ralph

    2005-01-01

    The Revolutionary Turbine Accelerator (RTA)/Turbine Based Combined Cycle (TBCC) project is investigating turbine-based propulsion systems for access to space. NASA Glenn Research Center and GE Aircraft Engines (GEAE) planned to develop a ground demonstrator engine for validation testing. The demonstrator (RTA-1) is a variable cycle, turbofan ramjet designed to transition from an augmented turbofan to a ramjet that produces the thrust required to accelerate the vehicle from Sea Level Static (SLS) to Mach 4. The RTA-1 is designed to accommodate a large variation in bypass ratios from sea level static to Mach 4 conditions. Key components of this engine are new, such as a nickel alloy fan, advanced trapped vortex combustor, a Variable Area Bypass Injector (VABI), radial flameholders, and multiple fueling zones. A means to mitigate risks to the RTA development program was the use of extensive component rig tests and computational fluid dynamics (CFD) analysis.

  17. SECONDARY WASTE/ETF (EFFLUENT TREATMENT FACILITY) PRELIMINARY PRE-CONCEPTUAL ENGINEERING STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MAY TH; GEHNER PD; STEGEN GARY

    2009-12-28

    This pre-conceptual engineering study is intended to assist in supporting the critical decision (CD) 0 milestone by providing a basis for the justification of mission need (JMN) for the handling and disposal of liquid effluents. The ETF baseline strategy, to accommodate (WTP) requirements, calls for a solidification treatment unit (STU) to be added to the ETF to provide the needed additional processing capability. This STU is to process the ETF evaporator concentrate into a cement-based waste form. The cementitious waste will be cast into blocks for curing, storage, and disposal. This pre-conceptual engineering study explores this baseline strategy, in addition to other potential alternatives, for meeting the ETF future mission needs. Within each reviewed case study, a technical and facility description is outlined, along with a preliminary cost analysis and the associated risks and benefits.

  18. Engineered passive bioreactive barriers: risk-managing the legacy of industrial soil and groundwater pollution.

    PubMed

    Kalin, Robert M

    2004-06-01

    Permeable reactive barriers are a technology that is one decade old, with most full-scale applications based on abiotic mechanisms. Though there is extensive literature on engineered bioreactors, natural biodegradation potential, and in situ remediation, it is only recently that engineered passive bioreactive barrier technology is being considered at the commercial scale to manage contaminated soil and groundwater risks. Recent full-scale studies are providing the scientific confidence in our understanding of coupled microbial (and genetic), hydrogeologic, and geochemical processes in this approach and have highlighted the need to further integrate engineering and science tools.

  19. Performance of the UK Prospective Diabetes Study Risk Engine and the Framingham Risk Equations in Estimating Cardiovascular Disease in the EPIC- Norfolk Cohort

    PubMed Central

    Simmons, Rebecca K.; Coleman, Ruth L.; Price, Hermione C.; Holman, Rury R.; Khaw, Kay-Tee; Wareham, Nicholas J.; Griffin, Simon J.

    2009-01-01

    OBJECTIVE The purpose of this study was to examine the performance of the UK Prospective Diabetes Study (UKPDS) Risk Engine (version 3) and the Framingham risk equations (2008) in estimating cardiovascular disease (CVD) incidence in three populations: 1) individuals with known diabetes; 2) individuals with nondiabetic hyperglycemia, defined as A1C ≥6.0%; and 3) individuals with normoglycemia defined as A1C <6.0%. RESEARCH DESIGN AND METHODS This was a population-based prospective cohort (European Prospective Investigation of Cancer-Norfolk). Participants aged 40–79 years recruited from U.K. general practices attended a health examination (1993–1998) and were followed for CVD events/death until April 2007. CVD risk estimates were calculated for 10,137 individuals. RESULTS Over 10.1 years, there were 69 CVD events in the diabetes group (25.4%), 160 in the hyperglycemia group (17.7%), and 732 in the normoglycemia group (8.2%). Estimated CVD 10-year risk in the diabetes group was 33 and 37% using the UKPDS and Framingham equations, respectively. In the hyperglycemia group, estimated CVD risks were 31 and 22%, respectively, and for the normoglycemia group risks were 20 and 14%, respectively. There were no significant differences in the ability of the risk equations to discriminate between individuals at different risk of CVD events in each subgroup; both equations overestimated CVD risk. The Framingham equations performed better in the hyperglycemia and normoglycemia groups as they did not overestimate risk as much as the UKPDS Risk Engine, and they classified more participants correctly. CONCLUSIONS Both the UKPDS Risk Engine and Framingham risk equations were moderately effective at ranking individuals and are therefore suitable for resource prioritization. However, both overestimated true risk, which is important when one is using scores to communicate prognostic information to individuals. PMID:19114615

  20. Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions

    EPA Science Inventory

    Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...

  1. An integrated science-based methodology to assess potential risks and implications of engineered nanomaterials

    EPA Science Inventory

    There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology ...

  2. An integrated science-based methodology to assess potential risks and implications of engineered nanomaterials.

    PubMed

    Tolaymat, Thabet; El Badawy, Amro; Sequeira, Reynold; Genaidy, Ash

    2015-11-15

    There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished, knowledge synthesis and algorithmic computational methodology. The knowledge synthesis task is designed to capture "what is known" and to outline the gaps in knowledge from ENMs risk perspective. The algorithmic computational methodology is geared toward the provision of decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long term goal to build a decision-support system to guide key stakeholders in the SEE system towards building sustainable ENMs and nano-enabled products. Published by Elsevier B.V.

  3. The Aging of Engines: An Operator’s Perspective

    DTIC Science & Technology

    2000-10-01

    internal HCF failures of blades. Erosion of compressor gas path 2-3 components can be minimized through the use of inlet aluminide intermetallic...fatigue problems in the dovetails durability in accelerated burner rig tests [2,35]. areas of titanium alloy fan and compressor blades. Shot peening in...Criticality Analysis replacement of durability-critical components, such as FOD Foreign object damage blades and vanes. The need to balance risk and escalating

  4. Program Affordability Tradeoffs

    DTIC Science & Technology

    2016-04-30

    engineering, trade-studies, and risk assessment and management for a variety of civilian and DoD sponsors. She holds a master’s degree in mathematics...mph in as little as 2.8 seconds. Prius Model 2 fuel economy (MPG): 54 - 58 city | 50 - 53 highway Trade-Off Analysis Costs: $80,000...affordability trades  What is the impact to goals/missions/objectives of pursuing a lower cost, lower performing alternative? Is this impact

  5. Analysis and Design of Complex Network Environments

    DTIC Science & Technology

    2012-03-01

    and J. Lowe, “The myths and facts behind cyber security risks for industrial control systems,” in the Proceedings of the VDE Kongress, VDE Congress...questions about 1) how to model them, 2) the design of experiments necessary to discover their structure (and thus adapt system inputs to optimize the...theoretical work that clarifies fundamental limitations of complex networks with network engineering and systems biology to implement specific designs and

  6. Architecture and Assessment: Privacy Preserving Biometrically Secured Electronic Documents

    DTIC Science & Technology

    2015-01-01

    very large public and private fingerprint databases comprehensive risk analysis and system security contribution to developing international ...Safety and Security Program which is led by Defence Research and Development Canada’s Centre for Security Science, in partnership with Public Safety...201 © Her Majesty the Queen (in Right of Canada), as represented by the Minister of National Defence, 201 Science and Engineering

  7. Risk Assessment and Scaling for the SLS LH2 ET

    NASA Technical Reports Server (NTRS)

    Hafiychuk, Halyna; Ponizovskaya-Devine, Ekaterina; Luchinsky, Dmitry; Khasin, Michael; Osipov, Viatcheslav V.; Smelyanskiy, Vadim N.

    2012-01-01

    In this report the main physics processes in the LH2 tank during prepress and rocket flight are studied. The goal of this investigation is to analyze possible hazards and to make a risk assessment of proposed LH2 tank designs for SLS with 5 engines (the situation with 4 engines is less critical). For the analysis we use the multinode model (MNM), developed by us and presented in a separate report, as well as 3D ANSYS simulations. We carry out simulation and theoretical analysis of physics processes such as (i) accumulation of bubbles in LH2 during the replenish stage and their collapse in the liquid during the prepress; (ii) condensation-evaporation at the liquid-vapor interface and tank wall; (iii) heating of the liquid near the interface and wall due to condensation and environmental heat; (iv) injection of hot He during prepress and of hot GH2 during flight; (v) mixing and cooling of the injected gases due to heat transfer between the gases, the liquid, and the tank wall. We analyze the effects of these physical processes on the thermo- and fluid gas dynamics in the ullage and on the stratification of temperature in the liquid, and assess the associated hazards. Special emphasis is placed on the scaling predictions for the larger SLS LH2 tank.

  8. Risk-Informed Decision Making: Application to Technology Development Alternative Selection

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Maggio, Gaspare; Everett, Christopher

    2010-01-01

    NASA NPR 8000.4A, Agency Risk Management Procedural Requirements, defines risk management in terms of two complementary processes: Risk-informed Decision Making (RIDM) and Continuous Risk Management (CRM). The RIDM process is used to inform decision making by emphasizing proper use of risk analysis to make decisions that impact all mission execution domains (e.g., safety, technical, cost, and schedule) for program/projects and mission support organizations. The RIDM process supports the selection of an alternative prior to program commitment. The CRM process is used to manage risk associated with the implementation of the selected alternative. The two processes work together to foster proactive risk management at NASA. The Office of Safety and Mission Assurance at NASA Headquarters has developed a technical handbook to provide guidance for implementing the RIDM process in the context of NASA risk management and systems engineering. This paper summarizes the key concepts and procedures of the RIDM process as presented in the handbook, and also illustrates how the RIDM process can be applied to the selection of technology investments as NASA's new technology development programs are initiated.

  9. Probabilistic Scenario-based Seismic Risk Analysis for Critical Infrastructures Method and Application for a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Klügel, J.

    2006-12-01

    Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications besides the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis to be performed. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information which is necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantages of the method in comparison with traditional PSHA are (1) its flexibility, allowing the use of different probabilistic models for earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion for formulating different risk goals. The method was applied to evaluate the risk of production-interruption losses for a nuclear power plant during its residual lifetime.
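
    The bookkeeping implied by steps (1)-(4) can be sketched numerically as follows: each magnitude class carries an annual frequency of initiating events and a conditional probability that its bounding scenario exceeds a critical design parameter, and a Poisson assumption converts the summed annual exceedance rate into a probability over the residual lifetime. The class frequencies and fragilities are illustrative only.

```python
import math

# Hedged numerical sketch of the scenario-class bookkeeping described above.
# Each magnitude class has an annual frequency of initiating events and a
# conditional probability that its bounding scenario exceeds a critical design
# parameter. Frequencies and fragilities are illustrative, not site-specific.

classes = {
    "M 5.0-5.9": {"annual_frequency": 1e-2, "p_exceed_given_event": 0.001},
    "M 6.0-6.9": {"annual_frequency": 2e-3, "p_exceed_given_event": 0.02},
    "M 7.0+":    {"annual_frequency": 2e-4, "p_exceed_given_event": 0.15},
}

# Annual rate of exceeding the critical design parameter.
lam = sum(c["annual_frequency"] * c["p_exceed_given_event"] for c in classes.values())

# Probability of at least one exceedance over the residual lifetime,
# assuming Poissonian occurrence of initiating events.
residual_lifetime_years = 20
p_lifetime = 1.0 - math.exp(-lam * residual_lifetime_years)

print(f"annual exceedance rate: {lam:.2e} /yr")
print(f"P(exceedance within {residual_lifetime_years} yr): {p_lifetime:.2%}")
```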

  10. Exposure to diesel and gasoline engine emissions and the risk of lung cancer.

    PubMed

    Parent, Marie-Elise; Rousseau, Marie-Claude; Boffetta, Paolo; Cohen, Aaron; Siemiatycki, Jack

    2007-01-01

    Pollution from motor vehicles constitutes a major environmental health problem. The present paper describes associations between diesel and gasoline engine emissions and lung cancer, as evidenced in a 1979-1985 population-based case-control study in Montreal, Canada. Cases were 857 male lung cancer patients. Controls were 533 population controls and 1,349 patients with other cancer types. Subjects were interviewed to obtain a detailed lifetime job history and relevant data on potential confounders. Industrial hygienists translated each job description into indices of exposure to several agents, including engine emissions. There was no evidence of excess risks of lung cancer with exposure to gasoline exhaust. For diesel engine emissions, results differed by control group. When cancer controls were considered, there was no excess risk. When population controls were studied, the odds ratios, after adjustments for potential confounders, were 1.2 (95% confidence interval: 0.8, 1.8) for any exposure and 1.6 (95% confidence interval: 0.9, 2.8) for substantial exposure. Confidence intervals between risk estimates derived from the two control groups overlapped considerably. These results provide some limited support for the hypothesis of an excess lung cancer risk due to diesel exhaust but no support for an increase in risk due to gasoline exhaust.

  11. 2014 Space Human Factors Engineering Standing Review Panel

    NASA Technical Reports Server (NTRS)

    Steinberg, Susan

    2014-01-01

    The 2014 Space Human Factors Engineering (SHFE) Standing Review Panel (from here on referred to as the SRP) participated in a WebEx/teleconference with members of the Space Human Factors and Habitability (SHFH) Element, representatives from the Human Research Program (HRP), the National Space Biomedical Research Institute (NSBRI), and NASA Headquarters on November 17, 2014 (list of participants is in Section XI of this report). The SRP reviewed the updated research plans for the Risk of Incompatible Vehicle/Habitat Design (HAB Risk) and the Risk of Performance Errors Due to Training Deficiencies (Train Risk). The SRP also received a status update on the Risk of Inadequate Critical Task Design (Task Risk), the Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI Risk), and the Risk of Inadequate Human-Computer Interaction (HCI Risk).

  12. Engineering Lessons Learned and Systems Engineering Applications

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Garcia, Danny; Vaughan, William W.

    2005-01-01

    Systems Engineering is fundamental to good engineering, which in turn depends on the integration and application of engineering lessons learned and technical standards. Thus, good Systems Engineering also depends on systems engineering lessons learned from within the aerospace industry being documented and applied. About ten percent of the engineering lessons learned documented in the NASA Lessons Learned Information System are directly related to Systems Engineering. A key issue associated with lessons learned datasets is the communication and incorporation of this information into engineering processes. Systems Engineering has been defined (EIA/IS-632) as "an interdisciplinary approach encompassing the entire technical effort to evolve and verify an integrated and life-cycle balanced set of system people, product, and process solutions that satisfy customer needs". Designing reliable space-based systems has always been a goal for NASA, and many painful lessons have been learned along the way. One of the continuing functions of a system engineer is to compile development and operations "lessons learned" documents and ensure their integration into future systems development activities. Lessons learned files from previous projects are especially valuable in risk identification and characterization on a new project, where they can provide insights and information that would otherwise be unavailable.

  13. Human Factors Virtual Analysis Techniques for NASA's Space Launch System Ground Support using MSFC's Virtual Environments Lab (VEL)

    NASA Technical Reports Server (NTRS)

    Searcy, Brittani

    2017-01-01

    Using virtual environments to assess complex, large-scale human tasks provides timely and cost-effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance with human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. Through virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th-percentile (approximately 5 ft) female and a 95th-percentile (approximately 6 ft 1 in) male. In addition, the analysis helps identify any tools or other accommodations that may be needed to complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame-by-frame basis, while virtual reality gives the actor (the person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis on SLS and the benefits of analyzing tasks in NASA MSFC's VEL.

  14. Study on the Application of the Kent Index Method on the Risk Assessment of Disastrous Accidents in Subway Engineering

    PubMed Central

    Lu, Hao; Wang, Mingyang; Yang, Baohuai; Rong, Xiaoli

    2013-01-01

    With the development of subway engineering, and given the uncertain factors and serious accidents involved in subway construction, implementing risk assessment is necessary and can bring a number of benefits for construction safety. The Kent index method, used extensively in pipeline construction, is improved here to make risk assessment much more practical for disastrous accidents in subway engineering. In the improved method, the indexes are divided into four categories: basic, design, construction, and consequence indexes. A risk assessment model containing these four kinds of indexes is provided, and three kinds of risk occurrence modes are listed. A probability index model that accounts for the interdependence of the indexes is established according to the risk occurrence modes. The model structures the risk assessment process through the fault tree method and has been applied to the risk assessment of the Nanjing subway's river-crossing tunnel construction. Based on the assessment results, the builders were informed of which risks to watch for and what to do to avoid them. The need for further research is discussed. Overall, this method may provide a practical tool for builders and improve construction safety. PMID:23710136
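    As a rough illustration of the kind of index bookkeeping such a method involves, the sketch below combines hypothetical scores from the four index categories into a single relative risk value. The item names, scores, and the combination rule are invented placeholders and are not taken from the authors' model.

```python
# Illustrative bookkeeping for an index-based risk score in the spirit of the improved
# Kent index method described above (hypothetical items, scores, and combination rule).

def category_score(items):
    """Sum of item scores within one index category (higher = more hazardous, by this sketch's convention)."""
    return sum(items.values())

basic = category_score({"geology": 12, "groundwater": 8, "adjacent_structures": 10})
design = category_score({"tunnel_depth": 6, "lining_design": 9})
construction = category_score({"shield_drive_control": 14, "monitoring": 5})
consequence = category_score({"flooding_impact": 20, "third_party_damage": 15})

# Combine likelihood-oriented categories with the consequence category into a single
# relative risk score (a simple product here, purely for illustration).
likelihood_index = basic + design + construction
relative_risk = likelihood_index * consequence / 100.0

print(f"Likelihood index: {likelihood_index}, consequence index: {consequence}")
print(f"Relative risk score: {relative_risk:.1f}")
```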

  15. Failure Modes Effects and Criticality Analysis, an Underutilized Safety, Reliability, Project Management and Systems Engineering Tool

    NASA Astrophysics Data System (ADS)

    Mullin, Daniel Richard

    2013-09-01

    The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single-point failures, system hazards, and critical components and functions. However, in the author's ten years of experience as a space systems safety and reliability engineer, the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. Important qualitative and quantitative components that can provide useful data to all project stakeholders are also often missing. These include probability of occurrence, probability of detection, time to effect, time to detect, and, finally, the Risk Priority Number. This is unfortunate, as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible, in conjunction with writing the top-level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset. Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all stakeholders in a given project and can provide several benefits, including more efficient project management with respect to cost and schedule, systems engineering and requirements management, assembly, integration and test (AI&T), and operations, provided the analysis is applied early, performed to completion, and updated along with the system design.
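    To make the quantitative fields concrete, here is a minimal sketch of a FMECA-style worksheet that computes the Risk Priority Number as the usual product of severity, occurrence, and detection ratings. The failure modes and ratings below are invented placeholders, not data from any program.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    item: str
    mode: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (remote) .. 10 (frequent)
    detection: int     # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the standard product of the three ratings.
        return self.severity * self.occurrence * self.detection

# Hypothetical worksheet entries for illustration only.
modes = [
    FailureMode("Reaction wheel", "Bearing seizure", severity=8, occurrence=3, detection=6),
    FailureMode("Battery string", "Cell open circuit", severity=6, occurrence=4, detection=3),
    FailureMode("Star tracker", "Stray-light blinding", severity=5, occurrence=5, detection=2),
]

# Rank failure modes by RPN so the riskiest items surface first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.item:15s} {fm.mode:22s} RPN = {fm.rpn}")
```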

  16. [Simplified models for analysis of sources of risk and biomechanical overload in craft industries: practical application in confectioners, pasta and pizza makers].

    PubMed

    Placci, M; Cerbai, M

    2011-01-01

    The food industry is of great importance in Italy; it is second only to the engineering sector, involving about 440,000 workers. However, 90% of food businesses have fewer than 10 employees and are exempt from the legal obligation to provide a detailed Risk Assessment Document. The aim of the study was to identify the inconveniences and risks present in the workplaces analyzed, with particular reference to biomechanical risk to the upper limbs and the lumbar spine. This preliminary study, carried out using pre-mapping of the inconveniences and risks (5) and the "mini-checklist OCRA" (4), involved 15 small food businesses: bread-baking ovens, pastry shops, pizzerias, and producers of "Piadina" (flat bread). Although there are undoubtedly differences among them, confectioners, pasta makers, pizza makers, and "piadinari" were exposed to similar risks. By analyzing the final graphs, action areas can be identified on which further risk analysis can be focused. Exposure is mainly related to repetitive movements and manual handling of loads, and a common occurrence is the risk of allergy to flour dust. There are real peaks in customer demand that inevitably increase work demands and, consequently, biomechanical overload. In future studies it will be interesting to investigate this aspect by studying the variations in work demand and the final exposure index of the working day.

  17. Impact of using a non-diabetes-specific risk calculator on eligibility for statin therapy in type 2 diabetes.

    PubMed

    Price, H C; Coleman, R L; Stevens, R J; Holman, R R

    2009-03-01

    The aim of this study was to investigate the impact of using a non-diabetes-specific cardiovascular disease (CVD) risk calculator to determine eligibility for statin therapy according to current UK National Institute for Health and Clinical Excellence (NICE) guidelines for those patients with type 2 diabetes who are at an increased risk of CVD (10-year risk ≥20%). The 10-year CVD risks were estimated using the UK Prospective Diabetes Study (UKPDS) Risk Engine and the Framingham equation for 4,025 patients enrolled in the Lipids in Diabetes Study who had established type 2 diabetes and LDL-cholesterol <4.1 mmol/l. The mean (SD) age of the patients was 60.7 (8.6) years, blood pressure 141/83 (17/10) mmHg and the total cholesterol:HDL-cholesterol ratio was 3.9 (1.0). The median (interquartile range) diabetes duration was 6 (3-11) years and the HbA(1c) level was 8.0% (7.2-9.0%). The cohort comprised 65% men, 91% whites, 4% Afro-Caribbeans, 5% Asian Indians and 15% current smokers. More patients were classified as being at high risk by the UKPDS Risk Engine (65%) than by the Framingham CVD equation (63%) (p < 0.0001). The Framingham CVD equation classified fewer men and people aged <50 years as high risk (p < 0.0001). There was no difference between the UKPDS Risk Engine and Framingham classification of women at high risk (p = 0.834). These results suggest that the use of Framingham-derived rather than UKPDS Risk Engine-derived CVD risk estimates would deny about one in 25 patients statin therapy when applying current NICE guidelines. Thus, under these guidelines the choice of CVD risk calculator is important when assessing CVD risk in patients with type 2 diabetes, particularly for the identification of the relatively small proportion of younger people who require statin therapy.
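    The comparison in the study boils down to applying a common eligibility threshold to two different risk estimates per patient. The hedged sketch below shows that bookkeeping with made-up risk values; the actual UKPDS Risk Engine and Framingham equations are not reproduced here, only a 20% cut-off comparison.

```python
# Compare statin eligibility under two risk calculators at the 10-year risk >= 20% threshold.
THRESHOLD = 0.20

patients = [
    # (patient id, UKPDS Risk Engine estimate, Framingham estimate) -- placeholder values
    ("p001", 0.26, 0.22),
    ("p002", 0.21, 0.18),   # high risk by UKPDS only -> would be denied under Framingham
    ("p003", 0.14, 0.15),
    ("p004", 0.19, 0.24),   # high risk by Framingham only
]

both = [p for p, u, f in patients if u >= THRESHOLD and f >= THRESHOLD]
ukpds_only = [p for p, u, f in patients if u >= THRESHOLD and f < THRESHOLD]
framingham_only = [p for p, u, f in patients if f >= THRESHOLD and u < THRESHOLD]

print(f"Eligible by both calculators: {both}")
print(f"Eligible by UKPDS only (missed by Framingham): {ukpds_only}")
print(f"Eligible by Framingham only: {framingham_only}")
```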

  18. Electrocardiologic and related methods of non-invasive detection and risk stratification in myocardial ischemia: state of the art and perspectives

    PubMed Central

    Huebner, Thomas; Goernig, Matthias; Schuepbach, Michael; Sanz, Ernst; Pilgram, Roland; Seeck, Andrea; Voss, Andreas

    2010-01-01

    Background: Electrocardiographic methods still provide the bulk of cardiovascular diagnostics. Cardiac ischemia is associated with typical alterations in cardiac biosignals that have to be measured, analyzed by mathematical algorithms, and presented in a form suitable for further clinical diagnostics. The fast-growing fields of biomedical engineering and applied sciences are intensely focused on generating new approaches to cardiac biosignal analysis for diagnosis and risk stratification in myocardial ischemia. Objectives: To present and review the state of the art in and new approaches to electrocardiologic methods for non-invasive detection and risk stratification in coronary artery disease (CAD) and myocardial ischemia; secondarily, to explore the future perspectives of these methods. Methods: In follow-up to the Expert Discussion at the 2008 Workshop on "Biosignal Analysis" of the German Society of Biomedical Engineering in Potsdam, Germany, we comprehensively searched the pertinent literature and databases and compiled the results into this review. Then, we categorized the state-of-the-art methods and selected new approaches based on their applications in detection and risk stratification of myocardial ischemia. Finally, we compared the pros and cons of the methods and explored their future potentials for cardiology. Results: Resting ECG, particularly suited for detecting ST-elevation myocardial infarctions, and exercise ECG, for the diagnosis of stable CAD, are state-of-the-art methods. New exercise-free methods for detecting stable CAD include cardiogoniometry (CGM); methods for detecting acute coronary syndrome without ST elevation are Body Surface Potential Mapping, functional imaging and CGM. Heart rate variability and blood pressure variability analyses, microvolt T-wave alternans and signal-averaged ECG mainly serve in detecting and stratifying the risk for lethal arrhythmias in patients with myocardial ischemia or previous myocardial infarctions. Telemedicine and ambient-assisted living support the electrocardiological monitoring of at-risk patients. Conclusions: There are many promising methods for the exercise-free, non-invasive detection of CAD and myocardial ischemia in the stable and acute phases. In the coming years, these new methods will help enhance state-of-the-art procedures in routine diagnostics. Equally novel methods for risk stratification and telemedicine can be expected to transition into clinical routine. PMID:21063467

  19. NASA's J-2X Engine Builds on the Apollo Program for Lunar Return Missions

    NASA Technical Reports Server (NTRS)

    Snoddy, Jimmy R.

    2006-01-01

    In January 2006, NASA streamlined its U.S. Vision for Space Exploration hardware development approach for replacing the Space Shuttle after it is retired in 2010. The revised Crew Launch Vehicle (CLV) upper stage will use the J-2X engine, a derivative of NASA's Apollo Program Saturn V S-II and S-IVB main propulsion, which will also serve as the Earth Departure Stage (EDS) engine. This paper gives details of how the J-2X engine effort mitigates risk by building on the Apollo Program and other lessons learned to deliver a human-rated engine that is on an aggressive development schedule, with first demonstration flight in 2010 and human test flights in 2012. It is well documented that propulsion is historically a high-risk area. NASA's risk reduction strategy for the J-2X engine design, development, test, and evaluation is to build upon heritage hardware and apply valuable experience gained from past development efforts. In addition, NASA and its industry partner, Rocketdyne, which originally built the J-2, have tapped into their extensive databases and are applying lessons conveyed firsthand by Apollo-era veterans of America's first round of Moon missions in the 1960s and 1970s. NASA's development approach for the J-2X engine includes early requirements definition and management; designing-in lessons learned from the J-2 heritage programs; initiating long-lead procurement items before Preliminary Design Review; incorporating design features for anticipated EDS requirements; identifying facilities for sea-level and altitude testing; and starting ground support equipment and logistics planning at an early stage. Other risk reduction strategies include utilizing a proven gas generator cycle with recent development experience; utilizing existing turbomachinery; applying current and recent main combustion chamber (Integrated Powerhead Demonstrator) and channel wall nozzle (COBRA) advances; and performing rigorous development, qualification, and certification testing of the engine system, with a philosophy of "test what you fly, and fly what you test". These and other active risk management strategies are in place to deliver the J-2X engine for LEO and lunar return missions as outlined in the U.S. Vision for Space Exploration.

  20. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures that require less cooling, thus leading to increased engine efficiencies. However, these materials are brittle and show degradation with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require a complex multi-step high temperature processing. Since there are uncertainties associated with each of these in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness and loading conditions. In such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer's part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer must tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. Accurately determining the service life of an engine component, with its associated variability, has become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds.
Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
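    The R-S idea can be illustrated with a brute-force Monte Carlo estimate of the probability that stress exceeds strength. The distribution families, parameters, and bootstrap bounds below are placeholders; the paper itself uses Fast Probability Integration with micromechanics-derived distributions rather than the naive sampling shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical CMC strength (R) and applied stress (S) distributions, in MPa.
strength = rng.normal(loc=320.0, scale=25.0, size=n)
stress = rng.normal(loc=210.0, scale=30.0, size=n)

margin = strength - stress                 # R - S
prob_failure = np.mean(margin < 0.0)       # component probability of failure

# Crude bootstrap bounds on the estimated failure probability, standing in for the
# analytical confidence intervals discussed in the abstract.
boot = [np.mean(rng.choice(margin, size=n, replace=True) < 0.0) for _ in range(200)]
lo, hi = np.percentile(boot, [5, 95])

print(f"Estimated probability of failure: {prob_failure:.2e} (90% bounds {lo:.2e} to {hi:.2e})")
```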

  1. The 7.5K lbf thrust engine preliminary design for Orbit Transfer Vehicle

    NASA Technical Reports Server (NTRS)

    Hayden, Warren R.; Sabiers, Ralph; Schneider, Judy

    1994-01-01

    This document summarizes the preliminary design of the Aerojet version of the Orbit Transfer Vehicle main engine. The concept of a 7500 lbf thrust LO2/GH2 engine using the dual expander cycle for optimum efficiency is validated through power balance and thermal calculations. The engine is capable of 10:1 throttling from a nominal 2000 psia to a 200 psia chamber pressure. Reservations are detailed on the feasibility of a tank head start, but the design incorporates low speed turbopumps to mitigate the problem. The mechanically separate high speed turbopumps use hydrostatic bearings to meet engine life requirements, and operate at sub-critical speed for better throttling ability. All components were successfully packaged in the restricted envelope set by the clearances for the extendible/retractable nozzle. Gimbal design uses an innovative primary and engine out gimbal system to meet the +/- 20 deg gimbal requirement. The hydrogen regenerator and LOX/GH2 heat exchanger uses the Aerojet platelet structures approach for a highly compact component design. The extendible/retractable nozzle assembly uses an electric motor driven jack-screw design and a one segment carbon-carbon or silicide coated columbium nozzle with an area ratio, when extended, of 1430:1. A reliability analysis and risk assessment concludes the report.

  2. Evaluating Risk Awareness in Undergraduate Students Studying Mechanical Engineering

    ERIC Educational Resources Information Center

    Langdon, G. S.; Balchin, K.; Mufamadi, P.

    2010-01-01

    This paper examines the development of risk awareness among undergraduate students studying mechanical engineering at a South African university. A questionnaire developed at the University of Liverpool was modified and used on students from the first, second and third year cohorts to assess their awareness in the areas of professional…

  3. 78 FR 45169 - GENECTIVE SA; Availability of Plant Pest Risk Assessment, Environmental Assessment, Preliminary...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-26

    ... Engineered for Herbicide Resistance AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice... herbicide glyphosate. We are also making available for public review our plant pest risk assessment... VCO-01981-5, which has been genetically engineered for resistance to the herbicide glyphosate. The...

  4. Engine non-containment: UK risk assessment methods

    NASA Technical Reports Server (NTRS)

    Wallin, J. C.

    1977-01-01

    More realistic guideline data must be developed for use in aircraft design in order to comply with recent changes in British civil airworthiness requirements. Unrealistically pessimistic results were obtained when the methodology developed during the Concorde SST certification program was extended to assess catastrophic risks resulting from uncontained engine rotors.

  5. Optimizing spacecraft design - optimization engine development : progress and plans

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim

    2003-01-01

    At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
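    The bookkeeping behind this kind of life-cycle risk process can be sketched as weighted objectives, risks that erode them, and mitigation options that buy back some of the loss at a cost. The objective names, impacts, and costs below are invented for illustration and are not the JPL tool's actual model.

```python
# Hypothetical objectives with relative priorities (weights sum to 1).
objectives = {"pointing_accuracy": 0.5, "data_volume": 0.3, "mission_lifetime": 0.2}

# risk -> {objective: fraction of that objective lost if the risk is unmitigated}
risks = {
    "thermal_drift": {"pointing_accuracy": 0.4},
    "downlink_contention": {"data_volume": 0.5},
    "radiation_degradation": {"mission_lifetime": 0.6, "data_volume": 0.1},
}

# mitigation -> (cost in arbitrary units, {risk: fraction of that risk removed})
mitigations = {
    "add_star_tracker": (3.0, {"thermal_drift": 0.8}),
    "extra_ground_passes": (1.5, {"downlink_contention": 0.6}),
    "rad_hard_parts": (4.0, {"radiation_degradation": 0.7}),
}

def expected_attainment(selected):
    """Weighted objective attainment after applying the selected mitigations."""
    residual = {r: 1.0 for r in risks}
    for name in selected:
        _, effects = mitigations[name]
        for risk, reduction in effects.items():
            residual[risk] *= (1.0 - reduction)
    total = 0.0
    for obj, weight in objectives.items():
        loss = sum(impacts.get(obj, 0.0) * residual[r] for r, impacts in risks.items())
        total += weight * max(0.0, 1.0 - loss)
    return total

baseline = expected_attainment([])
selected = ["add_star_tracker", "extra_ground_passes"]
cost = sum(mitigations[m][0] for m in selected)
print(f"Attainment without mitigation: {baseline:.2f}")
print(f"Attainment with {selected} (cost {cost}): {expected_attainment(selected):.2f}")
```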

  6. Orbiter Window Hypervelocity Impact Strength Evaluation

    NASA Technical Reports Server (NTRS)

    Estes, Lynda R.

    2011-01-01

    When the Space Shuttle Orbiter's windowpanes are struck in flight by particles traveling at hypervelocity, the impacts produce distinctive damage that reduces the overall strength of the pane. This damage has the potential to increase the risk associated with a safe return to Earth. Engineers at Boeing and NASA/JSC are called to Mission Control to evaluate the damage and provide an assessment of the risk to the crew. Historically, damage of this kind was categorized as an "accepted risk" of manned spaceflight, and as long as the glass was intact, engineers gave a "go ahead" for Orbiter entry. Since the Columbia accident, managers have given more scrutiny to these assessments, which has led the Orbiter window engineers to adopt new methods of assessment for such damage. This presentation will describe the original methodology used to assess the damage and introduce a philosophy new to the Shuttle program for assessing structural damage: reliability/risk-based engineering. The presentation will also present a new, recently adopted method for assessing the damage and providing management with a realistic assessment of the risk to the crew and vehicle during return.

  7. SSHAC Level 1 Probabilistic Seismic Hazard Analysis for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Payne, Suzette; Coppersmith, Ryan; Coppersmith, Kevin

    A Probabilistic Seismic Hazard Analysis (PSHA) was completed for the Materials and Fuels Complex (MFC), Naval Reactors Facility (NRF), and the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) (Figure 1-1). The PSHA followed the approaches and procedures appropriate for a Study Level 1 provided in the guidance advanced by the Senior Seismic Hazard Analysis Committee (SSHAC) in U.S. Nuclear Regulatory Commission (NRC) NUREG/CR-6372 and NUREG-2117 (NRC, 1997; 2012a). The SSHAC Level 1 PSHAs for MFC and ATR were conducted as part of the Seismic Risk Assessment (SRA) project (INL Project number 31287) to develop and apply a new risk-informed methodology. The SSHAC Level 1 PSHA was conducted for NRF to provide guidance on the potential use of a design margin above rock hazard levels. The SRA project is developing a new risk-informed methodology that will provide a systematic approach for evaluating the need for an update of an existing PSHA. The new methodology proposes criteria to be employed at specific analysis, decision, or comparison points in its evaluation process. The first four of seven criteria address changes in inputs and results of the PSHA and are given in U.S. Department of Energy (DOE) Standard, DOE-STD-1020-2012 (DOE, 2012a) and American National Standards Institute/American Nuclear Society (ANSI/ANS) 2.29 (ANS, 2008a). The last three criteria address evaluation of quantitative hazard and risk-focused information of an existing nuclear facility. The seven criteria and decision points are applied to Seismic Design Category (SDC) 3, 4, and 5, which are defined in American Society of Civil Engineers/Structural Engineers Institute (ASCE/SEI) 43-05 (ASCE, 2005). The application of the criteria and decision points could lead to an update or could determine that such an update is not necessary.

  8. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  9. Maternal occupation and the risk of major birth defects: A follow-up analysis from the National Birth Defects Prevention Study

    PubMed Central

    Lin, Shao; Herdt-Losavio, Michele L.; Chapman, Bonnie R.; Munsie, Jean-Pierre; Olshan, Andrew F.; Druschel, Charlotte M.

    2013-01-01

    This study further examined the association between selected maternal occupations and a variety of birth defects identified from prior analysis and explored the effect of work hours and number of jobs held and potential interaction between folic acid and occupation. Data from a population-based, multi-center case-control study was used. Analyses included 45 major defects and specific sub-occupations under five occupational groups: healthcare workers, cleaners, scientists, teachers and personal service workers. Both logistic regression and Bayesian models (to minimize type-1 errors) were used, adjusted for potential confounders. Effect modification by folic acid was also assessed. More than any other occupation, nine different defects were positively associated with maids or janitors [odds ratio (OR) range: 1.72-3.99]. Positive associations were also seen between the following maternal occupations and defects in their children (OR range: 1.35-3.48): chemists/conotruncal heart and neural tube defects (NTDs), engineers/conotruncal defects, preschool teachers/cataracts and cleft lip with/without cleft palate (CL/P), entertainers/athletes/gastroschisis, and nurses/hydrocephalus and left ventricular outflow tract heart defects. Non-preschool teachers had significantly lower odds of oral clefts and gastroschisis in their offspring (OR range: 0.53-0.76). There was a suggestion that maternal folic acid use modified the effects with occupations including lowering the risk of NTDs and CL/P. No consistent patterns were found between maternal work hours or multiple jobs by occupation and the risk of birth defects. Overall, mothers working as maids, janitors, biologists, chemists, engineers, nurses, entertainers, child care workers and preschool teachers had increased risks of several malformations and non-preschool teachers had a lower risk of some defects. Maternal folic acid use reduced the odds of NTDs and CL/P among those with certain occupations. This hypothesis-generating study will provide clues for future studies with better exposure data. PMID:22695106

  10. Dynamic Positioning System (DPS) Risk Analysis Using Probabilistic Risk Assessment (PRA)

    NASA Technical Reports Server (NTRS)

    Thigpen, Eric B.; Boyer, Roger L.; Stewart, Michael A.; Fougere, Pete

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Safety & Mission Assurance (S&MA) directorate at the Johnson Space Center (JSC) has applied its knowledge and experience with Probabilistic Risk Assessment (PRA) to projects in industries ranging from spacecraft to nuclear power plants. PRA is a comprehensive and structured process for analyzing risk in complex engineered systems and/or processes. The PRA process enables the user to identify potential risk contributors such as hardware and software failure, human error, and external events. Recent developments in the oil and gas industry have presented opportunities for NASA to lend its PRA expertise to both ongoing and developmental projects within the industry. This paper provides an overview of the PRA process and demonstrates how this process was applied in estimating the probability that a Mobile Offshore Drilling Unit (MODU) operating in the Gulf of Mexico and equipped with a generically configured Dynamic Positioning System (DPS) loses location and needs to initiate an emergency disconnect. The PRA described in this paper is intended to be generic such that the vessel meets the general requirements of an International Maritime Organization (IMO) Maritime Safety Committee (MSC)/Circ. 645 Class 3 dynamically positioned vessel. The results of this analysis are not intended to be applied to any specific drilling vessel, although provisions were made to allow the analysis to be configured to a specific vessel if required.
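    At its core, a PRA of this kind combines basic-event probabilities through AND/OR logic to estimate the frequency of the top event (here, loss of position). The sketch below shows that combination with invented event names and numbers; the actual NASA/JSC model is far more detailed and explicitly treats human error and external events.

```python
def p_or(*probs):
    """Probability that at least one independent basic event occurs (OR gate)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def p_and(*probs):
    """Probability that all independent basic events occur (AND gate)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical per-mission basic-event probabilities (illustrative only).
p_thruster_group_a = 1.0e-3
p_thruster_group_b = 1.0e-3
p_power_bus_failure = 5.0e-4
p_dp_controller_fault = 2.0e-4
p_operator_error = 1.0e-3

# Loss of position if both redundant thruster groups fail, or any of the
# common-cause contributors occurs.
p_loss_of_position = p_or(
    p_and(p_thruster_group_a, p_thruster_group_b),
    p_power_bus_failure,
    p_dp_controller_fault,
    p_operator_error,
)

print(f"Estimated probability of loss of position per mission: {p_loss_of_position:.2e}")
```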

  11. Risk and responsibility: a complex and evolving relationship.

    PubMed

    Kermisch, Céline

    2012-03-01

    This paper analyses the nature of the relationship between risk and responsibility. Since neither the concept of risk nor the concept of responsibility has an unequivocal definition, it is obvious that there is no single interpretation of their relationship. After introducing the different meanings of responsibility used in this paper, we analyse four conceptions of risk. This allows us to make their link with responsibility explicit and to determine if a shift in the connection between risk and responsibility can be outlined. (1) In the engineer's paradigm, the quantitative conception of risk does not include any concept of responsibility. Their relationship is indirect, the locus of responsibility being risk management. (2) In Mary Douglas' cultural theory, risks are constructed through the responsibilities they engage. (3) Rayner and (4) Wolff go further by integrating forms of responsibility in the definition of risk itself. Analysis of these four frameworks shows that the concepts of risk and responsibility are increasingly intertwined. This tendency is reinforced by increasing public awareness and a call for the integration of a moral dimension in risk management. Therefore, we suggest that a form of virtue-responsibility should also be integrated in the concept of risk.

  12. Joint Conference on Marine Safety and Environment/Ship Production (1st), held 1-5 June 1992

    DTIC Science & Technology

    1992-06-05

    have to face fresh challenges in the future. But as long as we regard these challenges as opportunities, we will remain masters of our own destiny ...has recently published for discussion an "Embryo Code of Practice" (1991) entitled "Engineers and Risk Issues". This draft Code is intended to...Department of Energy, Her Majesty's Stationery Office. Cm.1310. The Engineering Council (1991) Engineers and Risk Issues: An Embryo Code of Practice. London

  13. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 1 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Bojanowski, C.; Shen, J.

    2012-04-09

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of October through December 2011.

  14. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 2 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Bojanowski, C.; Shen, J.

    2012-06-28

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of January through March 2012.

  15. Propulsion System Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Tai, Jimmy C. M.; McClure, Erin K.; Mavris, Dimitri N.; Burg, Cecile

    2002-01-01

    The Aerospace Systems Design Laboratory at the School of Aerospace Engineering at the Georgia Institute of Technology has developed a core competency that enables propulsion technology managers to make technology investment decisions substantiated by propulsion and airframe technology system studies. This method assists the designer/manager in selecting appropriate technology concepts while accounting for the presence of risk and uncertainty as well as interactions between disciplines. This capability is incorporated into a single design simulation system that is described in this paper. This propulsion system design environment is created with commercially available software called iSIGHT, which is a generic computational framework, and with analysis programs for engine cycle, engine flowpath, mission, and economic analyses. iSIGHT is used to integrate these analysis tools within a single computer platform and facilitate information transfer amongst the various codes. The resulting modeling and simulation (M&S) environment in conjunction with the response surface method provides the designer/decision-maker an analytical means to examine the entire design space from either a subsystem and/or system perspective. The results of this paper will enable managers to analytically play what-if games to gain insight into the benefits (and/or degradation) of changing engine cycle design parameters. Furthermore, the propulsion design space will be explored probabilistically to show the feasibility and viability of the propulsion system integrated with a vehicle.
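    The response-surface idea can be illustrated in miniature: fit a quadratic surrogate to a handful of expensive cycle evaluations, then probe the design space probabilistically with the cheap surrogate. The "cycle analysis" below is a stand-in analytic function with invented parameters; in the paper the evaluations come from the iSIGHT-integrated cycle, flowpath, mission, and economic codes.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_cycle_analysis(opr, tit):
    """Placeholder for an engine cycle code: returns a notional fuel-burn-like metric."""
    return 1.0 - 0.004 * opr - 0.0002 * tit + 0.00005 * opr**2 + rng.normal(0.0, 0.002)

# Small design of experiments over overall pressure ratio (OPR) and turbine inlet
# temperature (TIT, K), standing in for the expensive integrated evaluations.
opr_doe = rng.uniform(20.0, 40.0, size=30)
tit_doe = rng.uniform(1500.0, 1900.0, size=30)
y = np.array([expensive_cycle_analysis(o, t) for o, t in zip(opr_doe, tit_doe)])

# Quadratic response surface: y ~ 1 + opr + tit + opr^2 + tit^2 + opr*tit
def design_matrix(opr, tit):
    return np.column_stack([np.ones_like(opr), opr, tit, opr**2, tit**2, opr * tit])

coeffs, *_ = np.linalg.lstsq(design_matrix(opr_doe, tit_doe), y, rcond=None)

# Probabilistic exploration of the design space using the cheap surrogate.
opr_mc = rng.normal(32.0, 1.5, size=50_000)
tit_mc = rng.normal(1750.0, 40.0, size=50_000)
metric = design_matrix(opr_mc, tit_mc) @ coeffs

print(f"Surrogate metric: mean {metric.mean():.4f}, 95th percentile {np.percentile(metric, 95):.4f}")
```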

  16. Applying Qualitative Hazard Analysis to Support Quantitative Safety Analysis for Proposed Reduced Wake Separation Conops

    NASA Technical Reports Server (NTRS)

    Shortle, John F.; Allocco, Michael

    2005-01-01

    This paper describes a scenario-driven hazard analysis process to identify, eliminate, and control safety-related risks. Within this process, we develop selective criteria to determine the applicability of engineering modeling to hypothesized hazard scenarios. This provides a basis for evaluating and prioritizing the scenarios as candidates for further quantitative analysis. We have applied this methodology to proposed concepts of operations for reduced wake separation for closely spaced parallel runways. For arrivals, the process identified 43 core hazard scenarios. Of these, we classified 12 as appropriate for further quantitative modeling, 24 that should be mitigated through controls, recommendations, and/or procedures (that is, scenarios not appropriate for quantitative modeling), and 7 that have the lowest priority for further analysis.

  17. 77 FR 41366 - Syngenta Biotechnology, Inc.; Availability of Petition, Plant Pest Risk Assessment, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-13

    ... engineered organisms and products. We are soliciting comments on whether this genetically engineered corn is... pests. Such genetically engineered organisms and products are considered ``regulated articles.'' The... Assessment for Determination of Nonregulated Status of Corn Genetically Engineered for Insect Resistance...

  18. Classifying Nanomaterial Risks Using Multi-Criteria Decision Analysis

    NASA Astrophysics Data System (ADS)

    Linkov, I.; Steevens, J.; Chappell, M.; Tervonen, T.; Figueira, J. R.; Merad, M.

    There is rapidly growing interest by regulatory agencies and stakeholders in the potential toxicity and other risks associated with nanomaterials throughout the different stages of the product life cycle (e.g., development, production, use and disposal). Risk assessment methods and tools developed and applied to chemical and biological materials may not be readily adaptable for nanomaterials because of the current uncertainty in identifying the relevant physico-chemical and biological properties that adequately describe the materials. Such uncertainty is further driven by the substantial variations in the properties of the original material because of the variable manufacturing processes employed in nanomaterial production. To guide scientists and engineers in nanomaterial research and application as well as promote the safe use/handling of these materials, we propose a decision support system for classifying nanomaterials into different risk categories. The classification system is based on a set of performance metrics that measure both the toxicity and physico-chemical characteristics of the original materials, as well as the expected environmental impacts through the product life cycle. The stochastic multicriteria acceptability analysis (SMAA-TRI), a formal decision analysis method, was used as the foundation for this task. This method allowed us to cluster various nanomaterials into different risk categories based on our current knowledge of nanomaterials' physico-chemical characteristics, variation in produced material, and best professional judgement. SMAA-TRI uses Monte Carlo simulations to explore all feasible values for weights, criteria measurements, and other model parameters to assess the robustness of nanomaterial grouping for risk management purposes.1,2
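    A heavily simplified sketch of the SMAA-style idea is shown below: Monte Carlo sampling over imprecise criteria weights and measurements, tallying how often each material lands in each risk category. The criteria, weights, thresholds, and scores are hypothetical, and the plain weighted-sum sorting used here stands in for the ELECTRE TRI model underlying SMAA-TRI.

```python
import numpy as np

rng = np.random.default_rng(7)
criteria = ["toxicity", "surface_reactivity", "environmental_persistence"]

# Nominal criterion scores (0 = benign, 1 = severe) plus an uncertainty level, per material.
materials = {
    "nano-Ag":   ([0.7, 0.5, 0.4], 0.10),
    "nano-TiO2": ([0.3, 0.4, 0.6], 0.10),
    "CNT":       ([0.8, 0.6, 0.7], 0.15),
}
category_edges = [0.35, 0.6]          # below first edge: low risk; above last edge: high risk
category_names = ["low", "medium", "high"]

n_runs = 20_000
tallies = {name: np.zeros(len(category_names)) for name in materials}

for _ in range(n_runs):
    # Uniformly random normalized weights (no preference information assumed).
    w = rng.dirichlet(np.ones(len(criteria)))
    for name, (scores, sigma) in materials.items():
        sampled = np.clip(np.array(scores) + rng.normal(0.0, sigma, len(scores)), 0.0, 1.0)
        overall = float(w @ sampled)
        category = int(np.searchsorted(category_edges, overall))
        tallies[name][category] += 1

# Category acceptability: the fraction of runs in which a material falls in each category.
for name, counts in tallies.items():
    shares = counts / n_runs
    summary = ", ".join(f"{c}: {s:.0%}" for c, s in zip(category_names, shares))
    print(f"{name:9s} -> {summary}")
```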

  19. Development of the performance confirmation program at Yucca Mountain, Nevada

    USGS Publications Warehouse

    LeCain, G.D.; Barr, D.; Weaver, D.; Snell, R.; Goodin, S.W.; Hansen, F.D.

    2006-01-01

    The Yucca Mountain Performance Confirmation program consists of tests, monitoring activities, experiments, and analyses to evaluate the adequacy of assumptions, data, and analyses that form the basis of the conceptual and numerical models of flow and transport associated with a proposed radioactive waste repository at Yucca Mountain, Nevada. The Performance Confirmation program uses an eight-stage risk-informed, performance-based approach. Selection of the Performance Confirmation activities for inclusion in the Performance Confirmation program was done using a risk-informed, performance-based decision analysis. The result of this analysis was a Performance Confirmation base portfolio that consists of 20 activities. The 20 Performance Confirmation activities include geologic, hydrologic, and construction/engineering testing. Some of the activities began during site characterization, and others will begin during construction or post-emplacement, and continue until repository closure.

  20. Security engineering: systems engineering of security through the adaptation and application of risk management

    NASA Technical Reports Server (NTRS)

    Gilliam, David P.; Feather, Martin S.

    2004-01-01

    Information Technology (IT) Security Risk Management is a critical task in the organization, which must protect its resources and data against the loss of confidentiality, integrity, and availability. As systems become more complex and diverse, and more vulnerabilities are discovered while attacks from intrusions and malicious content increase, it is becoming increasingly difficult to manage IT security. This paper describes an approach to address IT security risk through risk management and mitigation in both the institution and in the project life cycle.

  1. Analyzing system safety in lithium-ion grid energy storage

    NASA Astrophysics Data System (ADS)

    Rosewater, David; Williams, Adam

    2015-12-01

    As grid energy storage systems become more complex, it grows more difficult to design them for safe operation. This paper first reviews the properties of lithium-ion batteries that can produce hazards in grid scale systems. Then the conventional safety engineering technique Probabilistic Risk Assessment (PRA) is reviewed to identify its limitations in complex systems. To address this gap, new research is presented on the application of Systems-Theoretic Process Analysis (STPA) to a lithium-ion battery based grid energy storage system. STPA is anticipated to fill the gaps recognized in PRA for designing complex systems and hence be more effective or less costly to use during safety engineering. It was observed that STPA is able to capture causal scenarios for accidents not identified using PRA. Additionally, STPA enabled a more rational assessment of uncertainty (all that is not known) thereby promoting a healthy skepticism of design assumptions. We conclude that STPA may indeed be more cost effective than PRA for safety engineering in lithium-ion battery systems. However, further research is needed to determine if this approach actually reduces safety engineering costs in development, or improves industry safety standards.

  2. Nanotoxicology and nanomedicine: making development decisions in an evolving governance environment

    NASA Astrophysics Data System (ADS)

    Rycroft, Taylor; Trump, Benjamin; Poinsatte-Jones, Kelsey; Linkov, Igor

    2018-02-01

    The fields of nanomedicine, risk analysis, and decision science have evolved considerably in the past decade, providing developers of nano-enabled therapies and diagnostic tools with more complete information than ever before and shifting a fundamental requisite of the nanomedical community from the need for more information about nanomaterials to the need for a streamlined method of integrating the abundance of nano-specific information into higher-certainty product design decisions. The crucial question facing nanomedicine developers that must select the optimal nanotechnology in a given situation has shifted from "how do we estimate nanomaterial risk in the absence of good risk data?" to "how can we derive a holistic characterization of the risks and benefits that a given nanomaterial may pose within a specific nanomedical application?" Many decision support frameworks have been proposed to assist with this inquiry; however, those based in multicriteria decision analysis have proven to be most adaptive in the rapidly evolving field of nanomedicine—from the early stages of the field when conditions of significant uncertainty and incomplete information dominated, to today when nanotoxicology and nano-environmental health and safety information is abundant but foundational paradigms such as chemical risk assessment, risk governance, life cycle assessment, safety-by-design, and stakeholder engagement are undergoing substantial reformation in an effort to address the needs of emerging technologies. In this paper, we reflect upon 10 years of developments in nanomedical engineering and demonstrate how the rich knowledgebase of nano-focused toxicological and risk assessment information developed over the last decade enhances the capability of multicriteria decision analysis approaches and underscores the need to continue the transition from traditional risk assessment towards risk-based decision-making and alternatives-based governance for emerging technologies.

  3. A surety engineering framework to reduce cognitive systems risks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caudell, Thomas P.; Peercy, David Eugene; Caldera, Eva O.

    Cognitive science research investigates the advancement of human cognition and neuroscience capabilities. Addressing risks associated with these advancements can counter potential program failures, legal and ethical issues, constraints to scientific research, and product vulnerabilities. Survey results, focus group discussions, cognitive science experts, and surety researchers concur technical risks exist that could impact cognitive science research in areas such as medicine, privacy, human enhancement, law and policy, military applications, and national security (SAND2006-6895). This SAND report documents a surety engineering framework and a process for identifying cognitive system technical, ethical, legal and societal risks and applying appropriate surety methods to reduce such risks. The framework consists of several models: Specification, Design, Evaluation, Risk, and Maturity. Two detailed case studies are included to illustrate the use of the process and framework. Several Appendices provide detailed information on existing cognitive system architectures; ethical, legal, and societal risk research; surety methods and technologies; and educing information research with a case study vignette. The process and framework provide a model for how cognitive systems research and full-scale product development can apply surety engineering to reduce perceived and actual risks.

  4. 14 CFR 23.903 - Engines.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... compartment) of any system that can affect an engine (other than a fuel tank if only one fuel tank is... stopping (piston engine). (1) The design of the installation must be such that risk of fire or mechanical...

  5. Design optimization and uncertainty quantification for aeromechanics forced response of a turbomachinery blade

    NASA Astrophysics Data System (ADS)

    Modgil, Girish A.

    Gas turbine engines for aerospace applications have evolved dramatically over the last 50 years through the constant pursuit of better specific fuel consumption, higher thrust-to-weight ratio, and lower noise and emissions, all while maintaining reliability and affordability. An important step in enabling these improvements is a forced response aeromechanics analysis involving structural dynamics and aerodynamics of the turbine. It is well documented that forced response vibration is a very critical problem in aircraft engine design, causing High Cycle Fatigue (HCF). Pushing the envelope on engine design has led to increased forced response problems and subsequently an increased risk of HCF failure. Forced response analysis is used to assess design feasibility of turbine blades for HCF using a material limit boundary set by the Goodman Diagram envelope that combines the effects of steady and vibratory stresses. Forced response analysis is computationally expensive, time-consuming, and requires multi-domain experts to finalize a result. As a consequence, high-fidelity aeromechanics analysis is performed deterministically and is usually done at the end of the blade design process, when it is very costly to make significant changes to geometry or aerodynamic design. To address uncertainties in the system (engine operating point, temperature distribution, mistuning, etc.) and variability in material properties, designers apply conservative safety factors in the traditional deterministic approach, which leads to bulky designs. Moreover, using a deterministic approach does not provide a calculated risk of HCF failure. This thesis describes a process that begins with the optimal aerodynamic design of a turbomachinery blade developed using surrogate models of high-fidelity analyses. The resulting optimal blade undergoes probabilistic evaluation to generate aeromechanics results that provide a calculated likelihood of failure from HCF. An existing Rolls-Royce High Work Single Stage (HWSS) turbine blisk provides a baseline to demonstrate the process. The generalized polynomial chaos (gPC) toolbox developed in this work includes sampling methods and constructs polynomial approximations. The toolbox provides not only the means for uncertainty quantification of the final blade design, but also facilitates construction of the surrogate models used for the blade optimization. This work shows that gPC, with a small number of samples, achieves very fast rates of convergence and high accuracy in describing probability distributions without loss of detail in the tails. First, an optimization problem maximizes stage efficiency using turbine aerodynamic design rules as constraints; the function evaluations for this optimization are surrogate models from detailed 3D steady Computational Fluid Dynamics (CFD) analyses. The resulting optimal shape provides a starting point for the 3D high-fidelity aeromechanics (unsteady CFD and 3D Finite Element Analysis (FEA)) UQ study assuming three uncertain input parameters. This investigation seeks to find the steady and vibratory stresses associated with the first torsion mode for the HWSS turbine blisk near maximum operating speed of the engine. Using gPC to provide uncertainty estimates of the steady and vibratory stresses enables the creation of a Probabilistic Goodman Diagram, which - to the author's best knowledge - is the first of its kind using high fidelity aeromechanics for turbomachinery blades.
The Probabilistic Goodman Diagram enables turbine blade designers to make more informed design decisions and it allows the aeromechanics expert to assess quantitatively the risk associated with HCF for any mode crossing based on high fidelity simulations.
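
    A minimal sketch of the probabilistic Goodman check described above, assuming hypothetical normal distributions for the steady and vibratory stresses and placeholder material limits; in the thesis these distributions come from gPC surrogates of the unsteady CFD and FEA results rather than the values assumed here.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Placeholder material limits (ksi); not Rolls-Royce HWSS data.
    sigma_u = 160.0   # ultimate tensile strength
    sigma_e = 60.0    # fully reversed endurance limit

    # Hypothetical uncertainty in the steady (mean) and vibratory (alternating)
    # stresses, standing in for the gPC output distributions.
    sigma_m = rng.normal(90.0, 6.0, n)   # steady stress samples
    sigma_a = rng.normal(18.0, 3.0, n)   # vibratory stress samples

    # Goodman line: the allowable alternating stress shrinks linearly with mean stress.
    sigma_a_allow = sigma_e * (1.0 - sigma_m / sigma_u)

    p_hcf = np.mean(sigma_a > sigma_a_allow)
    print(f"Estimated probability of exceeding the Goodman envelope: {p_hcf:.2e}")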

  6. The J-2X Fuel Turbopump - Design, Development, and Test

    NASA Technical Reports Server (NTRS)

    Tellier, James G.; Hawkins, Lakiesha V.; Shinguchi, Brian H.; Marsh, Matthew W.

    2011-01-01

    Pratt and Whitney Rocketdyne (PWR), a NASA subcontractor, is executing the design, development, test, and evaluation (DDT&E) of a liquid oxygen, liquid hydrogen two hundred ninety four thousand pound thrust rocket engine initially intended for the Upper Stage (US) and Earth Departure Stage (EDS) of the Constellation Program Ares-I Crew Launch Vehicle (CLV). A key element of the design approach was to base the new J-2X engine on the heritage J-2S engine with the intent of uprating the engine and incorporating SSME and RS-68 lessons learned. The J-2S engine was a design upgrade of the flight-proven J-2 configuration used to put American astronauts on the moon. The J-2S Fuel Turbopump (FTP) was the first Rocketdyne-designed liquid hydrogen centrifugal pump and provided many of the early lessons learned for the Space Shuttle Main Engine High Pressure Fuel Turbopumps. This paper will discuss the design trades and analyses performed for the current J-2X FTP to increase turbine life; increase structural margins; facilitate component fabrication; expedite turbopump assembly; and increase rotordynamic stability margins. Risk mitigation tests, including inducer water tests, whirligig turbine blade tests, turbine air rig tests, and workhorse gas generator tests, characterized operating environments, drove design modifications, or identified performance impacts. Engineering design, fabrication, analysis, and assembly activities support FTP readiness for the first J-2X engine test scheduled for July 2011.

  7. WE-B-BRC-02: Risk Analysis and Incident Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraass, B.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: (1) Description of risk assessment methodologies used in healthcare and industry; (2) Discussion of radiation oncology-specific risk assessment strategies and issues; (3) Evaluation of risk in the context of medical imaging and image quality. Disclosure: E. Samei, research grants from Siemens and GE.

  8. NASA System Safety Handbook. Volume 1; System Safety Framework and Concepts for Implementation

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Benjamin, Allan; Everett, Christopher; Smith, Curtis; Stamatelatos, Michael; Youngblood, Robert

    2011-01-01

    System safety assessment is defined in NPR 8715.3C, NASA General Safety Program Requirements, as a disciplined, systematic approach to the analysis of risks resulting from hazards that can affect humans, the environment, and mission assets. Achievement of the highest practicable degree of system safety is one of NASA's highest priorities. Traditionally, system safety assessment at NASA and elsewhere has focused on the application of a set of safety analysis tools to identify safety risks and formulate effective controls. Familiar tools used for this purpose include various forms of hazard analyses, failure modes and effects analyses, and probabilistic safety assessment (commonly also referred to as probabilistic risk assessment (PRA)). In the past, it has been assumed that to show that a system is safe, it is sufficient to provide assurance that the process for identifying the hazards has been as comprehensive as possible and that each identified hazard has one or more associated controls. The NASA Aerospace Safety Advisory Panel (ASAP) has made several statements in its annual reports supporting a more holistic approach. In 2006, it recommended that "... a comprehensive risk assessment, communication and acceptance process be implemented to ensure that overall launch risk is considered in an integrated and consistent manner." In 2009, it advocated for "... a process for using a risk-informed design approach to produce a design that is optimally and sufficiently safe." As a rationale for the latter advocacy, it stated that "... the ASAP applauds switching to a performance-based approach because it emphasizes early risk identification to guide designs, thus enabling creative design approaches that might be more efficient, safer, or both." For purposes of this preface, it is worth mentioning three areas where the handbook emphasizes a more holistic type of thinking. First, the handbook takes the position that it is important to not just focus on risk on an individual basis but to consider measures of aggregate safety risk and to ensure wherever possible that there be quantitative measures for evaluating how effective the controls are in reducing these aggregate risks. The term aggregate risk, when used in this handbook, refers to the accumulation of risks from individual scenarios that lead to a shortfall in safety performance at a high level: e.g., an excessively high probability of loss of crew, loss of mission, planetary contamination, etc. Without aggregated quantitative measures such as these, it is not reasonable to expect that safety has been optimized with respect to other technical and programmatic objectives. At the same time, it is fully recognized that not all sources of risk are amenable to precise quantitative analysis and that the use of qualitative approaches and bounding estimates may be appropriate for those risk sources. Second, the handbook stresses the necessity of developing confidence that the controls derived for the purpose of achieving system safety not only handle risks that have been identified and properly characterized but also provide a general, more holistic means for protecting against unidentified or uncharacterized risks. For example, while it is not possible to be assured that all credible causes of risk have been identified, there are defenses that can provide protection against broad categories of risks and thereby increase the chances that individual causes are contained.
Third, the handbook strives at all times to treat uncertainties as an integral aspect of risk and as a part of making decisions. The term "uncertainty" here does not refer to an actuarial type of data analysis, but rather to a characterization of our state of knowledge regarding results from logical and physical models that approximate reality. Uncertainty analysis finds how the output parameters of the models are related to plausible variations in the input parameters and in the modeling assumptions. The evaluation of uncertainties represents a method of probabilistic thinking wherein the analyst and decision makers recognize possible outcomes other than the outcome perceived to be "most likely." Without this type of analysis, it is not possible to determine the worth of an analysis product as a basis for making decisions related to safety and mission success. In line with these considerations, the handbook does not take a hazard-analysis-centric approach to system safety. Hazard analysis remains a useful tool to facilitate brainstorming but does not substitute for a more holistic approach geared to a comprehensive identification and understanding of individual risk issues and their contributions to aggregate safety risks. The handbook strives to emphasize the importance of identifying the most critical scenarios that contribute to the risk of not meeting the agreed-upon safety objectives and requirements using all appropriate tools (including but not limited to hazard analysis). Thereafter, emphasis shifts to identifying the risk drivers that cause these scenarios to be critical and ensuring that there are controls directed toward preventing or mitigating the risk drivers. To address these and other areas, the handbook advocates a proactive, analytic-deliberative, risk-informed approach to system safety, enabling the integration of system safety activities with systems engineering and risk management processes. It emphasizes how one can systematically provide the necessary evidence to substantiate the claim that a system is safe to within an acceptable risk tolerance, and that safety has been achieved in a cost-effective manner. The methodology discussed in this handbook is part of a systems engineering process and is intended to be integral to the system safety practices being conducted by the NASA safety and mission assurance and systems engineering organizations. The handbook posits that to conclude that a system is adequately safe, it is necessary to consider a set of safety claims that derive from the safety objectives of the organization. The safety claims are developed from a hierarchy of safety objectives and are therefore hierarchical themselves. Assurance that all the claims are true within acceptable risk tolerance limits implies that all of the safety objectives have been satisfied, and therefore that the system is safe. The acceptable risk tolerance limits are provided by the authority who must make the decision whether or not to proceed to the next step in the life cycle. These tolerances are therefore referred to as the decision maker's risk tolerances. In general, the safety claims address two fundamental facets of safety: 1) whether required safety thresholds or goals have been achieved, and 2) whether the safety risk is as low as possible within reasonable impacts on cost, schedule, and performance.
The latter facet includes consideration of controls that are collective in nature (i.e., apply generically to broad categories of risks) and thereby provide protection against unidentified or uncharacterized risks.
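
    A minimal sketch of the aggregate-risk idea discussed above, assuming three hypothetical risk scenarios with lognormal uncertainty on their probabilities; it propagates the per-scenario uncertainty into a distribution for an aggregate measure such as probability of loss of crew instead of reporting a single point value.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    # Hypothetical scenarios: (median probability per mission, error factor).
    scenarios = {"ascent engine failure": (2e-4, 3.0),
                 "MMOD penetration": (5e-4, 2.0),
                 "entry TPS breach": (1e-4, 5.0)}

    samples = []
    for median, error_factor in scenarios.values():
        # Error factor read as the ratio of the 95th percentile to the median.
        sigma = np.log(error_factor) / 1.645
        samples.append(rng.lognormal(np.log(median), sigma, n))

    # Aggregate probability that at least one scenario occurs, assuming independence.
    p_agg = 1.0 - np.prod([1.0 - s for s in samples], axis=0)
    print("mean P(loss of crew): %.2e   95th percentile: %.2e"
          % (p_agg.mean(), np.percentile(p_agg, 95)))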

  9. Two unconventional risk factors for major adverse cardiovascular events in subjects with sexual dysfunction: low education and reported partner's hypoactive sexual desire in comparison with conventional risk factors.

    PubMed

    Rastrelli, Giulia; Corona, Giovanni; Fisher, Alessandra D; Silverii, Antonio; Mannucci, Edoardo; Maggi, Mario

    2012-12-01

    The classification of subjects as low or high cardiovascular (CV) risk is usually performed by risk engines, based upon multivariate prediction algorithms. However, their accuracy in predicting major adverse CV events (MACEs) is lower in high-risk populations as they take into account only conventional risk factors. To evaluate the accuracy of the Progetto Cuore risk engine in predicting MACE in subjects with erectile dysfunction (ED) and to test the role of unconventional CV risk factors, specifically identified for ED. A consecutive series of 1,233 men (mean age 53.33 ± 9.08 years) attending our outpatient clinic for sexual dysfunction was longitudinally studied for a mean period of 4.4 ± 2.6 years. Several clinical, biochemical, and instrumental parameters were evaluated. Subjects were classified as high or low risk, according to previously reported ED-specific risk factors. In the overall population, Progetto Cuore-predicted population survival was not significantly different from the observed one (P = 0.545). Accordingly, receiver operating characteristic (ROC) analysis shows that Progetto Cuore has an accuracy of 0.697 ± 0.037 (P < 0.001) in predicting MACE. Considering subjects at high risk according to ED-specific risk factors, the observed incidence of MACE was significantly higher than expected both for low-educated patients and for patients reporting partner's hypoactive sexual desire (HSD; both P < 0.05), but not for other described factors. The areas under the ROC curves of Progetto Cuore for MACE in subjects with low education and reported partner's HSD were 0.659 ± 0.053 (P = 0.008) and 0.550 ± 0.076 (P = 0.570), respectively. Overall, Progetto Cuore is a proper instrument for evaluating CV risk in ED subjects. However, in ED, other factors such as low education and partner's HSD contribute to the risk profile. At variance with low education, Progetto Cuore is not accurate enough to predict MACE in subjects with partner's HSD, suggesting that the latter effect is not mediated by conventional risk factors included in the algorithm. © 2012 International Society for Sexual Medicine.
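
    A minimal sketch of the discrimination check reported above: scoring how well a risk engine's predicted event probabilities separate subjects who did and did not experience MACE, using the area under the ROC curve. The data and the calibration factor below are synthetic placeholders, not the Progetto Cuore predictions for the ED cohort.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 1233  # cohort size, used only to shape the synthetic example

    predicted_risk = rng.beta(2, 18, n)                   # hypothetical engine output
    observed_mace = rng.random(n) < predicted_risk * 1.2  # synthetic outcomes

    auc = roc_auc_score(observed_mace, predicted_risk)
    print(f"ROC area under the curve (discrimination): {auc:.3f}")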

  10. Propulsion Risk Reduction Activities for Non-Toxic Cryogenic Propulsion

    NASA Technical Reports Server (NTRS)

    Smith, Timothy D.; Klem, Mark D.; Fisher, Kenneth

    2010-01-01

    The Propulsion and Cryogenics Advanced Development (PCAD) Project's primary objective is to develop propulsion system technologies for non-toxic or "green" propellants. The PCAD project focuses on the development of non-toxic propulsion technologies needed to provide necessary data and relevant experience to support informed decisions on implementation of non-toxic propellants for space missions. Implementation of non-toxic propellants in high performance propulsion systems offers NASA an opportunity to consider other options than current hypergolic propellants. The PCAD Project is emphasizing technology efforts in reaction control system (RCS) thruster designs, ascent main engines (AME), and descent main engines (DME). PCAD has a series of tasks and contracts to conduct risk reduction and/or retirement activities to demonstrate that non-toxic cryogenic propellants can be a feasible option for space missions. Work has focused on 1) reducing the risk of liquid oxygen/liquid methane ignition, demonstrating the key enabling technologies, and validating performance levels for reaction control engines for use on descent and ascent stages; 2) demonstrating the key enabling technologies and validating performance levels for liquid oxygen/liquid methane ascent engines; and 3) demonstrating the key enabling technologies and validating performance levels for deep throttling liquid oxygen/liquid hydrogen descent engines. The progress of these risk reduction and/or retirement activities will be presented.

  11. Propulsion Risk Reduction Activities for Nontoxic Cryogenic Propulsion

    NASA Technical Reports Server (NTRS)

    Smith, Timothy D.; Klem, Mark D.; Fisher, Kenneth L.

    2010-01-01

    The Propulsion and Cryogenics Advanced Development (PCAD) Project's primary objective is to develop propulsion system technologies for nontoxic or "green" propellants. The PCAD project focuses on the development of nontoxic propulsion technologies needed to provide necessary data and relevant experience to support informed decisions on implementation of nontoxic propellants for space missions. Implementation of nontoxic propellants in high performance propulsion systems offers NASA an opportunity to consider other options than current hypergolic propellants. The PCAD Project is emphasizing technology efforts in reaction control system (RCS) thruster designs, ascent main engines (AME), and descent main engines (DME). PCAD has a series of tasks and contracts to conduct risk reduction and/or retirement activities to demonstrate that nontoxic cryogenic propellants can be a feasible option for space missions. Work has focused on 1) reducing the risk of liquid oxygen/liquid methane ignition, demonstrating the key enabling technologies, and validating performance levels for reaction control engines for use on descent and ascent stages; 2) demonstrating the key enabling technologies and validating performance levels for liquid oxygen/liquid methane ascent engines; and 3) demonstrating the key enabling technologies and validating performance levels for deep throttling liquid oxygen/liquid hydrogen descent engines. The progress of these risk reduction and/or retirement activities will be presented.

  12. Diesel engine exhaust and lung cancer: an unproven association.

    PubMed Central

    Muscat, J E; Wynder, E L

    1995-01-01

    The risk of lung cancer associated with diesel exhaust has been calculated from 14 case-control or cohort studies. We evaluated the findings from these studies to determine whether there is sufficient evidence to implicate diesel exhaust as a human lung carcinogen. Four studies found increased risks associated with long-term exposure, although two of the four studies were based on the same cohort of railroad workers. Six studies were inconclusive due to missing information on smoking habits, internal inconsistencies, or inadequate characterization of diesel exposure. Four studies found no statistically significant associations. It can be concluded that short-term exposure to diesel engine exhaust (< 20 years) does not have a causative role in human lung cancer. There is statistical but not causal evidence that long-term exposure to diesel exhaust (> 20 years) increases the risk of lung cancer for locomotive engineers, brakemen, and diesel engine mechanics. There is inconsistent evidence on the effects of long-term exposure to diesel exhaust in the trucking industry. There is no evidence for a joint effect of diesel exhaust and cigarette smoking on lung cancer risk. Using common criteria for determining causal associations, the epidemiologic evidence is insufficient to establish diesel engine exhaust as a human lung carcinogen. PMID:7498093

  13. Incorporating Risk Assessment and Inherently Safer Design Practices into Chemical Engineering Education

    ERIC Educational Resources Information Center

    Seay, Jeffrey R.; Eden, Mario R.

    2008-01-01

    This paper introduces, via case study example, the benefit of including risk assessment methodology and inherently safer design practices into the curriculum for chemical engineering students. This work illustrates how these tools can be applied during the earliest stages of conceptual process design. The impacts of decisions made during…

  14. Freshman Engineering Students At-Risk of Non-Matriculation: Self-Efficacy for Academic Learning

    ERIC Educational Resources Information Center

    Ernst, Jeremy V.; Bowen, Bradley D.; Williams, Thomas O.

    2016-01-01

    Students identified as at-risk of non-academic continuation have a propensity toward lower academic self-efficacy than their peers (Lent, 2005). Within engineering, self-efficacy and confidence are major markers of university continuation and success (Lourens, 2014; Raelin et al., 2014). This study explored academic learning self-efficacy specific…

  15. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
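
    A minimal sketch of one check such semantic parsers can automate, dimensional consistency, using a hypothetical declaration format; the variable names and the dictionary-based representation below are illustrative, not the parser described in the paper.

    # Hypothetical semantic declarations: variable -> exponents of (mass, length, time).
    DECLARATIONS = {
        "rho": (1, -3, 0),   # density, kg/m^3
        "u":   (0, 1, -1),   # velocity, m/s
        "p":   (1, -1, -2),  # pressure, kg/(m s^2)
    }

    def dim_mul(a, b):
        return tuple(x + y for x, y in zip(a, b))

    def check_sum(name_a, name_b):
        """Adding two quantities is only meaningful if their dimensions match."""
        da, db = DECLARATIONS[name_a], DECLARATIONS[name_b]
        if da != db:
            print(f"semantic error: '{name_a}' {da} + '{name_b}' {db} mixes dimensions")
        else:
            print(f"ok: '{name_a}' + '{name_b}' is dimensionally consistent")

    # rho*u*u has the dimensions of pressure, so p + rho*u*u is consistent ...
    DECLARATIONS["rho_u2"] = dim_mul(dim_mul(DECLARATIONS["rho"], DECLARATIONS["u"]),
                                     DECLARATIONS["u"])
    check_sum("p", "rho_u2")
    # ... whereas p + u is the kind of error such a parser could flag automatically.
    check_sum("p", "u")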

  16. [Prospects of systemic radioecology in solving innovative tasks of nuclear power engineering].

    PubMed

    Spiridonov, S I

    2014-01-01

    The need for systemic radioecological studies within the strategy developed by the Russian atomic industry for the XXI century is justified. Priorities in the radioecology of naturally safe nuclear power engineering are identified, associated with development of the radiation-migration equivalence concept, comparative evaluation of innovative nuclear technologies, and methods for forecasting various emergencies. An algorithm for the integrated solution of these tasks is also described, including the elaboration of methodological approaches, methods, and software for estimating dose burdens to humans and biota. The rationale for using radioecological risks to analyze uncertainties in environmental contamination impacts at different stages of existing and innovative nuclear fuel cycles is shown.

  17. Software Risk Identification for Interplanetary Probes

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert J.; Papadopoulos, Periklis E.

    2005-01-01

    The need for a systematic and effective software risk identification methodology is critical for interplanetary probes that are using increasingly complex and critical software. Several probe failures are examined that suggest more attention and resources need to be dedicated to identifying software risks. The direct causes of these failures can often be traced to systemic problems in all phases of the software engineering process. These failures have led to the development of a practical methodology to identify risks for interplanetary probes. The proposed methodology is based upon the tailoring of the Software Engineering Institute's (SEI) method of taxonomy-based risk identification. The use of this methodology will ensure a more consistent and complete identification of software risks in these probes.

  18. Perception of risk from automobile safety defects.

    PubMed

    Slovic, P; MacGregor, D; Kraus, N N

    1987-10-01

    Descriptions of safety engineering defects of the kind that compel automobile manufacturers to initiate a recall campaign were evaluated by individuals on a set of risk characteristic scales that included overall vehicle riskiness, manufacturer's ability to anticipate the defect, importance for vehicle operation, severity of consequences and likelihood of compliance with a recall notice. A factor analysis of the risk characteristics indicated that judgments could be summarized in terms of two composite scales, one representing the uncontrollability of the damage the safety defect might cause and the other representing the foreseeability of the defect by the manufacturer. Motor vehicle defects were found to be highly diverse in terms of the perceived qualities of their risks. Location of individual defects within the factor space was closely associated with perceived riskiness, perceived likelihood of purchasing another car from the same manufacturer, perceived likelihood of compliance with a recall notice, and actual compliance rates.
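
    A minimal sketch of the two-factor summary described above, fitted to synthetic ratings rather than the original survey data; the rating matrix, number of defects, and noise level are hypothetical.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(3)

    # Synthetic ratings: 30 hypothetical defects scored on 5 risk-characteristic scales.
    latent = rng.normal(size=(30, 2))              # two underlying constructs
    loadings = rng.normal(size=(2, 5))
    ratings = latent @ loadings + 0.3 * rng.normal(size=(30, 5))

    fa = FactorAnalysis(n_components=2, random_state=0)
    scores = fa.fit_transform(ratings)   # each defect's location in the factor space
    print("factor loadings (scales x factors):")
    print(fa.components_.T.round(2))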

  19. Electronic cigarettes: incorporating human factors engineering into risk assessments

    PubMed Central

    Yang, Ling; Rudy, Susan F; Cheng, James M; Durmowicz, Elizabeth L

    2014-01-01

    Objective A systematic review was conducted to evaluate the impact of human factors (HF) on the risks associated with electronic cigarettes (e-cigarettes) and to identify research gaps. HF is the evaluation of human interactions with products and includes the analysis of user, environment and product complexity. Consideration of HF may mitigate known and potential hazards from the use and misuse of a consumer product, including e-cigarettes. Methods Five databases were searched through January 2014 and publications relevant to HF were incorporated. Voluntary adverse event (AE) reports submitted to the US Food and Drug Administration (FDA) and the package labelling of 12 e-cigarette products were analysed. Results No studies specifically addressing the impact of HF on e-cigarette use risks were identified. Most e-cigarette users are smokers, but data on the user population are inconsistent. No articles focused specifically on e-cigarette use environments, storage conditions, product operational requirements, product complexities, user errors or misuse. Twelve published studies analysed e-cigarette labelling and concluded that labelling was inadequate or misleading. FDA labelling analysis revealed similar concerns described in the literature. AE reports related to design concerns are increasing and fatalities related to accidental exposure and misuse have occurred; however, no publications evaluating the relationship between AEs and HF were identified. Conclusions The HF impacting e-cigarette use and related hazards are inadequately characterised. Thorough analyses of user–product–environment interfaces, product complexities and AEs associated with typical and atypical use are needed to better incorporate HF engineering principles to inform and potentially reduce or mitigate the emerging hazards associated with e-cigarette products. PMID:24732164

  20. Asilomar moments: formative framings in recombinant DNA and solar climate engineering research.

    PubMed

    Schäfer, Stefan; Low, Sean

    2014-12-28

    We examine the claim that in governance for solar climate engineering research, and especially field tests, there is no need for external governance beyond existing mechanisms such as peer review and environmental impact assessments that aim to assess technically defined risks to the physical environment. By drawing on the historical debate on recombinant DNA research, we show that defining risks is not a technical question but a complex process of narrative formation. Governance emerges from within, and as a response to, narratives of what is at stake in a debate. In applying this finding to the case of climate engineering, we find that the emerging narrative differs starkly from the narrative that gave meaning to rDNA technology during its formative period, with important implications for governance. While the narrative of rDNA technology was closed down to narrowly focus on technical risks, that of climate engineering continues to open up and includes social, political and ethical issues. This suggests that, in order to be legitimate, governance must take into account this broad perception of what constitutes the relevant issues and risks of climate engineering, requiring governance that goes beyond existing mechanisms that focus on technical risks. Even small-scale field tests with negligible impacts on the physical environment warrant additional governance as they raise broader concerns that go beyond the immediate impacts of individual experiments. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. Modeling of a Turbofan Engine with Ice Crystal Ingestion in the NASA Propulsion System Laboratory

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Jorgenson, Philip C. E.; Jones, Scott M.; Nili, Samaun

    2017-01-01

    The main focus of this study is to apply a computational tool for the flow analysis of the turbine engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center. The PSL has been used to test a highly instrumented Honeywell ALF502R-5A (LF11) turbofan engine at simulated altitude operating conditions. Test data analysis with an engine cycle code and a compressor flow code was conducted to determine the values of key icing parameters that can indicate the risk of ice accretion, which can lead to engine rollback (un-commanded loss of engine thrust). The full engine aerothermodynamic performance was modeled with the Honeywell Customer Deck specifically created for the ALF502R-5A engine. The mean-line compressor flow analysis code, which includes a code that models the state of the ice crystal, was used to model the air flow through the fan-core and low pressure compressor. The results of the compressor flow analyses included calculations of the ice-water flow rate to air flow rate ratio (IWAR), the local static wet bulb temperature, and the particle melt ratio throughout the flow field. It was found that the assumed particle size had a large effect on the particle melt ratio, and on the local wet bulb temperature. In this study the particle size was varied parametrically to produce a non-zero calculated melt ratio in the exit guide vane (EGV) region of the low pressure compressor (LPC) for the data points that experienced a growth of blockage there, and a subsequent engine called rollback (CRB). At data points where the engine experienced a CRB and had the lowest wet bulb temperature of 492 degrees Rankine at the EGV trailing edge, the smallest particle size that produced a non-zero melt ratio (between 3 and 4 percent) was on the order of 1 micron. This value of melt ratio was utilized as the target for all other subsequent data points analyzed, while the particle size was varied from 1 to 9.5 microns to achieve the target melt ratio. For data points that did not experience a CRB, which had static wet bulb temperatures in the EGV region below 492 degrees Rankine, a non-zero melt ratio could not be achieved even with a 1 micron ice particle size. The highest value of static wet bulb temperature for data points that experienced engine CRB was 498 degrees Rankine with a particle size of 9.5 microns. Based on this study of the LF11 engine test data, the static wet bulb temperature at the EGV exit for engine CRB fell within the narrow range of 492 to 498 degrees Rankine, while the minimum value of IWAR was 0.002. The rate of blockage growth due to ice accretion and boundary layer growth was estimated by scaling from a known blockage growth rate that was determined in a previous study. These results, obtained from the LF11 engine analysis, formed the basis of a unique “icing wedge.”
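
    A minimal sketch of the screening implied by the reported thresholds, with placeholder flow values: the IWAR floor of 0.002 and the 492 to 498 degrees Rankine wet bulb band at the EGV exit are taken from the abstract, while the flow numbers below are hypothetical.

    def rollback_risk(ice_flow_lbm_s, air_flow_lbm_s, wet_bulb_R):
        """Flag conditions consistent with the LF11 rollback observations."""
        iwar = ice_flow_lbm_s / air_flow_lbm_s        # ice-water to air flow ratio
        in_temp_band = 492.0 <= wet_bulb_R <= 498.0   # EGV-exit static wet bulb, deg R
        return iwar, (iwar >= 0.002) and in_temp_band

    # Hypothetical operating point, not PSL test data.
    iwar, at_risk = rollback_risk(ice_flow_lbm_s=0.25, air_flow_lbm_s=100.0,
                                  wet_bulb_R=495.0)
    print(f"IWAR = {iwar:.4f}, rollback-risk conditions met: {at_risk}")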

  2. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Greene, William D.

    2017-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. During the ABEDRR effort, the Dynetics Team has modified flight-proven Apollo-Saturn F-1 engine components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the objectives of this work are to demonstrate combustion stability and measure performance of a 500,000 lbf class Oxidizer-Rich Staged Combustion (ORSC) cycle main injector. A trade study was completed to investigate the feasibility, cost effectiveness, and technical maturity of a domestically-produced engine that could potentially both replace the RD-180 on Atlas V and satisfy NASA SLS payload-to-orbit requirements via an advanced booster application. Engine physical dimensions and performance parameters resulting from this study provide the system level requirements for the ORSC risk reduction test article. The test article is scheduled to complete fabrication and assembly soon and continue testing through late 2019. Dynetics has also designed, developed, and built innovative tank and structure assemblies using friction stir welding to leverage recent NASA investments in manufacturing tools, facilities, and processes, significantly reducing development and recurring costs. The full-scale cryotank assembly was used to verify the structural design and prove affordable processes. Dynetics performed hydrostatic and cryothermal proof tests on the assembly to verify the assembly meets performance requirements.

  3. A Sensitivity Study of Commercial Aircraft Engine Response for Emergency Situations

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; May, Ryan D.; Litt, Jonathan S.; Guo, Ten-Huei

    2011-01-01

    This paper contains the details of a sensitivity study in which the variation in a commercial aircraft engine's outputs is observed for perturbations in its operating condition inputs or control parameters. This study seeks to determine the extent to which various controller limits can be modified to improve engine performance, while capturing the increased risk that results from the changes. In an emergency, the engine may be required to produce additional thrust, respond faster, or both, to improve the survivability of the aircraft. The objective of this paper is to propose changes to the engine controller and determine the costs and benefits of the additional capabilities produced by the engine. This study indicates that the aircraft engine is capable of producing additional thrust, but at the cost of an increased risk of an engine failure due to higher turbine temperatures and rotor speeds. The engine can also respond more quickly to transient commands, but this action reduces the remaining stall margin to possibly dangerous levels. To improve transient response in landing scenarios, a control mode known as High Speed Idle is proposed that increases the responsiveness of the engine and conserves stall margin.

  4. Test Planning Approach and Lessons

    NASA Technical Reports Server (NTRS)

    Parkinson, Douglas A.; Brown, Kendall K.

    2004-01-01

    As NASA began technology risk reduction activities and planning for the next generation launch vehicle under the Space Launch Initiative (SLI), now the Next Generation Launch Technology (NGLT) Program, a review of past large liquid rocket engine development programs was performed. The intent of the review was to identify any significant lessons from the development testing programs that could be applied to current and future engine development programs. Because the primary prototype engine in design at the time of this study was the Boeing-Rocketdyne RS-84, the study was slightly biased towards LOX/RP-1 liquid propellant engines. However, the significant lessons identified are universal. It is anticipated that these lessons will serve as a reference for test planning in the Engine Systems Group at Marshall Space Flight Center (MSFC). Towards the end of F-1 and J-2 engine development testing, NASA/MSFC asked Rocketdyne to review those test programs. The result was a document titled Study to Accelerate Development by Test of a Rocket Engine (R-8099). The "intent (of this study) is to apply this thinking and learning to more efficiently develop rocket engines to high reliability with improved cost effectiveness." Additionally, several other engine programs were reviewed, such as SSME, NSTS, STME, MC-1, and RS-83, to support or refute R-8099. R-8099 revealed two primary lessons for test planning, which were supported by the other engine development programs. First, engine development programs can benefit from arranging the test program for engine system testing as early as feasible. The best test for determining environments is at the system level, the closest to the operational flight environment. Secondly, the component testing, which tends to be elaborate, should instead be geared towards reducing risk to enable system test. Technical risk can be reduced at the component level, but the design can only be truly verified and validated after engine system testing.

  5. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
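
    A minimal sketch of the basic idea behind such a tool: take a large Monte Carlo data set, isolate the failed cases, and rank which input dispersions shift most in the failed subset so the analyst knows where to look first. The input names, failure criterion, and synthetic data are hypothetical, not the Orion data set.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 20_000

    # Synthetic Monte Carlo input dispersions and a hypothetical failure flag.
    inputs = {"entry_angle": rng.normal(0.0, 1.0, n),
              "wind_bias": rng.normal(0.0, 1.0, n),
              "mass_error": rng.normal(0.0, 1.0, n)}
    failure = (inputs["entry_angle"] + 0.2 * inputs["wind_bias"]
               + 0.1 * rng.normal(size=n)) > 2.0

    # Rank inputs by how far their mean shifts (in sigmas) within the failed cases.
    for name, x in inputs.items():
        shift = (x[failure].mean() - x.mean()) / x.std()
        print(f"{name:12s} mean shift in failed cases: {shift:+.2f} sigma")
    print(f"failure rate: {failure.mean():.3%}")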

  6. CECE: A Deep Throttling Demonstrator Cryogenic Engine for NASA's Lunar Lander

    NASA Technical Reports Server (NTRS)

    Giuliano, Victor J.; Leonard, Timothy G.; Adamski, Walter M.; Kim, Tony S.

    2007-01-01

    As one of the first technology development programs awarded under NASA's Vision for Space Exploration, the Pratt & Whitney Rocketdyne (PWR) Deep Throttling, Common Extensible Cryogenic Engine (CECE) program was selected by NASA in November 2004 to begin technology development and demonstration toward a deep throttling, cryogenic Lunar Lander engine for use across multiple human and robotic lunar exploration mission segments with extensibility to Mars. The CECE program leverages the maturity and previous investment of a flight-proven hydrogen/oxygen expander cycle engine, the RL10, to develop and demonstrate an unprecedented combination of reliability, safety, durability, throttlability, and restart capabilities in a high-energy, cryogenic engine. NASA Marshall Space Flight Center and NASA Glenn Research Center personnel were integral design and analysis team members throughout the requirements assessment, propellant studies and the deep throttling demonstrator elements of the program. The testbed selected for the initial deep throttling demonstration phase of this program was a minimally modified RL10 engine, allowing for maximum current production engine commonality and extensibility with minimum program cost. In just nine months from technical program start, CECE Demonstrator No. 1 engine testing in April/May 2006 at PWR's E06 test stand successfully demonstrated in excess of 10:1 throttling of the hydrogen/oxygen expander cycle engine. This test provided an early demonstration of a viable, enabling cryogenic propulsion concept with invaluable system-level technology data acquisition toward design and development risk mitigation for both the subsequent CECE Demonstrator No. 2 program and to the future Lunar Lander Design, Development, Test and Evaluation effort.

  7. Identification and Classification of Common Risks in Space Science Missions

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Chattopadhyay, Debarati; Hanna, Robert A.; Port, Daniel; Eggleston, Sabrina

    2010-01-01

    Due to the highly constrained schedules and budgets that NASA missions must contend with, the identification and management of cost, schedule and risks in the earliest stages of the lifecycle is critical. At the Jet Propulsion Laboratory (JPL) it is the concurrent engineering teams that first address these items in a systematic manner. Foremost of these concurrent engineering teams is Team X. Started in 1995, Team X has carried out over 1000 studies, dramatically reducing the time and cost involved, and has been the model for other concurrent engineering teams both within NASA and throughout the larger aerospace community. The ability to do integrated risk identification and assessment was first introduced into Team X in 2001. Since that time the mission risks identified in each study have been kept in a database. In this paper we will describe how the Team X risk process is evolving highlighting the strengths and weaknesses of the different approaches. The paper will especially focus on the identification and classification of common risks that have arisen during Team X studies of space based science missions.

  8. A review on risk assessment techniques for hydraulic fracturing water and produced water management implemented in onshore unconventional oil and gas production.

    PubMed

    Torres, Luisa; Yadav, Om Prakash; Khan, Eakalak

    2016-01-01

    The objective of this paper is to review different risk assessment techniques applicable to onshore unconventional oil and gas production to determine the risks to water quantity and quality associated with hydraulic fracturing and produced water management. Water resources could be at risk without proper management of water, chemicals, and produced water. Previous risk assessments in the oil and gas industry were performed from an engineering perspective leaving aside important social factors. Different risk assessment methods and techniques are reviewed and summarized to select the most appropriate one to perform a holistic and integrated analysis of risks at every stage of the water life cycle. Constraints to performing risk assessment are identified including gaps in databases, which require more advanced techniques such as modeling. Discussions on each risk associated with water and produced water management, mitigation strategies, and future research direction are presented. Further research on risks in onshore unconventional oil and gas will benefit not only the U.S. but also other countries with shale oil and gas resources. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Performance analysis and dynamic modeling of a single-spool turbojet engine

    NASA Astrophysics Data System (ADS)

    Andrei, Irina-Carmen; Toader, Adrian; Stroe, Gabriela; Frunzulica, Florin

    2017-01-01

    The purposes of modeling and simulation of a turbojet engine are steady-state analysis and transient analysis. The steady-state analysis investigates the equilibrium operating regimes and is based on appropriate modeling of the turbojet engine at design and off-design conditions; it yields the performance analysis, summarized in the engine's operational maps (i.e., the altitude map, velocity map, and speed map) and the engine's universal map. The mathematical model that allows calculation of the design and off-design performance of a single-spool turbojet is detailed. An in-house code was developed and calibrated against the J85 turbojet engine as the test case. The dynamic model of the turbojet engine is obtained from the energy balance equations for the compressor, combustor, and turbine, as the engine's main parts. The transient analysis, based on appropriate modeling of the engine and its main parts, expresses the dynamic behavior of the turbojet engine and, further, provides details regarding the engine's control. The aim of the dynamic analysis is to determine a control program for the turbojet, based on the results provided by the performance analysis. In the case of a single-spool turbojet engine with fixed nozzle geometry, the thrust is controlled by one parameter, the fuel flow rate. The design and management of aircraft engine controls are based on the results of the transient analysis. The construction of the design model is complex, since it is based on both steady-state and transient analyses, further allowing flight path cycle analysis and optimization. This paper presents numerical simulations for a single-spool turbojet engine (the J85 as test case), with appropriate modeling for steady-state and dynamic analysis.
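
    A minimal sketch of the transient element described above: a single spool speed state driven by the power imbalance between turbine and compressor, with fuel flow as the single control input. The component maps and constants are crude placeholders, not the calibrated J85 model.

    import numpy as np

    # Hypothetical single-spool dynamics: I * omega * d(omega)/dt = P_turbine - P_compressor.
    I = 0.05      # polar moment of inertia, kg m^2 (placeholder)
    KC = 1.0e-4   # crude compressor power map, P_c = KC * omega**3
    KT = 2.0e3    # crude turbine power map,    P_t = KT * fuel * omega

    def step(omega, fuel, dt=0.01):
        domega = (KT * fuel * omega - KC * omega**3) / (I * omega)
        return omega + dt * domega

    # Transient response to a step in fuel flow rate, the only control parameter
    # of a fixed-geometry single-spool turbojet.
    omega = 1000.0                          # start at the fuel = 0.05 equilibrium
    for t in np.arange(0.0, 5.0, 0.01):
        fuel = 0.05 if t < 1.0 else 0.08    # fuel step at t = 1 s
        omega = step(omega, fuel)
    print(f"spool speed settles near {omega:.0f} rad/s after the fuel step")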

  10. Estimation of the Long-term Cardiovascular Events Using UKPDS Risk Engine in Metabolic Syndrome Patients.

    PubMed

    Shivakumar, V; Kandhare, A D; Rajmane, A R; Adil, M; Ghosh, P; Badgujar, L B; Saraf, M N; Bodhankar, S L

    2014-03-01

    Long-term cardiovascular complications in metabolic syndrome are a major cause of mortality and morbidity in India, and forecasted estimates in this domain of research are scarcely reported in the literature. The aim of the present investigation is to estimate the cardiovascular events associated with a representative Indian population of patients suffering from metabolic syndrome using the United Kingdom Prospective Diabetes Study risk engine. Patient-level data were collated from 567 patients suffering from metabolic syndrome through structured interviews and physician records regarding the input variables, which were entered into the United Kingdom Prospective Diabetes Study risk engine. Patients with metabolic syndrome were selected according to the guidelines of the National Cholesterol Education Program - Adult Treatment Panel III, the modified National Cholesterol Education Program - Adult Treatment Panel III, and the International Diabetes Federation criteria. A projection for 10 simulated years was run on the engine and the output was determined. The data for each patient were processed using the United Kingdom Prospective Diabetes Study risk engine to calculate an estimate of the forecasted value for the cardiovascular complications after a period of 10 years. The absolute risk (95% confidence interval) for coronary heart disease, fatal coronary heart disease, stroke and fatal stroke for 10 years was 3.79 (1.5-3.2), 9.6 (6.8-10.7), 7.91 (6.5-9.9) and 3.57 (2.3-4.5), respectively. The relative risk (95% confidence interval) for coronary heart disease, fatal coronary heart disease, stroke and fatal stroke was 17.8 (12.98-19.99), 7 (6.7-7.2), 5.9 (4.0-6.6) and 4.7 (3.2-5.7), respectively. Simulated projections of metabolic syndrome patients predict serious life-threatening cardiovascular consequences in the representative cohort of patients in western India.
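
    A sketch of the general structure behind such risk engines: an annual hazard that is scaled by the patient's risk factors and grows geometrically with diabetes duration, accumulated into a multi-year event probability. The coefficient values below are placeholders, not the published UKPDS parameters.

    import math

    def ten_year_risk(q, d=1.08, years=10):
        """Cumulative event risk when the hazard in year t is q * d**(t - 1).

        q : annual hazard already scaled for the patient's risk factors
            (age, sex, HbA1c, blood pressure, lipids, smoking, ...).
        d : year-on-year hazard growth with diabetes duration (placeholder).
        """
        cumulative_hazard = q * (d**years - 1.0) / (d - 1.0)
        return 1.0 - math.exp(-cumulative_hazard)

    # Hypothetical patient-level hazards standing in for the risk-factor model.
    for label, q in [("lower-risk patient", 0.004), ("higher-risk patient", 0.012)]:
        print(f"{label}: 10-year risk = {ten_year_risk(q):.1%}")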

  11. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
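
    A minimal sketch of the FMEA scoring step mentioned above: each failure mode receives severity, occurrence, and detectability ratings whose product, the risk priority number, ranks where mitigation effort should go first. The failure modes and ratings below are illustrative, not taken from the article.

    # Illustrative failure modes with 1-10 ratings: (severity, occurrence, detectability).
    failure_modes = {
        "wrong patient plan loaded": (9, 2, 4),
        "MLC leaf position error": (6, 4, 3),
        "incorrect CT density override": (7, 3, 6),
    }

    def rpn(ratings):
        severity, occurrence, detectability = ratings
        return severity * occurrence * detectability   # risk priority number

    for mode, ratings in sorted(failure_modes.items(),
                                key=lambda kv: rpn(kv[1]), reverse=True):
        print(f"RPN {rpn(ratings):3d}  {mode}")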

  12. Importance of Engineering History Education

    NASA Astrophysics Data System (ADS)

    Arakawa, Fumio

    It is needless to restate the importance of education for the advancement of engineering. IEEJ called for the establishment of ICEE in 1994, where education is held in high regard, though the discussion of it has not been working well. Generally speaking, education has been one of the most important national strategies, particularly at times of political and economic development, and science and technology education is, of course, no exception. Yet around 2000 the public seemed to pay little attention to science and technology, treating them as everyday matters. As a result, fields such as power systems and heavy electric machinery engineering have come to be described as “endangered”. So far, many engineers have tried not to become involved in social issues, but today they cannot avoid facing the risks of such issues: patent rights, troubles and accidents arising from the application of high technology, information security in the use of computers, and engineering ethics. One of the most appropriate approaches to managing these risks is to learn lessons from the past, that is, from history, so that the ideas it suggests can be put to full use in risk management. The author presented the global importance of education, particularly of engineering history education for engineering ethics, at ICEE 2010 held in Busan, Korea, on its 16th anniversary.

  13. Reliability, Maintenance and Risk Assessment in Naval Architecture and Marine Engineering Education in the US.

    ERIC Educational Resources Information Center

    Inozu, Bahadir; Ayyub, Bilal A.

    1999-01-01

    Examines the current status of existing curricula, accreditation requirements, and new developments in Naval Architecture and Marine Engineering education in the United States. Discusses the emerging needs of the maritime industry in light of advances in information technology and movement toward risk-based, reliability-centered rule making in the…

  14. The Attitude of Civil Engineering Students towards Health and Safety Risk Management: A Case Study

    ERIC Educational Resources Information Center

    Petersen, A. K.; Reynolds, J. H.; Ng, L. W. T.

    2008-01-01

    The highest rate of accidents and injuries in British industries has been reported by the construction industry during the past decade. Since then stakeholders have recognised that a possible solution would be to inculcate a good attitude towards health and safety risk management in undergraduate civil engineering students and construction…

  15. An Engineering Learning Community to Promote Retention and Graduation of At-Risk Engineering Students

    ERIC Educational Resources Information Center

    Ricks, Kenneth G.; Richardson, James A.; Stern, Harold P.; Taylor, Robert P.; Taylor, Ryan A.

    2014-01-01

    Retention and graduation rates for engineering disciplines are significantly lower than desired, and research literature offers many possible causes. Engineering learning communities provide the opportunity to study relationships among specific causes and to develop and evaluate activities designed to lessen their impact. This paper details an…

  16. Environmental mediation: A method for protecting environmental sciences and scientists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigerstad, T.J.; Berdt Romilly, G. de; MacKeigan, P.

    1995-12-31

    The primary role for scientific analysis of environmental and human risks has been to support decisions that have arisen out of a regulatory decision-making model called "Command and Control" or "Decide and Defend". A project or a policy is proposed and permission for its implementation is sought. Permission-gaining sometimes requires a number of technical documents: Environmental Impact Statements, Public Health Risk Evaluations, policy analysis documents. Usually, little of this analysis is used to make any real decisions. This is a fact that has led to enormous frustration and an atmosphere of distrust of government, industry and consulting scientists. There have been a number of responses by governmental and industrial managers, some scientists, and even the legal system, to mitigate the frustration and distrust. One response has been to develop methods of packaging information using language which is considered more "understandable" to the public: Ecosystem Health, Social Risk Assessment, Economic Risk Management, Enviro-hazard Communication, Risk Focus Analysis, etc. A second is to develop more sophisticated persuasion techniques, a potential misuse of Risk Communication. A third is proposing to change the practice of science itself: e.g., "post-normal science" and "popular epidemiology". A fourth has been to challenge the definition of "expert" in legal proceedings. None of these approaches appears to address the underlying issue: lack of trust and credibility. To address this issue requires an understanding of the nature of environmental disputes and the development of an atmosphere of trust and credibility. The authors propose Environmental Mediation as a response to the dilemma faced by professional environmental scientists, engineers, and managers that protects the professionals and their disciplines.

  17. Construction risk assessment of deep foundation pit in metro station based on G-COWA method

    NASA Astrophysics Data System (ADS)

    You, Weibao; Wang, Jianbo; Zhang, Wei; Liu, Fangmeng; Yang, Diying

    2018-05-01

    In order to gain an accurate understanding of the construction safety of deep foundation pits in metro stations and to reduce the probability and consequences of risk occurrence, a risk assessment method based on G-COWA is proposed. First, drawing on specific engineering examples and the construction characteristics of deep foundation pits, an evaluation index system based on the five factors of “human, management, technology, material and environment” is established. Second, the C-OWA operator is introduced to weight the evaluation indices and weaken the negative influence of subjective expert preference. Gray cluster analysis and the fuzzy comprehensive evaluation method are then combined to construct the construction risk assessment model for the deep foundation pit, which can effectively handle the uncertainties involved. Finally, the model is applied to the actual deep foundation pit project of Qingdao Metro North Station; its construction risk rating is determined to be “medium”, showing that the model is feasible and reasonable. Corresponding control measures are then put forward and a useful reference is provided.
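
    A sketch of the index-weighting step, assuming the common C-OWA formulation in which the expert scores for each index are sorted in descending order and combined with binomial-coefficient weights; the exact variant used in the paper, and the expert scores below, are assumptions.

    from math import comb

    def c_owa_weights(expert_scores_per_index):
        """Combine expert scores into normalized index weights with a C-OWA operator."""
        absolute = []
        for scores in expert_scores_per_index:
            b = sorted(scores, reverse=True)             # order the expert scores
            n = len(b)
            alphas = [comb(n - 1, j) / 2 ** (n - 1) for j in range(n)]
            absolute.append(sum(a * s for a, s in zip(alphas, b)))
        total = sum(absolute)
        return [w / total for w in absolute]             # relative index weights

    # Hypothetical expert scores (0-10) for the five first-level indices.
    scores = [[8, 7, 9, 8], [7, 7, 8, 6], [9, 8, 8, 9], [6, 5, 7, 6], [7, 8, 6, 7]]
    for name, w in zip(["human", "management", "technology", "material", "environment"],
                       c_owa_weights(scores)):
        print(f"{name:12s} weight = {w:.3f}")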

  18. Risk-informed Management of Water Infrastructure in the United States: History, Development, and Best Practices

    NASA Astrophysics Data System (ADS)

    Wolfhope, J.

    2017-12-01

    This presentation will focus on the history, development, and best practices for evaluating the risks associated with the portfolio of water infrastructure in the United States. These practices have evolved from the early development of the Federal Guidelines for Dam Safety and the establishment of the National Dam Safety Program, to the most recent update of the Best Practices for Dam and Levee Risk Analysis jointly published by the U.S. Department of Interior Bureau of Reclamation and the U.S. Army Corps of Engineers. Since President Obama signed the Water Infrastructure Improvements for the Nation (WIIN) Act on December 16, 2016, adding a new grant program under FEMA's National Dam Safety Program, the focus has been on establishing a risk-based priority system for use in identifying eligible high hazard potential dams for which grants may be made. Finally, the presentation provides thoughts on the future direction and priorities for managing the risk of dams and levees in the United States.

  19. Assess/Mitigate Risk through the Use of Computer-Aided Software Engineering (CASE) Tools

    NASA Technical Reports Server (NTRS)

    Aguilar, Michael L.

    2013-01-01

    The NASA Engineering and Safety Center (NESC) was requested to perform an independent assessment of the mitigation of the Constellation Program (CxP) Risk 4421 through the use of computer-aided software engineering (CASE) tools. With the cancellation of the CxP, the assessment goals were modified to capture lessons learned and best practices in the use of CASE tools. The assessment goal was to prepare the next program for the use of these CASE tools. The outcome of the assessment is contained in this document.

  20. Space Station logistics policy - Risk management from the top down

    NASA Technical Reports Server (NTRS)

    Paules, Granville; Graham, James L., Jr.

    1990-01-01

    Considerations are presented in the area of risk management specifically relating to logistics and system supportability. These considerations form a basis for confident application of concurrent engineering principles to a development program, aiming at simultaneous consideration of support and logistics requirements within the engineering process as the system concept and designs develop. It is shown that, by applying such a process, the chances of minimizing program logistics and supportability risk in the long term can be improved. The problem of analyzing and minimizing integrated logistics risk for the Space Station Freedom Program is discussed.

  1. Systems Engineering Lessons Learned from Solar Array Structures and Mechanisms Deployment

    NASA Technical Reports Server (NTRS)

    Vipavetz, Kevin; Kraft, Thomas

    2013-01-01

    This report has been developed by the National Aeronautics and Space Administration (NASA) Human Exploration and Operations Mission Directorate (HEOMD) Risk Management team in close coordination with the Engineering Directorate at LaRC. This document provides a point-in-time, cumulative summary of actionable key lessons learned derived from the design project. Lessons learned invariably address challenges and risks and the way in which these areas have been addressed. Accordingly, the risk management thread is woven throughout the document.

  2. Assessment of exposure to polycyclic aromatic hydrocarbons in engine rooms by measurement of urinary 1-hydroxypyrene.

    PubMed Central

    Moen, B E; Nilsson, R; Nordlinder, R; Ovrebø, S; Bleie, K; Skorve, A H; Hollund, B E

    1996-01-01

    OBJECTIVE: Machinists have an increased risk of lung cancer and bladder cancer, and this may be caused by exposure to carcinogenic compounds such as asbestos and polycyclic aromatic hydrocarbons (PAHs) in the engine room. The aim of this study was to investigate the exposure of engine room personnel to PAHs, with 1-hydroxypyrene in urine as a biomarker. METHODS: Urine samples from engine room personnel (n = 51) on 10 ships arriving in different harbours were collected, as well as urine samples from a similar number of unexposed controls (n = 47) on the same ships. Urinary 1-hydroxypyrene was quantitatively measured by high performance liquid chromatography. The exposure to PAHs was estimated by a questionnaire answered by the engine room personnel. On two ships, air monitoring of PAHs in the engine room was performed at sea. Both personal monitoring and area monitoring were performed. The compounds were analysed by gas chromatography of two types (with a flame ionisation detector and with a mass spectrometer). RESULTS: Significantly more 1-hydroxypyrene was found in urine of personnel who had been working in the engine room for the past 24 hours, than in that of the unexposed seamen. The highest concentrations of 1-hydroxypyrene were found among engine room personnel who had experienced oil contamination of the skin during their work in the engine room. Stepwise logistic regression analysis showed a significant relation between the concentrations of 1-hydroxypyrene, smoking, and estimated exposure to PAHs. No PAHs were detected in the air samples. CONCLUSION: Engine room personnel who experience skin exposure to oil and oil products are exposed to PAHs during their work. This indicates that dermal uptake of PAHs is the major route of exposure. PMID:8943834

  3. Engineering Factor Xa Inhibitor with Multiple Platelet-Binding Sites Facilitates its Platelet Targeting

    NASA Astrophysics Data System (ADS)

    Zhu, Yuanjun; Li, Ruyi; Lin, Yuan; Shui, Mengyang; Liu, Xiaoyan; Chen, Huan; Wang, Yinye

    2016-07-01

    Targeted delivery of antithrombotic drugs concentrates their effects at the thrombosis site and reduces hemorrhagic side effects in uninjured vessels. We have recently reported that platelet-targeting factor Xa (FXa) inhibitors, constructed by engineering one Arg-Gly-Asp (RGD) motif into Ancylostoma caninum anticoagulant peptide 5 (AcAP5), can reduce the risk of systemic bleeding relative to non-targeted AcAP5 in a mouse arterial injury model. Increasing the number of platelet-binding sites of FXa inhibitors may facilitate their adhesion to activated platelets and further lower bleeding risks. For this purpose, we introduced three RGD motifs into AcAP5 to generate a variant, NR4, containing three platelet-binding sites. NR4 retained its inherent anti-FXa activity. Protein-protein docking showed that all three RGD motifs were capable of binding to the platelet receptor αIIbβ3. Molecular dynamics simulation demonstrated that NR4 has more opportunities to interact with αIIbβ3 than single-RGD-containing NR3. Flow cytometry analysis and a rat arterial thrombosis model further confirmed that NR4 possesses enhanced platelet-targeting activity. Moreover, NR4-treated mice showed a trend toward shorter tail bleeding times than NR3-treated mice in a carotid artery endothelium injury model. Therefore, our data suggest that engineering multiple binding sites into one recombinant protein is a useful tool to improve its platelet-targeting efficiency.

  4. Katherine Young, P.E. | NREL

    Science.gov Websites

    Research interests: water rights and resources engineering; database planning and development; lean principles to streamline exploration and drilling and reduce error/risk; groundwater modeling; quantitative methods in water resource engineering; water resource engineering research and development.

  5. DESCRIPTION OF RISK REDUCTION ENGINEERING LABORATORY TEST AND EVALUATION FACILITIES

    EPA Science Inventory

    An onsite team of multidisciplined engineers and scientists conduct research and provide technical services in the areas of testing, design, and field implementation for both solid and hazardous waste management. Engineering services focus on the design and implementation of...

  6. Risk-based zoning for urbanizing floodplains.

    PubMed

    Porse, Erik

    2014-01-01

    Urban floodplain development brings economic benefits and enhanced flood risks. Rapidly growing cities must often balance the economic benefits and increased risks of floodplain settlement. Planning can provide multiple flood mitigation and environmental benefits by combining traditional structural measures such as levees, increasingly popular landscape and design features (green infrastructure), and non-structural measures such as zoning. Flexibility in both structural and non-structural options, including zoning procedures, can reduce flood risks. This paper presents a linear programming formulation to assess cost-effective urban floodplain development decisions that consider benefits and costs of development along with expected flood damages. It uses a probabilistic approach to identify combinations of land-use allocations (residential and commercial development, flood channels, distributed runoff management) and zoning regulations (development zones in channel) to maximize benefits. The model is applied to a floodplain planning analysis for an urbanizing region in the Baja Sur peninsula of Mexico. The analysis demonstrates how (1) economic benefits drive floodplain development, (2) flexible zoning can improve economic returns, and (3) cities can use landscapes, enhanced by technology and design, to manage floods. The framework can incorporate additional green infrastructure benefits, and bridges typical disciplinary gaps for planning and engineering.
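
    As a rough illustration of the kind of linear program the paper formulates (this sketch is not the authors' model), the allocation below maximises development benefit net of expected flood damage over a fixed land area, with one zoning constraint; all coefficients are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Decision variables: hectares allocated to
    # [residential, commercial, flood channel, distributed runoff management]
    benefit = np.array([120.0, 200.0, 0.0, 10.0])        # development benefit per ha (illustrative)
    exp_damage = np.array([40.0, 60.0, -30.0, -15.0])    # expected flood damage per ha
    # (negative entries: channels and runoff management reduce expected damages)

    c = -(benefit - exp_damage)          # linprog minimises, so negate the net benefit

    A_ub = [
        [1, 1, 1, 1],                    # total developable area limit
        [0, 0, -1, 0],                   # zoning rule: flood channel >= 10 ha (written as -channel <= -10)
    ]
    b_ub = [100.0, -10.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
    print(res.x)        # optimal allocation in hectares
    print(-res.fun)     # maximised expected net benefit
    ```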

  7. Advances on a Decision Analytic Approach to Exposure-Based Chemical Prioritization.

    PubMed

    Wood, Matthew D; Plourde, Kenton; Larkin, Sabrina; Egeghy, Peter P; Williams, Antony J; Zemba, Valerie; Linkov, Igor; Vallero, Daniel A

    2018-05-11

    The volume and variety of manufactured chemicals is increasing, although little is known about the risks associated with the frequency and extent of human exposure to most chemicals. The EPA and the recent signing of the Lautenberg Act have both signaled the need for high-throughput methods to characterize and screen chemicals based on exposure potential, such that more comprehensive toxicity research can be informed. Prior work of Mitchell et al. using multicriteria decision analysis tools to prioritize chemicals for further research is enhanced here, resulting in a high-level chemical prioritization tool for risk-based screening. Reliable exposure information is a key gap in currently available engineering analytics to support predictive environmental and health risk assessments. An elicitation with 32 experts informed relative prioritization of risks from chemical properties and human use factors, and the values for each chemical associated with each metric were approximated with data from EPA's CP_CAT database. Three different versions of the model were evaluated using distinct weight profiles, resulting in three different ranked chemical prioritizations with only a small degree of variation across weight profiles. Future work will aim to include greater input from human factors experts and better define qualitative metrics. © 2018 Society for Risk Analysis.
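
    A minimal sketch of the weighted-sum multicriteria scoring idea behind such a prioritization tool, assuming invented chemicals, metrics, and weight profiles (the actual tool uses expert-elicited weights and data from EPA's CP_CAT database):

    ```python
    import numpy as np

    # Rows: chemicals; columns: exposure-relevant metrics
    # (e.g., production volume, persistence, human-use intensity) scaled to [0, 1].
    scores = np.array([
        [0.9, 0.4, 0.7],   # chemical A
        [0.2, 0.8, 0.5],   # chemical B
        [0.6, 0.6, 0.9],   # chemical C
    ])

    # Three alternative weight profiles (e.g., from different expert groups); each row sums to 1.
    weight_profiles = np.array([
        [0.50, 0.30, 0.20],
        [0.20, 0.50, 0.30],
        [0.33, 0.33, 0.34],
    ])

    for w in weight_profiles:
        priority = scores @ w                    # weighted-sum value for each chemical
        ranking = np.argsort(priority)[::-1]     # highest priority first
        print(ranking, np.round(priority, 3))
    ```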

  8. Seismic risk assessment and application in the central United States

    USGS Publications Warehouse

    Wang, Z.

    2011-01-01

    Seismic risk is a somewhat subjective, but important, concept in earthquake engineering and other related decision-making. Another important concept that is closely related to seismic risk is seismic hazard. Although seismic hazard and seismic risk have often been used interchangeably, they are fundamentally different: seismic hazard describes the natural phenomenon or physical property of an earthquake, whereas seismic risk describes the probability of loss or damage that could be caused by a seismic hazard. The distinction between seismic hazard and seismic risk is of practical significance because measures for seismic hazard mitigation may differ from those for seismic risk reduction. Seismic risk assessment is a complicated process and starts with seismic hazard assessment. Although probabilistic seismic hazard analysis (PSHA) is the most widely used method for seismic hazard assessment, recent studies have found that PSHA is not scientifically valid. Use of PSHA will lead to (1) artifact estimates of seismic risk, (2) misleading use of the annual probability of exceedance (i.e., the probability of exceedance in one year) as a frequency (per year), and (3) numerical creation of extremely high ground motion. An alternative approach, which is similar to those used for flood and wind hazard assessments, has been proposed. © 2011 ASCE.
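
    The distinction the author draws between an annual probability of exceedance and a frequency can be made concrete with a short sketch (not from the paper), assuming independent years with a constant annual exceedance probability:

    ```python
    def prob_exceedance_in_t_years(annual_prob, t_years):
        """Probability of at least one exceedance in t years, assuming
        independent years with the same annual probability of exceedance."""
        return 1.0 - (1.0 - annual_prob) ** t_years

    p_annual = 1.0 / 2475                              # the "2%-in-50-years" hazard level
    print(prob_exceedance_in_t_years(p_annual, 50))    # ~0.02 over 50 years
    print(prob_exceedance_in_t_years(p_annual, 1))     # an annual probability, not a rate per year
    ```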

  9. A queueing theory based model for business continuity in hospitals.

    PubMed

    Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R

    2013-01-01

    Clinical activities can be seen as the result of a precise and defined succession of events, in which every single phase is characterized by a waiting time that includes working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to the working load alone is not enough. A risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used to evaluate the possible interventions and to protect the whole system from technology failures. The following paper reports a case study on the application of the proposed integrated model, including a risk analysis approach and a queuing theory model, for defining the proper number of devices essential to guarantee medical activity and comply with business continuity management requirements in hospitals.
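
    A minimal queueing sketch of the device-sizing idea, assuming a simple M/M/c model with invented arrival and service rates rather than the paper's hospital data; the Erlang C formula gives the probability that a request must wait, and the loop finds the smallest device count meeting a continuity target:

    ```python
    from math import factorial

    def erlang_c(c, a):
        """Erlang C probability that a request must wait, for offered load
        a = lambda/mu and c parallel devices. Stable only when a < c."""
        if a >= c:
            return 1.0
        top = a ** c / factorial(c) * c / (c - a)
        bottom = sum(a ** k / factorial(k) for k in range(c)) + top
        return top / bottom

    lam, mu = 4.0, 1.5            # requests per hour, completions per device-hour (illustrative)
    a = lam / mu                  # offered load
    target = 0.05                 # accept at most a 5% chance that a request must wait
    c = 1
    while erlang_c(c, a) > target:
        c += 1
    print(c)                      # minimum number of devices meeting the continuity target
    ```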

  10. Quick Fix for Managing Risks

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Under a Phase II SBIR contract, Kennedy and Lumina Decision Systems, Inc., jointly developed the Schedule and Cost Risk Analysis Modeling (SCRAM) system, based on a version of Lumina's flagship software product, Analytica(R). Acclaimed as "the best single decision-analysis program yet produced" by MacWorld magazine, Analytica is a "visual" tool used in decision-making environments worldwide to build, revise, and present business models, minus the time-consuming difficulty commonly associated with spreadsheets. With Analytica as their platform, Kennedy and Lumina created the SCRAM system in response to NASA's need to identify the importance of major delays in Shuttle ground processing, a critical function in project management and process improvement. As part of the SCRAM development project, Lumina designed a version of Analytica called the Analytica Design Engine (ADE) that can be easily incorporated into larger software systems. ADE was commercialized and utilized in many other developments, including web-based decision support.

  11. Structural integrity of engineering composite materials: a cracking good yarn.

    PubMed

    Beaumont, Peter W R; Soutis, Costas

    2016-07-13

    Predicting precisely where a crack will develop in a material under stress, and exactly when in time catastrophic fracture of the component will occur, is one of the oldest unsolved mysteries in the design and building of large-scale engineering structures. Where human life depends upon engineering ingenuity, the burden of testing to prove a 'fracture safe design' is immense. Fitness considerations for long-life implementation of large composite structures include understanding phenomena such as impact, fatigue, creep and stress corrosion cracking that affect reliability, life expectancy and durability of the structure. Structural integrity analysis treats the design and the materials used, determines how components and parts can best be joined, and takes service duty into account. However, there are conflicting aims in the complete design process of designing simultaneously for high efficiency and safety assurance throughout an economically viable lifetime with an acceptable level of risk. This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'. © 2016 The Author(s).

  12. Safety I-II, resilience and antifragility engineering: a debate explained through an accident occurring on a mobile elevating work platform.

    PubMed

    Martinetti, Alberto; Chatzimichailidou, Maria Mikela; Maida, Luisa; van Dongen, Leo

    2018-04-24

    Occupational health and safety (OHS) represents an important field of exploration for the research community: in spite of the growth of technological innovations, the increasing complexity of systems involves critical issues in terms of degradation of the safety levels. In such a situation, new safety management approaches are now mandatory in order to face the safety implications of the current technological evolutions. Along these lines, performing risk-based analysis alone seems not to be enough anymore. The evaluation of robustness, antifragility and resilience of a socio-technical system is now indispensable in order to face unforeseen events. This article will briefly introduce the topics of Safety I and Safety II, resilience engineering and antifragility engineering, explaining correlations, overlapping aspects and synergies. Secondly, the article will discuss the applications of those paradigms to a real accident, highlighting how they can challenge, stimulate and inspire research for improving OHS conditions.

  13. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
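
    A minimal sketch of the kind of probabilistic response calculation NESSUS automates, here reduced to crude Monte Carlo sampling of a simple load-versus-capacity limit state with invented distributions (NESSUS itself couples such sampling, and faster methods such as the advanced mean value method, to finite element and boundary element models):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def limit_state(load, capacity):
        """g = capacity - load; g <= 0 marks failure."""
        return capacity - load

    n = 1_000_000
    load = rng.lognormal(mean=np.log(100.0), sigma=0.25, size=n)    # demand, e.g. applied stress
    capacity = rng.normal(loc=200.0, scale=20.0, size=n)            # resistance, e.g. strength

    failures = limit_state(load, capacity) <= 0.0
    pf = failures.mean()                                            # probability of failure
    print(pf, pf * (1 - pf) / n)                                    # estimate and its sampling variance
    ```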

  14. European Software Engineering Process Group Conference (2nd Annual), EUROPEAN SEPG󈨥. Delegate Material, Tutorials

    DTIC Science & Technology

    1997-06-17

    There is good and bad news with CMMs: the bad news is that process improvement takes time; the good news is that the first benefit is better schedule management. The tutorial material also covers the Personal Software Process (PSP), toolsets for assessment and detailed analysis, and business benefits of software process improvement, including improved management and application development processes and strengthened change management.

  15. Risk analysis of the thermal sterilization process. Analysis of factors affecting the thermal resistance of microorganisms.

    PubMed

    Akterian, S G; Fernandez, P S; Hendrickx, M E; Tobback, P P; Periago, P M; Martinez, A

    1999-03-01

    A risk analysis was applied to experimental heat resistance data. This analysis is an approach for processing experimental thermobacteriological data in order to study the variability of D and z values of target microorganisms as a function of the deviation range of environmental factors, to determine the critical factors, and to specify their critical tolerances. The analysis is based on sets of sensitivity functions applied to a specific case of experimental data on the thermoresistance of Clostridium sporogenes and Bacillus stearothermophilus spores. The effect of the following factors was analyzed: the type of target microorganism; the nature of the heating substrate; pH; temperature; the type of acid employed; and NaCl concentration. The type of target microorganism to be inactivated, the nature of the substrate (reference or real food), and the heating temperature were identified as critical factors, determining about 90% of the variation in microbiological risk. The effect of the type of acid used for the acidification of products and of the NaCl concentration can be assumed to be negligible for the purposes of engineering calculations. The critical non-uniformity in temperature during thermobacteriological studies was set at 0.5%, and the critical tolerances of pH value and NaCl concentration were 5%. Because these results relate to a specific case study, they cannot be directly generalized.
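
    A minimal sketch of the D- and z-value arithmetic underlying such thermoresistance studies, with illustrative parameter values that are assumptions rather than the paper's data:

    ```python
    def d_value_at_temperature(d_ref, z, t, t_ref=121.1):
        """Decimal reduction time at temperature t (degC), given the D value at the
        reference temperature and the z value (degC per tenfold change in D)."""
        return d_ref * 10 ** ((t_ref - t) / z)

    def log_reductions(hold_time_min, d_ref, z, t, t_ref=121.1):
        """Number of decimal reductions achieved by holding at temperature t."""
        return hold_time_min / d_value_at_temperature(d_ref, z, t, t_ref)

    # Illustrative values for a C. sporogenes-like target organism (assumed, not from the paper)
    d_ref, z = 1.0, 10.0          # D at 121.1 degC = 1 min, z = 10 degC
    print(log_reductions(hold_time_min=3.0, d_ref=d_ref, z=z, t=118.0))
    ```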

  16. Fuzzy-logic-based network for complex systems risk assessment: application to ship performance analysis.

    PubMed

    Abou, Seraphin C

    2012-03-01

    In this paper, a new interpretation of intuitionistic fuzzy sets in the advanced framework of the Dempster-Shafer theory of evidence is extended to monitor safety-critical systems' performance. Not only is the proposed approach more effective, but it also takes into account the fuzzy rules that deal with imperfect knowledge/information and, therefore, is different from the classical Takagi-Sugeno fuzzy system, which assumes that the rule (the knowledge) is perfect. We provide an analytical solution to the practical and important problem of the conceptual probabilistic approach for formal ship safety assessment using the fuzzy set theory that involves uncertainties associated with the reliability input data. Thus, the overall safety of the ship engine is investigated as an object of risk analysis using the fuzzy mapping structure, which considers uncertainty and partial truth in the input-output mapping. The proposed method integrates direct evidence of the frame of discernment and is demonstrated through references to examples where fuzzy set models are informative. These simple applications illustrate how to assess the conflict of sensor information fusion for a sufficient cooling power system of vessels under extreme operation conditions. It was found that propulsion engine safety systems are not only a function of many environmental and operation profiles but are also dynamic and complex. Copyright © 2011 Elsevier Ltd. All rights reserved.
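
    A minimal sketch of the Dempster-Shafer evidence-combination step for two sensors monitoring a cooling system; the frame of discernment, mass assignments, and sensor names are hypothetical, and the sketch omits the paper's fuzzy-rule layer:

    ```python
    def dempster_combine(m1, m2):
        """Combine two basic probability assignments over the same frame of
        discernment using Dempster's rule. Focal elements are frozensets."""
        combined, conflict = {}, 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
        if conflict >= 1.0:
            raise ValueError("total conflict; evidence cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    SAFE, DEGRADED = frozenset({"safe"}), frozenset({"degraded"})
    EITHER = SAFE | DEGRADED      # ignorance: mass assigned to the whole frame

    # Hypothetical evidence from two cooling-system sensors
    m_sensor1 = {SAFE: 0.6, DEGRADED: 0.1, EITHER: 0.3}
    m_sensor2 = {SAFE: 0.5, DEGRADED: 0.3, EITHER: 0.2}
    print(dempster_combine(m_sensor1, m_sensor2))
    ```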

  17. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Space Shuttle Main Engine (SSME) has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system; (2) develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert a tremendous amount of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow minimal storage requirements while providing fast signature retrieval, pattern comparison, and identification capabilities; and (3) integrate the nonlinear correlation techniques into the CSTDB data base with a compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an ATMS system of nonlinear and nonstationary spectral analysis software integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbopump families.
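
    As a simple stand-in for the time-frequency processing described above (a standard linear spectrogram, not the contract's nonlinear techniques), the sketch below turns a synthetic vibration-like signal into an image-like time-frequency pattern of the kind a TOPO-style database could store:

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    fs = 10_000                      # sample rate in Hz (illustrative)
    t = np.arange(0, 2.0, 1 / fs)

    # Synthetic stand-in for a turbopump accelerometer channel: a drifting
    # tone plus broadband noise (not real SSME data).
    tone = np.sin(2 * np.pi * (600 + 50 * t) * t)
    signal = tone + 0.5 * np.random.default_rng(0).standard_normal(t.size)

    f, seg_times, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
    # Sxx is a time-frequency "image" of the channel; stacking such images per test
    # gives a compact pattern bank for signature retrieval and comparison.
    print(Sxx.shape)
    ```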

  18. Social engineering: mitigating a stealthy risk.

    PubMed

    Maas, Jos

    2014-01-01

    Can a Healthcare Facility (HCF) be victimized by Social Engineering (SE)? Yes, says the author. If so, what can you do about it? This article explains what Social Engineering is, how it is used, and how to use proactive security to prevent such an attack.

  19. Probabilistic Risk Analysis of Groundwater Related Problems in Subterranean Excavation Sites

    NASA Astrophysics Data System (ADS)

    Sanchez-Vila, X.; Jurado, A.; de Gaspari, F.; Vilarrasa, V.; Bolster, D.; Fernandez-Garcia, D.; Tartakovsky, D. M.

    2009-12-01

    Construction of subterranean excavations in densely populated areas is inherently hazardous. The number of construction sites (e.g., subway lines, railways and highway tunnels) has increased in recent years. These sites can pose risks to workers at the site as well as cause damage to surrounding buildings. The presence of groundwater makes the excavation even more complicated. We develop a probabilistic risk assessment (PRA) model to estimate the likelihood of occurrence of certain risks during subway station construction. While PRA is widely used in many engineering fields, its applications to underground constructions in general, and to underground station construction in particular, are scarce if not nonexistent. This method enables us not only to evaluate the probability of failure, but also to quantify the uncertainty of the different events considered. The risk analysis was carried out using a fault tree analysis that made it possible to study a complex system in a structured and straightforward manner. As an example we consider an underground station for the new subway line in the Barcelona metropolitan area (Línia 9) through the town of Prat de Llobregat in the Llobregat River Delta, which is currently under development. A typical station on the L9 line lies partially between the shallow and the main aquifer. Specifically, it is located in the middle layer, which is made up of silts and clays. By presenting this example we aim to illustrate PRA as an effective methodology for estimating and minimising risks and to demonstrate its utility as a potential tool for decision making.
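
    A minimal sketch of how basic-event probabilities combine through fault tree gates, assuming independent events and invented probabilities (the station study uses a far larger tree and also quantifies the uncertainty of each event):

    ```python
    def or_gate(probs):
        """Probability that at least one independent basic event occurs."""
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    def and_gate(probs):
        """Probability that all independent basic events occur."""
        p_all = 1.0
        for q in probs:
            p_all *= q
        return p_all

    # Hypothetical basic-event probabilities for a station-excavation top event
    p_high_water_table   = 0.30
    p_dewatering_failure = 0.05
    p_wall_defect        = 0.02

    p_uncontrolled_inflow = and_gate([p_high_water_table, p_dewatering_failure])
    p_top_event = or_gate([p_uncontrolled_inflow, p_wall_defect])
    print(p_top_event)
    ```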

  20. Practical Application of PRA as an Integrated Design Tool for Space Systems

    NASA Technical Reports Server (NTRS)

    Kalia, Prince; Shi, Ying; Pair, Robin; Quaney, Virginia; Uhlenbrock, John

    2013-01-01

    This paper presents the application of the first comprehensive Probabilistic Risk Assessment (PRA) during the design phase of a joint NASA/NOAA weather satellite program, the Geostationary Operational Environmental Satellite Series R (GOES-R). GOES-R is the next-generation weather satellite, intended primarily to improve understanding of the weather and help save human lives. PRA has been used at NASA for Human Space Flight for many years. PRA was initially adopted and implemented in the operational phase of manned space flight programs and more recently for the next generation of human space systems. Since its first use at NASA, PRA has become recognized throughout the Agency as a method of assessing complex mission risks as part of an overall approach to assuring safety and mission success throughout project lifecycles. PRA is now included as a requirement during the design phase of both NASA's next-generation manned space vehicles and high-priority robotic missions. The influence of PRA on GOES-R design and operation concepts is discussed in detail. The GOES-R PRA is unique at NASA for its early implementation. It also represents a pioneering effort to integrate risks from both the Spacecraft (SC) and the Ground Segment (GS) to fully assess the probability of achieving mission objectives. PRA analysts were actively involved in system engineering and design engineering to ensure that a comprehensive set of technical risks was correctly identified and properly understood from a design and operations perspective. The analysis included an assessment of SC hardware and software, the SC fault management system, GS hardware and software, common cause failures, human error, natural hazards, solar weather, and infrastructure (such as network and telecommunications failures, and fire). PRA findings directly resulted in design changes to reduce SC risk from micro-meteoroids. PRA results also led to design changes in several SC subsystems, e.g., propulsion, guidance, navigation and control (GNC), communications, mechanisms, and command and data handling (C&DH). The fault tree approach assisted in the development of the fault management system design. Human error analysis, which examined human response to failure, indicated areas where automation could reduce the overall probability of gaps in operation by half. In addition, the PRA brought to light many potential root causes of system disruptions, including earthquakes, inclement weather, solar storms, blackouts, and other extreme conditions not considered in the typical reliability and availability analyses. Ultimately the PRA served to identify potential failures that, when mitigated, resulted in a more robust design, as well as to influence the program's concept of operations. The early and active integration of PRA with system and design engineering provided a well-managed approach for risk assessment that increased reliability and availability, optimized lifecycle costs, and unified the SC and GS developments.

  1. Risk Acceptance Personality Paradigm: How We View What We Don't Know We Don't Know

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2011-01-01

    The purpose of integrated hazard analyses, probabilistic risk assessments, failure modes and effects analyses, fault trees and many other similar tools is to give managers of a program some idea of the risks associated with their program. All risk tools establish a set of undesired events and then try to evaluate the risk to the program by assessing the severity of the undesired event and the likelihood of that event occurring. Some tools provide qualitative results, some provide quantitative results and some do both. However, in the end the program manager and his/her team must decide which risks are acceptable and which are not. Even with a wide array of analysis tools available, risk acceptance is often a controversial and difficult decision making process. And yet, today's space exploration programs are moving toward more risk based design approaches. Thus, risk identification and good risk assessment is becoming even more vital to the engineering development process. This paper explores how known and unknown information influences risk-based decisions by looking at how the various parts of our personalities are affected by what they know and what they don't know. This paper then offers some criteria for consideration when making risk-based decisions.

  2. Engine management during NTRE start up

    NASA Technical Reports Server (NTRS)

    Bulman, Mel; Saltzman, Dave

    1993-01-01

    The topics are presented in viewgraph form and include the following: total engine system management critical to successful nuclear thermal rocket engine (NTRE) start up; NERVA type engine start windows; reactor power control; heterogeneous reactor cooling; propellant feed system dynamics; integrated NTRE start sequence; moderator cooling loop and efficient NTRE starting; analytical simulation and low risk engine development; accurate simulation through dynamic coupling of physical processes; and integrated NTRE and mission performance.

  3. Production of biofuels and biochemicals: in need of an ORACLE.

    PubMed

    Miskovic, Ljubisa; Hatzimanikatis, Vassily

    2010-08-01

    The engineering of cells for the production of fuels and chemicals involves simultaneous optimization of multiple objectives, such as specific productivity, extended substrate range and improved tolerance - all under a great degree of uncertainty. The achievement of these objectives under physiological and process constraints will be impossible without the use of mathematical modeling. However, the limited information and the uncertainty in the available information require new methods for modeling and simulation that will characterize the uncertainty and will quantify, in a statistical sense, the expectations of success of alternative metabolic engineering strategies. We discuss these considerations toward developing a framework for the Optimization and Risk Analysis of Complex Living Entities (ORACLE) - a computational method that integrates available information into a mathematical structure to calculate control coefficients. Copyright 2010 Elsevier Ltd. All rights reserved.

  4. A workshop on developing risk assessment methods for medical use of radioactive material. Volume 2: Supporting documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tortorelli, J.P.

    A workshop was held at the Idaho National Engineering Laboratory, August 16--18, 1994, on the topic of risk assessment for medical devices that use radioactive isotopes. Its purpose was to review past efforts to develop a risk assessment methodology to evaluate these devices, and to develop a program plan and a scoping document for future methodology development. This report contains presentation material and a transcript of the workshop. Participants included experts in the fields of radiation oncology, medical physics, risk assessment, human-error analysis, and human factors. Staff from the US Nuclear Regulatory Commission (NRC) associated with the regulation of medical uses of radioactive materials and with research into risk-assessment methods participated in the workshop. The workshop participants concurred in NRC's intended use of risk assessment as an important technology in the development of regulations for the medical use of radioactive material and encouraged the NRC to proceed rapidly with a pilot study. Specific recommendations are included in the executive summary and the body of this report.

  5. Age patterns of smoking initiation among Kuwait university male students.

    PubMed

    Sugathan, T N; Moody, P M; Bustan, M A; Elgerges, N S

    1998-12-01

    The present study is a detailed evaluation of age at smoking initiation among university male students in Kuwait, based on a random sample of 664 students selected from all students during 1993. The actuarial life table analysis revealed that almost one tenth of the students initiated cigarette smoking between ages 16 and 17, with the rate of initiation increasing rapidly thereafter and reaching 30% by age 20 and almost 50% by the time they celebrate their 24th birthday. The most important environmental risk factor positively associated with smoking initiation was observed to be a history of smoking among siblings, with a relative risk of 1.4. Compared to students of medicine and engineering, the students of other faculties revealed a higher risk of smoking initiation, with an RR = 1.77 for sciences and commerce and 1.61 for other faculties (arts, law, education and Islamic studies). The analysis revealed a rising generational trend in cigarette smoking. There is a need for reduction of this trend among young adults in Kuwait and throughout other countries in the region.

  6. Robotic Mars Sample Return: Risk Assessment and Analysis Report

    NASA Technical Reports Server (NTRS)

    Lalk, Thomas R.; Spence, Cliff A.

    2003-01-01

    A comparison of the risk associated with two alternative scenarios for a robotic Mars sample return mission was conducted. Two alternative mission scenarios were identified, the Jet Propulsion Lab (JPL) reference Mission and a mission proposed by Johnson Space Center (JSC). The JPL mission was characterized by two landers and an orbiter, and a Mars orbit rendezvous to retrieve the samples. The JSC mission (Direct/SEP) involves a solar electric propulsion (SEP) return to earth followed by a rendezvous with the space shuttle in earth orbit. A qualitative risk assessment to identify and characterize the risks, and a risk analysis to quantify the risks were conducted on these missions. Technical descriptions of the competing scenarios were developed in conjunction with NASA engineers and the sequence of events for each candidate mission was developed. Risk distributions associated with individual and combinations of events were consolidated using event tree analysis in conjunction with Monte Carlo techniques to develop probabilities of mission success for each of the various alternatives. The results were the probability of success of various end states for each candidate scenario. These end states ranged from complete success through various levels of partial success to complete failure. Overall probability of success for the Direct/SEP mission was determined to be 66% for the return of at least one sample and 58% for the JPL mission for the return of at least one sample cache. Values were also determined for intermediate events and end states as well as for the probability of violation of planetary protection. Overall mission planetary protection event probabilities of occurrence were determined to be 0.002% and 1.3% for the Direct/SEP and JPL Reference missions respectively.
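
    A minimal sketch of the event tree plus Monte Carlo idea used to roll individual event risks up into end-state probabilities; the event names and probabilities are invented and are not the values from the JPL or JSC assessments:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    # Hypothetical per-event success probabilities for a simplified sample-return sequence
    p_launch, p_landing, p_ascent, p_rendezvous, p_earth_return = 0.98, 0.90, 0.92, 0.95, 0.97

    launch       = rng.random(n) < p_launch
    landing      = launch      & (rng.random(n) < p_landing)
    ascent       = landing     & (rng.random(n) < p_ascent)
    rendezvous   = ascent      & (rng.random(n) < p_rendezvous)
    full_success = rendezvous  & (rng.random(n) < p_earth_return)

    print("sample collected :", landing.mean())       # a partial-success end state
    print("sample returned  :", full_success.mean())  # the complete-success end state
    ```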

  7. A risk analysis of winter navigation in Finnish sea areas.

    PubMed

    Valdez Banda, Osiris A; Goerlandt, Floris; Montewka, Jakub; Kujala, Pentti

    2015-06-01

    Winter navigation is a complex but common operation in north-European sea areas. In Finnish waters, the smooth flow of maritime traffic and safety of vessel navigation during the winter period are managed through the Finnish-Swedish winter navigation system (FSWNS). This article focuses on accident risks in winter navigation operations, beginning with a brief outline of the FSWNS. The study analyses a hazard identification model of winter navigation and reviews accident data extracted from four winter periods. These are adopted as a basis for visualizing the risks in winter navigation operations. The results reveal that experts consider ship independent navigation in ice conditions the most complex navigational operation, which is confirmed by accident data analysis showing that the operation constitutes the type of navigation with the highest number of accidents reported. The severity of the accidents during winter navigation is mainly categorized as less serious. Collision is the most typical accident in ice navigation and general cargo the type of vessel most frequently involved in these accidents. Consolidated ice, ice ridges and ice thickness between 15 and 40cm represent the most common ice conditions in which accidents occur. Thus, the analysis presented in this article establishes the key elements for identifying the operation types which would benefit most from further safety engineering and safety or risk management development. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A simple method of calculating Stirling engines for engine design optimization

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1978-01-01

    A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen-type regenerator. Generally, the equations presented describe power generation, power consumption, and heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design. The method itemizes the power and heat losses for intelligent engine optimization. The results of the analysis of the GPU-3 Stirling engine are compared with more complicated engine analyses and with engine measurements.
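
    A first-order sizing sketch in the same spirit of simple Stirling calculation, using the Beale correlation rather than the paper's itemized power- and heat-loss method; the operating-point numbers are illustrative, not GPU-3 measurements:

    ```python
    def beale_power(mean_pressure_pa, swept_volume_m3, frequency_hz, beale_number=0.15):
        """First-order Stirling engine power estimate from the Beale correlation:
        P ~ Bn * p_mean * V_swept * f. Useful only for early sizing, not for a
        loss-by-loss itemisation."""
        return beale_number * mean_pressure_pa * swept_volume_m3 * frequency_hz

    # Roughly GPU-3-like operating point (illustrative numbers, not measured data)
    print(beale_power(mean_pressure_pa=4.1e6, swept_volume_m3=120e-6, frequency_hz=25.0))  # ~1.8 kW
    ```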

  9. Research and application of borehole structure optimization based on pre-drill risk assessment

    NASA Astrophysics Data System (ADS)

    Zhang, Guohui; Liu, Xinyun; Chenrong; Hugui; Yu, Wenhua; Sheng, Yanan; Guan, Zhichuan

    2017-11-01

    Borehole structure design based on pre-drill risk assessment, which considers the risks related to the drilling operation, is a precondition for a safe and smooth drilling operation. Major risks of drilling operations include lost circulation, blowout, sidewall collapse, sticking, and failure of drilling tools. In this study, data from neighboring wells were used to calculate a credible formation pressure profile for the target well; the borehole structure design for the target well was then assessed using pre-drill risk assessment to predict engineering risks before drilling. Finally, the prediction results were used to optimize the borehole structure design to prevent such drilling risks. The newly developed technique provides a scientific basis for lowering the probability and frequency of drilling engineering risks and shortening the time required to drill a well, which is of great significance for safe and high-efficiency drilling.

  10. Hyper-X Flight Engine Ground Testing for X-43 Flight Risk Reduction

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Rock, Kenneth E.; Ruf, Edward G.; Witte, David W.; Andrews, Earl H., Jr.

    2001-01-01

    Airframe-integrated scramjet engine testing has been completed at Mach 7 flight conditions in the NASA Langley 8-Foot High Temperature Tunnel as part of the NASA Hyper-X program. This test provided engine performance and operability data, as well as design and database verification, for the Mach 7 flight tests of the Hyper-X research vehicle (X-43), which will provide the first-ever airframe-integrated scramjet data in flight. The Hyper-X Flight Engine, a duplicate Mach 7 X-43 scramjet engine, was mounted on an airframe structure that duplicated the entire three-dimensional propulsion flowpath from the vehicle leading edge to the vehicle trailing edge. This model was also tested to verify and validate the complete flight-like engine system. This paper describes the subsystems that were subjected to flight-like conditions and presents supporting data. The results from this test help to reduce risk for the Mach 7 flights of the X-43.

  11. Emotional engineers: toward morally responsible design.

    PubMed

    Roeser, Sabine

    2012-03-01

    Engineers are normally seen as the archetype of people who make decisions in a rational and quantitative way. However, technological design is not value neutral. The way a technology is designed determines its possibilities, which can, for better or for worse, have consequences for human wellbeing. This leads various scholars to the claim that engineers should explicitly take into account ethical considerations. They are at the cradle of new technological developments and can thereby influence the possible risks and benefits more directly than anybody else. I have argued elsewhere that emotions are an indispensable source of ethical insight into ethical aspects of risk. In this paper I will argue that this means that engineers should also include emotional reflection into their work. This requires a new understanding of the competencies of engineers: they should not be unemotional calculators; quite the opposite, they should work to cultivate their moral emotions and sensitivity, in order to be engaged in morally responsible engineering. © The Author(s) 2010. This article is published with open access at Springerlink.com

  12. Computational Fluid Dynamics (CFD) Analysis for the Reduction of Impeller Discharge Flow Distortion

    NASA Technical Reports Server (NTRS)

    Garcia, R.; McConnaughey, P. K.; Eastland, A.

    1993-01-01

    The use of Computational Fluid Dynamics (CFD) in the design and analysis of high performance rocket engine pumps has increased in recent years. This increase has been aided by the activities of the Marshall Space Flight Center (MSFC) Pump Stage Technology Team (PSTT). The team's goals include assessing the accuracy and efficiency of several methodologies and then applying the appropriate methodology(s) to understand and improve the flow inside a pump. The PSTT's objectives, team membership, and past activities are discussed in Garcia [1] and Garcia [2]. The PSTT is one of three teams that form the NASA/MSFC CFD Consortium for Applications in Propulsion Technology (McConnaughey [3]). The PSTT first applied CFD in the design of the baseline consortium impeller. This impeller was designed for the Space Transportation Main Engine's (STME) fuel turbopump. The STME fuel pump was designed with three impeller stages because a two-stage design was deemed to pose a high developmental risk. The PSTT used CFD to design an impeller whose performance allowed for a two-stage STME fuel pump design. The availability of this design would have led to a reduction in parts, weight, and cost had the STME reached production. One sample of the baseline consortium impeller was manufactured and tested in a water rig. The test data showed that the impeller performance was as predicted and that a two-stage design for the STME fuel pump was possible with minimal risk. The test data also verified another CFD-predicted characteristic of the design that was not desirable. The classical 'jet-wake' pattern at the impeller discharge was strengthened by two aspects of the design: by the high head coefficient necessary for the required pressure rise and by the relatively few impeller exit blades, 12, necessary to reduce manufacturing cost. This 'jet-wake' pattern produces an unsteady loading on the diffuser vanes and has, in past rocket engine programs, led to diffuser structural failure. In industrial applications, this problem is typically avoided by increasing the space between the impeller and the diffuser to allow the dissipation of this pattern and, hence, the reduction of diffuser vane unsteady loading. This approach leads to small performance losses and, more importantly in rocket engine applications, to significant increases in the pump's size and weight. This latter consideration typically makes this approach unacceptable in high performance rocket engines.

  13. Humanitarian engineering in the engineering curriculum

    NASA Astrophysics Data System (ADS)

    Vandersteen, Jonathan Daniel James

    There are many opportunities to use engineering skills to improve the conditions for marginalized communities, but our current engineering education praxis does not instruct on how engineering can be a force for human development. In a time of great inequality and exploitation, the desire to work with the impoverished is prevalent, and it has been proposed to adjust the engineering curriculum to include a larger focus on human needs. This proposed curriculum philosophy is called humanitarian engineering. Professional engineers have played an important role in the modern history of power, wealth, economic development, war, and industrialization; they have also contributed to infrastructure, sanitation, and energy sources necessary to meet human need. Engineers are currently at an important point in time when they must look back on their history in order to be more clear about how to move forward. The changing role of the engineer in history puts into context the call for a more balanced, community-centred engineering curriculum. Qualitative, phenomenographic research was conducted in order to understand the need, opportunity, benefits, and limitations of a proposed humanitarian engineering curriculum. The potential role of the engineer in marginalized communities and details regarding what a humanitarian engineering program could look like were also investigated. Thirty-two semi-structured research interviews were conducted in Canada and Ghana in order to collect a pool of understanding before a phenomenographic analysis resulted in five distinct outcome spaces. The data suggests that an effective curriculum design will include teaching technical skills in conjunction with instructing about issues of social justice, social location, cultural awareness, root causes of marginalization, a broader understanding of technology, and unlearning many elements about the role of the engineer and the dominant economic/political ideology. Cross-cultural engineering development placements are a valuable pedagogical experience but risk benefiting the student disproportionately more than the receiving community. Local development placements offer different rewards and liabilities. To conclude, a major adjustment in engineering curriculum to address human development is appropriate and this new curriculum should include both local and international placements. However, the great force of altruism must be directed towards creating meaningful and lasting change.

  14. GO/NO-GO - When is medical hazard mitigation acceptable for launch?

    NASA Technical Reports Server (NTRS)

    Hamilton, Douglas R.; Polk, James D.

    2005-01-01

    Medical support of spaceflight missions is composed of complex tasks and decisions dedicated to maintaining the health and performance of the crew and the completion of mission objectives. Spacecraft are among the most complex vehicles built by humans, and are built to very rigorous design specifications. In the course of a Flight Readiness Review (FRR) or a mission itself, the flight surgeon must be able to understand the impact of hazards and risks that may not be completely mitigated by design alone. Some hazards are not mitigated because they are never actually identified. When a hazard is identified, it must be reduced or waived. Hazards that cannot be designed out of the vehicle or mission are usually mitigated through other means to bring the residual risk to an acceptable level. This is possible in most engineered systems because failure modes are usually predictable and analysis can include taking these systems to failure. Medical support of space missions is complicated by the inability of flight surgeons to provide "exact" hazard and risk numbers to the NASA engineering community. Taking humans to failure is not an option. Furthermore, medical dogma is mostly comprised of "medical prevention" strategies that mitigate risk by examining the behaviour of a cohort of humans similar to astronauts. Unfortunately, this approach does not lend itself well to predicting the effect of a hazard in the unique environment of space. This presentation will discuss how Medical Operations uses an evidence-based approach to decide if hazard mitigation strategies are adequate to reduce mission risk to acceptable levels. Case studies to be discussed will include: 1. risk of electrocution during EVA; 2. risk of a cardiac event during long- and short-duration missions; 3. degraded cabin environmental monitoring on the ISS. Learning Objectives: 1.) The audience will understand the challenges of mitigating medical risk caused by nominal and off-nominal mission events. 2.) The audience will understand the process by which medical hazards are identified and mitigated before launch. 3.) The audience will understand the roles and responsibilities of all the other flight control positions in participating in the process of reducing hazards and reducing medical risk to an acceptable level.

  15. Proceedings of the 1998 diesel engine emissions reduction workshop [DEER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This workshop was held July 6--9, 1998 in Castine, Maine. The purpose of this workshop was to provide a multidisciplinary forum for exchange of state-of-the-art information on reduction of diesel engine emissions. Attention was focused on the following: agency/organization concerns on engine emissions; diesel engine issues and challenges; health risks from diesel engines emissions; fuels and lubrication technologies; non-thermal plasma and urea after-treatment technologies; and diesel engine technologies for emission reduction 1 and 2.

  16. Variable Cycle Engine Technology Program Planning and Definition Study

    NASA Technical Reports Server (NTRS)

    Westmoreland, J. S.; Stern, A. M.

    1978-01-01

    The variable stream control engine, VSCE-502B, was selected as the base engine, with the inverted flow engine concept selected as a backup. Critical component technologies were identified, and technology programs were formulated. Several engine configurations were defined on a preliminary basis to serve as demonstration vehicles for the various technologies. The different configurations represent compromises in cost, technical risk, and technology return. Plans for possible variable cycle engine technology programs were formulated by synthesizing the technology requirements with the different demonstrator configurations.

  17. COBRA System Engineering Processes to Achieve SLI Strategic Goals

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2003-01-01

    The COBRA Prototype Main Engine Development Project was an endeavor conducted as a joint venture between Pratt & Whitney and Aerojet to conduct risk reduction in LOX/LH2 main engine technology for the NASA Space Launch Initiative (SLI). During the seventeen months of the project (April 2001 to September 2002), approximately seventy reviews were conducted, beginning with the Engine Systems Requirements Review (SRR) and ending with the Engine Systems Interim Design Review (IDR). This paper discusses some of the system engineering practices used to support the reviews and the overall engine development effort.

  18. Comparison of the Framingham Risk Score, UKPDS Risk Engine, and SCORE for Predicting Carotid Atherosclerosis and Peripheral Arterial Disease in Korean Type 2 Diabetic Patients.

    PubMed

    Ahn, Hye-Ran; Shin, Min-Ho; Yun, Woo-Jun; Kim, Hye-Yeon; Lee, Young-Hoon; Kweon, Sun-Seog; Rhee, Jung-Ae; Choi, Jin-Su; Choi, Seong-Woo

    2011-03-01

    To compare the predictability of the Framingham Risk Score (FRS), the United Kingdom Prospective Diabetes Study (UKPDS) risk engine, and the Systematic Coronary Risk Evaluation (SCORE) for carotid atherosclerosis and peripheral arterial disease in Korean type 2 diabetic patients. Among 1,275 registered type 2 diabetes patients in the health center, 621 subjects with type 2 diabetes participated in the study. Well-trained examiners measured the carotid intima-media thickness (IMT), carotid plaque, and ankle brachial index (ABI). Each subject's 10-year risk of coronary heart disease was calculated according to the FRS, UKPDS, and SCORE risk scores. These three risk scores were compared using the areas under the curve (AUC). The odds ratios (ORs) of all risk scores increased as the quartiles increased for plaque, IMT, and ABI. For plaque and IMT, the UKPDS risk score provided the highest OR (95% confidence interval), at 3.82 (2.36, 6.17) and 6.21 (3.37, 11.45), respectively. For ABI, the SCORE risk estimation provided the highest OR, at 7.41 (3.20, 17.18). However, no significant difference was detected for plaque, IMT, or ABI (P = 0.839, 0.313, and 0.113, respectively) when the AUCs of the three risk scores were compared. When we graphed the kernel density distributions of these three risk scores, the UKPDS scores were distributed at higher values than FRS and SCORE. No significant difference was observed when comparing the predictability of the FRS, UKPDS risk engine, and SCORE risk estimation for carotid atherosclerosis and peripheral arterial disease in Korean type 2 diabetic patients.
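
    A minimal sketch of the AUC-comparison step, using synthetic stand-in scores and outcomes rather than the Korean cohort data (scikit-learn is assumed to be available):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 621

    # Synthetic stand-ins for the three 10-year risk scores and a binary
    # carotid-plaque outcome; real inputs would come from the cohort data.
    outcome = rng.integers(0, 2, size=n)
    scores = {
        "FRS":   rng.random(n) + 0.3 * outcome,
        "UKPDS": rng.random(n) + 0.4 * outcome,
        "SCORE": rng.random(n) + 0.2 * outcome,
    }

    for name, score in scores.items():
        print(name, round(roc_auc_score(outcome, score), 3))   # AUC for each risk score
    ```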

  19. Use of focus groups to develop methods to communicate cardiovascular disease risk and potential for risk reduction to people with type 2 diabetes.

    PubMed

    Price, Hermione C; Dudley, Christina; Barrow, Beryl; Kennedy, Ian; Griffin, Simon J; Holman, Rury R

    2009-10-01

    People need to perceive a risk in order to build an intention-to-change behaviour yet our ability to interpret information about risk is highly variable. We aimed to use a user-centred design process to develop an animated interface for the UK Prospective Diabetes Study (UKPDS) Risk Engine to illustrate cardiovascular disease (CVD) risk and the potential to reduce this risk. In addition, we sought to use the same approach to develop a brief lifestyle advice intervention. Three focus groups were held. Participants were provided with examples of materials used to communicate CVD risk and a leaflet containing a draft brief lifestyle advice intervention and considered their potential to increase motivation-to-change behaviours including diet, physical activity, and smoking in order to reduce CVD risk. Discussions were tape-recorded, transcribed and coded and recurring themes sought. Sixty-two percent of participants were male, mean age was 66 years (range = 47-76 years) and median age at leaving full-time education was 18 years (range = 15-40 years). Sixteen had type 2 diabetes and none had a prior history of CVD. Recurring themes from focus group discussions included the following: being less numerate is common, CVD risk reduction is important and a clear visual representation aids comprehension. A simple animated interface of the UKPDS Risk Engine to illustrate CVD risk and the potential for reducing this risk has been developed for use as a motivational tool, along with a brief lifestyle advice intervention. Future work will investigate whether use of this interactive version of the UKPDS Risk Engine and brief lifestyle advice is associated with increased behavioural intentions and changes in health behaviours designed to reduce CVD risk.

  20. Continuous Risk Management: An Overview

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hammer, Theodore F.

    1999-01-01

    Software risk management is important because it helps avoid disasters, rework, and overkill, but more importantly because it stimulates win-win situations. The objectives of software risk management are to identify, address, and eliminate software risk items before they become threats to success or major sources of rework. In general, good project managers are also good managers of risk. It makes good business sense for all software development projects to incorporate risk management as part of project management. The Software Assurance Technology Center (SATC) at NASA GSFC has been tasked with the responsibility for developing and teaching a systems level course for risk management that provides information on how to implement risk management. The course was developed in conjunction with the Software Engineering Institute at Carnegie Mellon University, then tailored to the NASA systems community. This is an introductory tutorial to continuous risk management based on this course. The rationale for continuous risk management and how it is incorporated into project management are discussed. The risk management structure of six functions is discussed in sufficient depth for managers to understand what is involved in risk management and how it is implemented. These functions include: (1) Identify the risks in a specific format; (2) Analyze the risk probability, impact/severity, and timeframe; (3) Plan the approach; (4) Track the risk through data compilation and analysis; (5) Control and monitor the risk; (6) Communicate and document the process and decisions.
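
    The six functions listed above lend themselves to a very small worked example. The sketch below is a minimal, assumed data structure (not the SATC/SEI course material): it captures the identify and analyze attributes named in the tutorial, namely probability, impact/severity, and timeframe, and ranks risks by a simple exposure score to support planning, tracking, and control.

```python
# Minimal sketch (field names and scales are assumptions, not the SATC/SEI
# format): a risk record with the attributes named in the abstract, plus a
# simple probability x impact exposure score used to prioritize risks.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    statement: str                 # "condition -> consequence", stated in a specific format
    probability: float             # 0.0 - 1.0, likelihood of occurrence
    impact: int                    # 1 (minor) - 5 (severe)
    timeframe: date                # when the risk could materialize
    mitigation: str = ""           # planned approach
    status_log: list = field(default_factory=list)  # tracking data over time

    @property
    def exposure(self) -> float:
        """Probability x impact, a common way to rank risks for attention."""
        return self.probability * self.impact

register = [
    Risk("Vendor slips delivery -> integration test starts late", 0.4, 4, date(2025, 6, 1)),
    Risk("Requirements churn -> rework of interface code", 0.7, 2, date(2025, 3, 15)),
]
for r in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{r.exposure:4.1f}  {r.statement}")
```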

  1. RiskLab - a joint Teaching Lab on Hazard and Risk Management

    NASA Astrophysics Data System (ADS)

    Baruffini, Mi.; Baruffini, Mo.; Thuering, M.

    2009-04-01

    Natural disasters are expected to increase in the future because of climatic changes that strongly affect environmental, social, and economic systems. For this reason, and because resources are limited, governments require analytical risk analysis to support better mitigation planning. Risk analysis is a process to determine the nature and extent of risk by estimating potential hazards and evaluating existing conditions of vulnerability that could pose a potential threat or harm to people, property, livelihoods, and the environment. This process has become a generally accepted approach for the assessment of cost-benefit scenarios; originating in the field of technical risks, it has been applied to natural hazards in Switzerland for several years. Starting from these premises, "Risk Lab", a joint collaboration between the Institute of Earth Sciences of the University of Applied Sciences of Southern Switzerland and the Institute for Economic Research of the University of Lugano, was started in 2006 with the aim of becoming a competence centre for risk analysis and evaluation. The main issue studied by the lab is the question "What security at what price?", and its activities follow the philosophy of integral risk management as proposed by PLANAT, which defines the process as a cycle containing different, interrelated phases. The final aim is to shift the way the public and technical professionals think about risk, from "defending against danger" to "being aware of risks", through an academic course addressed specifically to young people. The laboratory's most important activity is in fact a degree course, offered both to engineering and architecture students of the University of Applied Sciences of Southern Switzerland and to economics students of the University of Lugano. The course is structured in two main parts: an introductory, theoretical part composed of classroom lessons, in which the main aspects of natural hazards, risk perception and evaluation, and risk management are presented and analyzed, and a practical part in which students learn specific statistical methods and test and use technical software. Special importance is given to seminars held by experts or members of civil protection and risk management institutes. Excursions are often organized to observe and study practical cases directly (e.g., the city of Locarno and the Lake Maggiore inundations). The course follows a "classical" structure (it is mainly held in a classroom or a computer lab), but students can also use a dedicated web portal, powered by "e.coursers", the official USI/SUPSI Learning Management System, where they can find materials and documents about natural hazards and risk management. The main pedagogical value is that students attend a course entirely devoted to natural and man-made hazards and risk, allowing them to bring together geological, spatial planning, and economic issues and to face real case studies in a challenging and holistic environment. The final aim of the course is to provide students with a useful, integrated "toolbox" for coping with and resolving the pressing problems created by the increasing vulnerability and exposure of present-day society.
The course has now reached its third academic year and the initial results are encouraging: beyond the knowledge and expertise acquired, the graduates, most of whom now work in engineering firms or private companies, have shown that they have developed a mindset oriented toward understanding and managing risk. REFERENCES: PLANAT, HTTP://WWW.CENAT.CH/INDEX.PHP?USERHASH=79598753&L=D&NAVID=154; ECOURSES, HTTP://CORSI.ELEARNINGLAB.ORG/; NAHRIS, HTTP://WWW.NAHRIS.CH/

  2. Exposure assessment and risk management of engineered nanoparticles: Investigation in semiconductor wafer processing

    NASA Astrophysics Data System (ADS)

    Shepard, Michele N.

    Engineered nanomaterials (ENMs) are currently used in hundreds of commercial products and industrial processes, with more applications being investigated. Nanomaterials have unique properties that differ from bulk materials. While these properties may enable technological advancements, the potential risks of ENMs to people and the environment are not yet fully understood. Certain low solubility nanoparticles are more toxic than their bulk material, such that existing occupational exposure limits may not be sufficiently protective for workers. Risk assessments are currently challenging due to gaps in data on the numerous emerging materials and applications as well as method uncertainties and limitations. Chemical mechanical planarization (CMP) processes with engineered nanoparticle abrasives are used for research and commercial manufacturing applications in the semiconductor and related industries. Despite growing use, no published studies addressed occupational exposures to nanoparticles associated with CMP or risk assessment and management practices for these scenarios. Additional studies are needed to evaluate potential sources of workplace exposure or emission, as well as to help test and refine assessment methods. This research was conducted to: identify the lifecycle stages and potential exposure sources for ENMs in CMP processes; characterize worker exposure; determine recommended engineering controls and compare risk assessment models. The study included workplace air and surface sampling and an evaluation of qualitative risk banding approaches. Exposure assessment results indicated the potential for worker contact with ENMs on workplace surfaces but did not identify nanoparticles readily dispersed in air during work tasks. Some increases in respirable particle concentrations were identified, but not consistently. Measured aerosol concentrations by number and mass were well below current reference values for poorly soluble low toxicity nanoparticles. From application and evaluation of qualitative risk assessment approaches, differences in control banding models and results were identified, although output generally agreed with conclusions from air sampling as to whether an upgrade in site engineering controls was recommended. This research helped to improve understanding of potential worker exposures to ENMs in CMP processes, as well as the methods for risk assessment and management of metal oxide nanoparticles in occupational environments.

  3. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Doering, Kimberly B; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and to enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. For NASA's SLS ABEDRR procurement, Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the objectives of this work are to demonstrate combustion stability and measure performance of a 500,000 lbf class Oxidizer-Rich Staged Combustion (ORSC) cycle main injector. A trade study was completed to investigate the feasibility, cost effectiveness, and technical maturity of a domestically produced Atlas V engine that could also potentially satisfy NASA SLS payload-to-orbit requirements via an advanced booster application. Engine physical dimensions and performance parameters resulting from this study provide the system level requirements for the ORSC risk reduction test article. The test article is scheduled to complete critical design review this fall and begin testing in 2017. Dynetics has also designed, developed, and built innovative tank and structure assemblies using friction stir welding to leverage recent NASA investments in manufacturing tools, facilities, and processes, significantly reducing development and recurring costs. The full-scale cryotank assembly was used to verify the structural design and prove affordable processes. Dynetics performed hydrostatic and cryothermal proof tests on the assembly to verify that it meets performance requirements. This paper will discuss the ABEDRR engine task and structures task achievements to date and the remaining effort through the end of the contract.

  4. Root Source Analysis/ValuStream[Trade Mark] - A Methodology for Identifying and Managing Risks

    NASA Technical Reports Server (NTRS)

    Brown, Richard Lee

    2008-01-01

    Root Source Analysis (RoSA) is a systems engineering methodology that has been developed at NASA over the past five years. It is designed to reduce costs, schedule, and technical risks by systematically examining critical assumptions and the state of the knowledge needed to bring to fruition the products that satisfy mission-driven requirements, as defined for each element of the Work (or Product) Breakdown Structure (WBS or PBS). This methodology is sometimes referred to as the ValuStream method, as inherent in the process is the linking and prioritizing of uncertainties arising from knowledge shortfalls directly to the customer's mission-driven requirements. RoSA and ValuStream are synonymous terms. RoSA is not simply an alternate or improved method for identifying risks. It represents a paradigm shift. The emphasis is placed on identifying very specific knowledge shortfalls and assumptions that are the root sources of the risk (the why), rather than on assessing the WBS product(s) themselves (the what). In so doing, RoSA looks forward to anticipate, identify, and prioritize knowledge shortfalls and assumptions that are likely to create significant uncertainties/risks (as compared to Root Cause Analysis, which is most often used to look back to discover what was not known, or was assumed, that caused the failure). Experience indicates that RoSA, with its primary focus on assumptions and the state of the underlying knowledge needed to define, design, build, verify, and operate the products, can identify critical risks that historically have been missed by the usual approaches (i.e., design review process and classical risk identification methods). Further, the methodology answers four critical questions for decision makers and risk managers: 1. What's been included? 2. What's been left out? 3. How has it been validated? 4. Has the real source of the uncertainty/risk been identified, i.e., is the perceived problem the real problem? Users of the RoSA methodology have characterized it as a true bottom-up risk assessment.

  5. Use of a systematic risk analysis method to improve safety in the production of paediatric parenteral nutrition solutions

    PubMed Central

    Bonnabry, P; Cingria, L; Sadeghipour, F; Ing, H; Fonzo-Christe, C; Pfister, R

    2005-01-01

    Background: Until recently, the preparation of paediatric parenteral nutrition formulations in our institution included re-transcription and manual compounding of the mixture. Although no significant clinical problems have occurred, re-engineering of this high risk activity was undertaken to improve its safety. Several changes have been implemented including new prescription software, direct recording on a server, automatic printing of the labels, and creation of a file used to pilot a BAXA MM 12 automatic compounder. The objectives of this study were to compare the risks associated with the old and new processes, to quantify the improved safety with the new process, and to identify the major residual risks. Methods: A failure modes, effects, and criticality analysis (FMECA) was performed by a multidisciplinary team. A cause-effect diagram was built, the failure modes were defined, and the criticality index (CI) was determined for each of them on the basis of the likelihood of occurrence, the severity of the potential effect, and the detection probability. The CIs for each failure mode were compared for the old and new processes and the risk reduction was quantified. Results: The sum of the CIs of all 18 identified failure modes was 3415 for the old process and 1397 for the new (reduction of 59%). The new process reduced the CIs of the different failure modes by a mean factor of 7. The CI was smaller with the new process for 15 failure modes, unchanged for two, and slightly increased for one. The greatest reduction (by a factor of 36) concerned re-transcription errors, followed by readability problems (by a factor of 30) and chemical cross contamination (by a factor of 10). The most critical steps in the new process were labelling mistakes (CI 315, maximum 810), failure to detect a dosage or product mistake (CI 288), failure to detect a typing error during the prescription (CI 175), and microbial contamination (CI 126). Conclusions: Modification of the process resulted in a significant risk reduction as shown by risk analysis. Residual failure opportunities were also quantified, allowing additional actions to be taken to reduce the risk of labelling mistakes. This study illustrates the usefulness of prospective risk analysis methods in healthcare processes. More systematic use of risk analysis is needed to guide continuous safety improvement of high risk activities. PMID:15805453

  6. Use of a systematic risk analysis method to improve safety in the production of paediatric parenteral nutrition solutions.

    PubMed

    Bonnabry, P; Cingria, L; Sadeghipour, F; Ing, H; Fonzo-Christe, C; Pfister, R E

    2005-04-01

    Until recently, the preparation of paediatric parenteral nutrition formulations in our institution included re-transcription and manual compounding of the mixture. Although no significant clinical problems have occurred, re-engineering of this high risk activity was undertaken to improve its safety. Several changes have been implemented including new prescription software, direct recording on a server, automatic printing of the labels, and creation of a file used to pilot a BAXA MM 12 automatic compounder. The objectives of this study were to compare the risks associated with the old and new processes, to quantify the improved safety with the new process, and to identify the major residual risks. A failure modes, effects, and criticality analysis (FMECA) was performed by a multidisciplinary team. A cause-effect diagram was built, the failure modes were defined, and the criticality index (CI) was determined for each of them on the basis of the likelihood of occurrence, the severity of the potential effect, and the detection probability. The CIs for each failure mode were compared for the old and new processes and the risk reduction was quantified. The sum of the CIs of all 18 identified failure modes was 3415 for the old process and 1397 for the new (reduction of 59%). The new process reduced the CIs of the different failure modes by a mean factor of 7. The CI was smaller with the new process for 15 failure modes, unchanged for two, and slightly increased for one. The greatest reduction (by a factor of 36) concerned re-transcription errors, followed by readability problems (by a factor of 30) and chemical cross contamination (by a factor of 10). The most critical steps in the new process were labelling mistakes (CI 315, maximum 810), failure to detect a dosage or product mistake (CI 288), failure to detect a typing error during the prescription (CI 175), and microbial contamination (CI 126). Modification of the process resulted in a significant risk reduction as shown by risk analysis. Residual failure opportunities were also quantified, allowing additional actions to be taken to reduce the risk of labelling mistakes. This study illustrates the usefulness of prospective risk analysis methods in healthcare processes. More systematic use of risk analysis is needed to guide continuous safety improvement of high risk activities.
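
    The criticality index arithmetic described in the two FMECA records above can be sketched briefly. The scales and example failure modes below are assumptions for illustration (the abstracts quote only selected CI values, with a stated maximum of 810); the convention shown, CI = occurrence x severity x detectability, is the common FMECA form and is not necessarily the exact rule used in the cited study.

```python
# Hedged FMECA sketch: criticality index (CI) as the product of ordinal scores
# for occurrence, severity, and detectability (higher = worse). The scales and
# the example failure modes are illustrative, not the study's actual scores.
failure_modes = {
    # name: (occurrence 1-10, severity 1-9, detectability 1-9)
    "re-transcription error (old process)": (6, 7, 8),
    "labelling mistake":                    (5, 7, 9),
    "microbial contamination":              (2, 9, 7),
}

def criticality(occurrence: int, severity: int, detectability: int) -> int:
    return occurrence * severity * detectability

ranked = sorted(failure_modes.items(), key=lambda kv: criticality(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"CI={criticality(*scores):4d}  {name}")
print("Sum of CIs for this (partial) list:",
      sum(criticality(*v) for v in failure_modes.values()))
```

    Summing the CIs of all failure modes, as the study does (3415 for the old process versus 1397 for the new), gives a single figure for comparing two process designs.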

  7. Analyzing system safety in lithium-ion grid energy storage

    DOE PAGES

    Rosewater, David; Williams, Adam

    2015-10-08

    As grid energy storage systems become more complex, it grows more difficult to design them for safe operation. This paper first reviews the properties of lithium-ion batteries that can produce hazards in grid-scale systems. Then the conventional safety engineering technique Probabilistic Risk Assessment (PRA) is reviewed to identify its limitations in complex systems. To address this gap, new research is presented on the application of Systems-Theoretic Process Analysis (STPA) to a lithium-ion battery based grid energy storage system. STPA is anticipated to fill the gaps recognized in PRA for designing complex systems and hence be more effective or less costly to use during safety engineering. It was observed that STPA is able to capture causal scenarios for accidents not identified using PRA. Additionally, STPA enabled a more rational assessment of uncertainty (all that is not known), thereby promoting a healthy skepticism of design assumptions. Lastly, we conclude that STPA may indeed be more cost effective than PRA for safety engineering in lithium-ion battery systems. However, further research is needed to determine if this approach actually reduces safety engineering costs in development, or improves industry safety standards.
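
    To make the STPA step concrete, the following rough sketch enumerates candidate unsafe control actions (UCAs) by crossing a few control actions from a hypothetical battery-storage control structure with the four standard UCA categories. The control actions, controllers, and hazard examples are assumptions and are not taken from the paper.

```python
# Rough STPA sketch (not the paper's analysis): generate candidate Unsafe
# Control Actions by crossing assumed control actions with the four standard
# UCA categories. Each candidate would then be examined for the hazards it can
# lead to (e.g., thermal runaway, overcharge) and traced to causal scenarios.
UCA_CATEGORIES = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early, too late, or in the wrong order",
    "stopped too soon or applied too long",
]

control_actions = {
    # control action: issuing controller (assumed control structure)
    "open DC contactor":      "battery management system",
    "command charge current": "plant controller",
    "enable cell balancing":  "battery management system",
}

for action, controller in control_actions.items():
    for category in UCA_CATEGORIES:
        print(f"Candidate UCA: [{controller}] '{action}' {category}")
```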

  8. Engine systems analysis results of the Space Shuttle Main Engine redesigned powerhead initial engine level testing

    NASA Technical Reports Server (NTRS)

    Sander, Erik J.; Gosdin, Dennis R.

    1992-01-01

    Engineers regularly analyze SSME ground test and flight data with respect to engine systems performance. Recently, a redesigned SSME powerhead was introduced to engine-level testing in part to increase engine operational margins through optimization of the engine internal environment. This paper presents an overview of the MSFC personnel engine systems analysis results and conclusions reached from initial engine level testing of the redesigned powerhead, and further redesigns incorporated to eliminate accelerated main injector baffle and main combustion chamber hot gas wall degradation. The conclusions are drawn from instrumented engine ground test data and hardware integrity analysis reports and address initial engine test results with respect to the apparent design change effects on engine system and component operation.

  9. The MSFC Systems Engineering Guide: An Overview and Plan

    NASA Technical Reports Server (NTRS)

    Shelby, Jerry A.; Thomas, L. Dale

    2007-01-01

    As systems and subsystems requirements become more complex in the pursuit of the exploration of space, advanced technology will demand and require an integrated approach to the design and development of safe and successful space vehicles and their products. System engineers play a vital and key role in transforming mission needs into vehicle requirements that can be verified and validated. This will result in a safe and cost effective design that will satisfy the mission schedule. A key to successful vehicle design within systems engineering is communication. Communication, through a systems engineering infrastructure, will not only ensure that customers and stakeholders are satisfied but will also assist in identifying, integrating, and managing vehicle requirements. This vehicle design will produce a system that is verifiable, traceable, and effectively satisfies cost, schedule, performance, and risk throughout the life-cycle of the product. A communication infrastructure will bring about the integration of different engineering disciplines within vehicle design. A system utilizing these aspects will enhance system engineering performance and improve upon required activities such as Development of Requirements, Requirements Management, Functional Analysis, Test, Synthesis, Trade Studies, Documentation, and Lessons Learned to produce a successful final product. This paper will describe the guiding vision, progress to date, and the plan forward for development of the Marshall Space Flight Center (MSFC) Systems Engineering Guide (SEG), a virtual systems engineering handbook and archive that describes the systems engineering processes used by MSFC in the development of complex systems such as the Ares launch vehicle. It is the intent of this website to be a "One Stop Shop" for systems engineers, providing tutorial information, an overview of processes and procedures, links to guidance and references, and an archive of systems engineering artifacts produced by the many NASA projects developed and managed by MSFC over the years.

  10. Apollo experience report: Safety activities

    NASA Technical Reports Server (NTRS)

    Rice, C. N.

    1975-01-01

    A description is given of the flight safety experiences gained during the Apollo Program, and safety is discussed from the viewpoints of program management, engineering, mission planning, and ground test operations. Emphasis is placed on the methods used to identify the risks involved in flight and in certain ground test operations. In addition, there are discussions of the management and engineering activities used to eliminate or reduce these risks.

  11. Engineering Decisions Under Risk-Averseness

    DTIC Science & Technology

    2014-12-19

    Engineering Decisions under Risk-Averseness, by R. Tyrrell Rockafellar (Department of Mathematics) and Johannes O. Royset (Operations Research Department, Naval Postgraduate School, Monterey, CA 93943). This work was supported in part by the U.S. Air Force Office of Scientific Research under grants FA9550-11-1-0206 and F1ATAO1194GOO1.

  12. New "Risk-Targeted" Seismic Maps Introduced into Building Codes

    USGS Publications Warehouse

    Luco, Nicholas; Garrett, B.; Hayes, J.

    2012-01-01

    Throughout most municipalities of the United States, structural engineers design new buildings using the U.S.-focused International Building Code (IBC). Updated editions of the IBC are published every 3 years. The latest edition (2012) contains new "risk-targeted maximum considered earthquake" (MCER) ground motion maps, which are enabling engineers to incorporate a more consistent and better defined level of seismic safety into their building designs.

  13. Information technology security system engineering methodology

    NASA Technical Reports Server (NTRS)

    Childs, D.

    2003-01-01

    A methodology is described for system engineering security into large information technology systems under development. The methodology is an integration of a risk management process and a generic system development life cycle process. The methodology is to be used by Security System Engineers to effectively engineer and integrate information technology security into a target system as it progresses through the development life cycle. The methodology can also be used to re-engineer security into a legacy system.

  14. Considerations in Starting Climate Change Research

    NASA Astrophysics Data System (ADS)

    Long, J. C. S.; Morgan, G.; Hamburg, S.; Winickoff, D. E.

    2014-12-01

    Many have called for climate engineering research because the growing risks of climate change and the geopolitical and national security risks of climate remediation technologies are real. As the topic of climate engineering remains highly controversial, national funding agencies should evaluate even modest outdoor climate engineering research proposals with respect to societal, legal, and risk considerations in making a decision to fund or not to fund. These concerns will be extremely difficult to coordinate internationally if they are not first considered successfully on a national basis. Assessment of a suite of proposed research projects with respect to these considerations indicates we would learn valuable lessons about how to govern research by initiating a few exemplar projects. The first time an issue arises, it is very helpful to have specific cases rather than a broad class of projects. A good first case should be defensible and understandable, fit within the general mandate of existing research programs, and have negligible physical risk, small physical scale, and short duration. Focusing on a specific case keeps the discussion bounded and helps to establish a track record in dealing with a controversial subject and in developing a process for assigning appropriate scrutiny and outreach. Even at an early stage, with low-risk, small-scale experiments, obtaining broad-based advice will aid in dealing with the controversies. An independent advisory body can provide guidance on the wide spectrum of physical and social risks of funding an experiment compared with the societal benefit of gaining understanding. Clearly identifying the research as climate engineering research avoids sending research down a path that might violate public trust and provides an important opportunity to grow governance and public engagement at an early stage. Climate engineering research should be seen in the context of all approaches to dealing with the climate problem. Much of climate engineering research will inspire investigators to address significant and difficult problems in climate science. US research programs should use this fact for societal benefit. Agencies should assess the early research and use the assessment to make decisions about whether and how to proceed.

  15. Aquatic Nuisance Species in the Great Lakes and Mississippi River Basin—A Risk Assessment in Support of GLMRIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grippo, Mark A.; Hlohowskyj, Ihor; Fox, Laura

    The U.S. Army Corps of Engineers (USACE) is conducting the Great Lakes and Mississippi River Interbasin Study (GLMRIS) to determine the aquatic nuisance species (ANS) currently established in either the Mississippi River Basin (MRB) or the Great Lakes Basin (GLB) that pose the greatest risk to the other basin. The GLMRIS study focuses specifically on ANS transfer through the Chicago Area Waterway System (CAWS), a multi-use waterway connecting the two basins. In support of GLMRIS, we conducted a qualitative risk assessment for 34 ANS in which we determined the overall risk level for four time intervals over a 50-year period of analysis based on the probability of ANS establishing in a new basin and the environmental, economic, and sociopolitical consequences of their establishment. Probability of establishment and consequences of establishment were assigned qualitative ratings of high, medium, or low, and the establishment and consequence ratings were then combined into an overall risk rating. Over the 50-year period of analysis, seven species were characterized as posing a medium risk and two species as posing a high risk to the MRB. Three species were characterized as posing a medium risk to the GLB, but no high-risk species were identified for this basin. Based on the time frame in which these species were considered likely to establish in the new basin, risk increased over time for some ANS. Identifying and prioritizing ANS risk supported the development and evaluation of multiple control alternatives that could reduce the probability of interbasin ANS transfer. However, both species traits and the need to balance multiple uses of the CAWS make it difficult to design cost-efficient and socially acceptable controls to reduce the probability of ANS transfer between the two basins.
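
    The qualitative scheme described above, in which establishment probability and consequence are each rated low/medium/high and then combined into an overall risk rating, can be written down as a small lookup. The combination matrix and example entries below are assumptions; the actual GLMRIS rules and species ratings are not reproduced in this record.

```python
# Illustrative sketch of a qualitative risk combination (assumed matrix, not
# the GLMRIS rule set): establishment probability and consequence ratings are
# each low/medium/high and map to an overall risk rating.
RATING = {"low": 0, "medium": 1, "high": 2}
COMBINE = [            # rows: probability of establishment, cols: consequence
    ["low",    "low",    "medium"],   # low probability
    ["low",    "medium", "high"],     # medium probability
    ["medium", "high",   "high"],     # high probability
]

def overall_risk(probability: str, consequence: str) -> str:
    return COMBINE[RATING[probability]][RATING[consequence]]

# Hypothetical entries, as might be evaluated at one of the four time intervals
examples = {"species A": ("high", "high"), "species B": ("medium", "low")}
for name, (p, c) in examples.items():
    print(f"{name}: establishment={p}, consequence={c} -> overall risk={overall_risk(p, c)}")
```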

  16. 2015 Space Human Factors Engineering Standing Review Panel

    NASA Technical Reports Server (NTRS)

    Steinberg, Susan

    2015-01-01

    The 2015 Space Human Factors Engineering (SHFE) Standing Review Panel (from here on referred to as the SRP) met for a site visit in Houston, TX on December 2 - 3, 2015. The SRP reviewed the updated research plans for the Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI Risk), the Risk of Inadequate Human-Computer Interaction (HCI Risk), and the Risk of Inadequate Mission, Process and Task Design (MPTask Risk). The SRP also received a status update on the Risk of Incompatible Vehicle/Habitat Design (Hab Risk) and the Risk of Performance Errors Due to Training Deficiencies (Train Risk). The SRP is pleased with the progress and responsiveness of the SHFE team. The presentations were much improved this year. The SRP is also pleased with the human-centered design approach. Below are some of the more extensive comments from the SRP. We have also made comments in each section concerning gaps/tasks in each. The comments below reflect more significant changes that impact more than just one particular section.

  17. Continuous Risk Management Course. Revised

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore F.

    1999-01-01

    This document includes a course plan for Continuous Risk Management taught by the Software Assurance Technology Center along with the Continuous Risk Management Guidebook of the Software Engineering Institute of Carnegie Mellon University and a description of Continuous Risk Management at NASA.

  18. Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Alfonsi; C. Rabiti; D. Mandelli

    The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that provides several functionalities: (1) derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; (2) perform both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitate input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.
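
    The Monte Carlo sampling functionality described in item (2) can be pictured with a toy example. The sketch below is not RAVEN code and is not coupled to RELAP-7; it simply samples a stochastic failure time and a recovery time from assumed distributions and estimates how often a simple success criterion is violated.

```python
# Toy illustration of Monte Carlo sampling of randomly distributed events --
# not RAVEN code. Distributions, the 2-hour mission time, and the 1-hour
# recovery criterion are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 50_000
mission_time_h = 2.0

failure_time = rng.exponential(scale=10.0, size=n_samples)    # h, component failure
recovery_time = rng.lognormal(np.log(0.5), 0.6, n_samples)    # h, operator recovery

# "Failure" if the component fails within the mission and recovery takes > 1 h
failed = (failure_time < mission_time_h) & (recovery_time > 1.0)
print(f"estimated probability of failure to recover: {failed.mean():.4f}")
```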

  20. Risk-informed Maintenance for Non-coherent Systems

    NASA Astrophysics Data System (ADS)

    Tao, Ye

    Probabilistic Safety Assessment (PSA) is a systematic and comprehensive methodology for evaluating the risks associated with a complex engineered technological entity. The information provided by PSA has increasingly been used for regulatory purposes but rarely to inform operation and maintenance activities. As one of the key parts of PSA, Fault Tree Analysis (FTA) models and analyzes the failure processes of engineering and biological systems. Fault trees are logic diagrams that display the state of the system and are constructed using graphical design techniques. Risk Importance Measures (RIMs) are information that can be obtained from both the qualitative and quantitative aspects of FTA. Components within a system can be ranked with respect to the specific criterion defined by each RIM. Through a RIM, a ranking of the components or basic events can be obtained, providing valuable information for risk-informed decision making. Various RIMs have been applied in a range of applications. To provide a thorough understanding of RIMs and to interpret their results, this thesis categorizes them with respect to risk significance (RS) and safety significance (SS) and ties them to different maintenance activities. When RIMs are used for maintenance purposes, this is called risk-informed maintenance. On the other hand, the majority of work on the FTA method has concentrated on failure logic diagrams restricted to the direct or implied use of AND and OR operators. Such systems are considered coherent systems. However, NOT logic can also contribute to the information produced by PSA. Importance analysis of non-coherent systems remains rather limited, even though the field has received increasing attention over the years. Non-coherent systems introduce difficulties in both the qualitative and quantitative assessment of the fault tree compared with coherent systems. In this thesis, a set of RIMs is analyzed and investigated. The eight commonly used RIMs (Birnbaum's Measure, Criticality Importance Factor, Fussell-Vesely Measure, Improvement Potential, Conditional Probability, Risk Achievement, Risk Achievement Worth, and Risk Reduction Worth) are extended to non-coherent forms. Both coherent and non-coherent forms are classified into different categories to assist different types of maintenance activities. Real systems, including the Steam Generator Level Control System of a CANDU Nuclear Power Plant (NPP), a Gas Detection System, and the Automatic Power Control System of an experimental nuclear reactor, are presented as case studies to demonstrate the application of the results.
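
    Several of the coherent-system importance measures named in this abstract follow directly from evaluating the system unavailability with each component's failure probability set to 0 and to 1. The sketch below uses an invented series-parallel example to compute Birnbaum's measure, the Criticality Importance Factor, Fussell-Vesely, Risk Achievement Worth, and Risk Reduction Worth; the non-coherent extensions developed in the thesis are not shown.

```python
# Hedged sketch of standard coherent-system risk importance measures for an
# invented example: component A in series with the parallel pair {B, C}.
# The system model and failure probabilities are assumptions.
def system_unavailability(q: dict) -> float:
    """Q_sys = 1 - (1 - qA) * (1 - qB * qC)."""
    return 1.0 - (1.0 - q["A"]) * (1.0 - q["B"] * q["C"])

q = {"A": 0.01, "B": 0.05, "C": 0.05}
Q = system_unavailability(q)

for comp in q:
    Q1 = system_unavailability({**q, comp: 1.0})    # component failed with certainty
    Q0 = system_unavailability({**q, comp: 0.0})    # component perfectly reliable
    birnbaum = Q1 - Q0                               # Birnbaum's measure
    criticality = birnbaum * q[comp] / Q             # Criticality Importance Factor
    fussell_vesely = (Q - Q0) / Q                    # Fussell-Vesely measure
    raw = Q1 / Q                                     # Risk Achievement Worth
    rrw = Q / Q0 if Q0 > 0 else float("inf")         # Risk Reduction Worth
    print(f"{comp}: B={birnbaum:.4f}  CIF={criticality:.3f}  "
          f"FV={fussell_vesely:.3f}  RAW={raw:.2f}  RRW={rrw:.2f}")
```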
