Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., ``Verification, Validation, Reviews, and Audits for Digital Computer Software used in Safety Systems of Nuclear... NRC regulations promoting the development of, and compliance with, software verification and...
Projected Impact of Compositional Verification on Current and Future Aviation Safety Risk
NASA Technical Reports Server (NTRS)
Reveley, Mary S.; Withrow, Colleen A.; Leone, Karen M.; Jones, Sharon M.
2014-01-01
The projected impact of compositional verification research conducted by the National Aeronautics and Space Administration System-Wide Safety and Assurance Technologies on aviation safety risk was assessed. Software and compositional verification were described. Traditional verification techniques have two major problems: testing occurs at the prototype stage, where error discovery can be quite costly, and it is impossible to test for all potential interactions, leaving some errors undetected until the system is used by the end user. Increasingly complex and nondeterministic aviation systems are becoming too large for these tools to check and verify. Compositional verification is a "divide and conquer" approach to addressing increasingly large and complex systems. A review of compositional verification research being conducted by academia, industry, and Government agencies is provided. Forty-four aviation safety risks in the Biennial NextGen Safety Issues Survey were identified that could be impacted by compositional verification and grouped into five categories: automation design; system complexity; software, flight control, or equipment failure or malfunction; new technology or operations; and verification and validation. One capability, one research action, five operational improvements, and 13 enablers within the Federal Aviation Administration Joint Planning and Development Office Integrated Work Plan were identified that could be addressed by compositional verification.
Hard and Soft Safety Verifications
NASA Technical Reports Server (NTRS)
Wetherholt, Jon; Anderson, Brenda
2012-01-01
The purpose of this paper is to examine the differences between, and the effects of, hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum that demonstrates how a safety control is enacted. An example of this is relief valve testing. A soft safety verification is something usually described as nice to have but not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination of the casings and nozzle for erosion or wear. Loss of the SRBs and associated data did not delay the launch of the next Shuttle flight.
Investigation of a Verification and Validation Tool with a Turbofan Aircraft Engine Application
NASA Technical Reports Server (NTRS)
Uth, Peter; Narang-Siddarth, Anshu; Wong, Edmond
2018-01-01
The development of more advanced control architectures for turbofan aircraft engines can yield gains in performance and efficiency over the lifetime of an engine. However, the implementation of these increasingly complex controllers is contingent on their ability to provide safe, reliable engine operation. Therefore, having the means to verify the safety of new control algorithms is crucial. As a step towards this goal, CoCoSim, a publicly available verification tool for Simulink, is used to analyze C-MAPSS40k, a 40,000 lbf class turbofan engine model developed at NASA for testing new control algorithms. Due to current limitations of the verification software, several modifications are made to C-MAPSS40k to achieve compatibility with CoCoSim. Some of these modifications sacrifice fidelity to the original model. Several safety and performance requirements typical for turbofan engines are identified and constructed into a verification framework. Preliminary results using an industry standard baseline controller for these requirements are presented. While verification capabilities are demonstrated, a truly comprehensive analysis will require further development of the verification tool.
The NASA Commercial Crew Program (CCP) Mission Assurance Process
NASA Technical Reports Server (NTRS)
Canfield, Amy
2016-01-01
In 2010, NASA established the Commercial Crew Program in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge to NASA has been how to determine that the commercial providers' transportation systems comply with programmatic safety requirements. The process used in this determination is the Safety Technical Review Board, which reviews and approves provider-submitted hazard reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100 percent of these safety-critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (SMA) model does not support the nature of the Commercial Crew Program. To that end, NASA SMA is implementing a Risk Based Assurance (RBA) process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is being developed to efficiently use the available resources to execute the verifications. This paper will describe the evolution of the CCP Mission Assurance process from the beginning of the Program to its current incarnation. Topics to be covered include a short history of the CCP; the development of the programmatic mission assurance requirements; the current safety review process; a description of the RBA process and its products; and a description of the Shared Assurance Model.
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M.; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
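For illustration only, the sketch below shows the flavor of such a safety (reachability) check on a toy finite abstraction of controller modes; the mode names and transitions are hypothetical, and the actual analysis in the paper uses linear hybrid automata checked with HyTech rather than a plain discrete graph search.

```python
# Minimal sketch of a safety (reachability) check over a finite abstraction
# of a controller's discrete modes. Mode names and transitions are invented;
# the real analysis converts goal networks to linear hybrid systems for HyTech.
from collections import deque

transitions = {                     # mode -> possible successor modes
    "nominal":   ["degraded", "nominal"],
    "degraded":  ["safing", "nominal"],
    "safing":    ["safe_hold"],
    "safe_hold": [],
}
unsafe = {"uncontrolled"}           # modes that must never be reached

def reachable(start="nominal"):
    seen, frontier = {start}, deque([start])
    while frontier:
        mode = frontier.popleft()
        for nxt in transitions.get(mode, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

assert reachable().isdisjoint(unsafe), "unsafe mode reachable"
print("safety property holds for this abstraction")
```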
The Evolution of the NASA Commercial Crew Program Mission Assurance Process
NASA Technical Reports Server (NTRS)
Canfield, Amy C.
2016-01-01
In 2010, the National Aeronautics and Space Administration (NASA) established the Commercial Crew Program (CCP) in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge to NASA has been how to determine that the Commercial Provider's transportation system complies with programmatic safety requirements. The process used in this determination is the Safety Technical Review Board which reviews and approves provider submitted hazard reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100% of these safety critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (S&MA) model does not support the nature of the CCP. To that end, NASA S&MA is implementing a Risk Based Assurance process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is also being developed to efficiently use the available resources to execute the verifications.
Development of a software safety process and a case study of its use
NASA Technical Reports Server (NTRS)
Knight, John C.
1993-01-01
The goal of this research is to continue the development of a comprehensive approach to software safety and to evaluate the approach with a case study. The case study is a major part of the project, and it involves the analysis of a specific safety-critical system from the medical equipment domain. The particular application being used was selected because of the availability of a suitable candidate system. We consider the results to be generally applicable and in no way particularly limited by the domain. The research is concentrating on issues raised by the specification and verification phases of the software lifecycle since they are central to our previously-developed rigorous definitions of software safety. The theoretical research is based on our framework of definitions for software safety. In the area of specification, the main topics being investigated are the development of techniques for building system fault trees that correctly incorporate software issues and the development of rigorous techniques for the preparation of software safety specifications. The research results are documented. Another area of theoretical investigation is the development of verification methods tailored to the characteristics of safety requirements. Verification of the correct implementation of the safety specification is central to the goal of establishing safe software. The empirical component of this research is focusing on a case study in order to provide detailed characterizations of the issues as they appear in practice, and to provide a testbed for the evaluation of various existing and new theoretical results, tools, and techniques. The Magnetic Stereotaxis System is summarized.
Automated Installation Verification of COMSOL via LiveLink for MATLAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowell, Michael W
Verifying that a local software installation performs as the developer intends is a potentially time-consuming but necessary step for nuclear safety-related codes. Automating this process not only saves time, but can increase reliability and scope of verification compared to ‘hand’ comparisons. While COMSOL does not include automatic installation verification as many commercial codes do, it does provide tools such as LiveLink™ for MATLAB® and the COMSOL API for use with Java® through which the user can automate the process. Here we present a successful automated verification example of a local COMSOL 5.0 installation for nuclear safety-related calculations at the Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR).
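As a purely illustrative sketch of the comparison step such an automated verification performs (not the actual COMSOL/LiveLink or Java API calls), the fragment below checks locally computed benchmark results against stored reference values within a tolerance; the case names, values, and tolerance are assumptions.

```python
# Sketch of the results-comparison step in an automated installation
# verification: benchmark outputs computed by the local installation are
# checked against stored reference values within a tolerance. The case
# names, values, and tolerance are illustrative only, not from the
# COMSOL/LiveLink workflow described above.
import math

REFERENCE = {           # expected benchmark results from a trusted install
    "steady_state_temperature_K": 342.17,
    "peak_flux_W_per_m2": 1.082e6,
}

def verify(local_results, rel_tol=1e-6):
    failures = []
    for case, expected in REFERENCE.items():
        actual = local_results.get(case)
        if actual is None or not math.isclose(actual, expected, rel_tol=rel_tol):
            failures.append((case, expected, actual))
    return failures

# In practice local_results would be produced by driving the solver through
# its scripting interface; here we fake one passing and one failing case.
local = {"steady_state_temperature_K": 342.17, "peak_flux_W_per_m2": 1.085e6}
for case, expected, actual in verify(local):
    print(f"FAIL {case}: expected {expected}, got {actual}")
```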
Code of Federal Regulations, 2012 CFR
2012-10-01
... Third-Party Assessment of PTC System Safety Verification and Validation F Appendix F to Part 236... Safety Verification and Validation (a) This appendix provides minimum requirements for mandatory independent third-party assessment of PTC system safety verification and validation pursuant to subpart H or I...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Third-Party Assessment of PTC System Safety Verification and Validation F Appendix F to Part 236... Safety Verification and Validation (a) This appendix provides minimum requirements for mandatory independent third-party assessment of PTC system safety verification and validation pursuant to subpart H or I...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Third-Party Assessment of PTC System Safety Verification and Validation F Appendix F to Part 236... Safety Verification and Validation (a) This appendix provides minimum requirements for mandatory independent third-party assessment of PTC system safety verification and validation pursuant to subpart H or I...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Third-Party Assessment of PTC System Safety Verification and Validation F Appendix F to Part 236... Safety Verification and Validation (a) This appendix provides minimum requirements for mandatory independent third-party assessment of PTC system safety verification and validation pursuant to subpart H or I...
DOT National Transportation Integrated Search
2012-03-01
The Federal Motor Vehicle Safety Standards (FMVSS) establish minimum levels for vehicle safety, and manufacturers of motor vehicle and equipment items must comply with these standards. The National Highway Traffic Safety Administration (NHTSA) contra...
NASA Technical Reports Server (NTRS)
Neogi, Natasha A.
2016-01-01
There is a current drive towards enabling the deployment of increasingly autonomous systems in the National Airspace System (NAS). However, shifting the traditional roles and responsibilities between humans and automation for safety-critical tasks must be managed carefully, otherwise the current emergent safety properties of the NAS may be disrupted. In this paper, a verification activity is conducted to assess the emergent safety properties of a clearly defined, safety-critical, operational scenario whose tasks can be fluidly allocated between human and automated agents. Task allocation role sets were proposed for a human-automation team performing a contingency maneuver in a reduced-crew context. A safety-critical contingency procedure (engine out on takeoff) was modeled in the Soar cognitive architecture, then translated into the Hybrid Input Output formalism. Verification activities were then performed to determine whether or not the safety properties held over the increasingly autonomous system. The verification activities led to several key insights regarding the implicit assumptions on agent capability, illustrated the usefulness of task annotations associated with specialized requirements (e.g., communication, timing), and demonstrated the feasibility of this approach.
78 FR 32010 - Pipeline Safety: Public Workshop on Integrity Verification Process
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-28
.... PHMSA-2013-0119] Pipeline Safety: Public Workshop on Integrity Verification Process AGENCY: Pipeline and... announcing a public workshop to be held on the concept of ``Integrity Verification Process.'' The Integrity Verification Process shares similar characteristics with fitness for service processes. At this workshop, the...
78 FR 56268 - Pipeline Safety: Public Workshop on Integrity Verification Process, Comment Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
.... PHMSA-2013-0119] Pipeline Safety: Public Workshop on Integrity Verification Process, Comment Extension... public workshop on ``Integrity Verification Process'' which took place on August 7, 2013. The notice also sought comments on the proposed ``Integrity Verification Process.'' In response to the comments received...
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
1981-03-01
overcome the shortcomings of this system. A phase III study develops the breakup model of the Space Shuttle cluster at various times into flight. [The remainder of the excerpt is table-of-contents residue: Rocket Model; Combustion Chamber Operation; Results.]
Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution
NASA Astrophysics Data System (ADS)
Hwang, Inseok
The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing their tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft in noisy environments and of their identities. Secondly, we propose a hybrid probabilistic conflict detection algorithm between multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety-critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.
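The toy sketch below illustrates only the general idea of probabilistic conflict detection under position uncertainty (estimating the probability that predicted separation falls below a protected distance); the Gaussian parameters and separation threshold are assumptions, and the dissertation's algorithm additionally conditions on estimated flight modes.

```python
# Toy probabilistic conflict-detection estimate: given Gaussian position
# predictions for two aircraft at a common look-ahead time, estimate the
# probability that their separation falls below a protected distance.
# All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def conflict_probability(mean_a, cov_a, mean_b, cov_b,
                         min_separation_nm=5.0, samples=100_000):
    pos_a = rng.multivariate_normal(mean_a, cov_a, samples)
    pos_b = rng.multivariate_normal(mean_b, cov_b, samples)
    dist = np.linalg.norm(pos_a - pos_b, axis=1)
    return float(np.mean(dist < min_separation_nm))

p = conflict_probability(mean_a=[0.0, 0.0], cov_a=np.diag([4.0, 4.0]),
                         mean_b=[6.0, 0.0], cov_b=np.diag([4.0, 4.0]))
print(f"estimated conflict probability: {p:.3f}")
```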
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2012 CFR
2012-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2014 CFR
2014-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2013 CFR
2013-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2011 CFR
2011-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems is typically very complex. This is due to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is still rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive workflow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements for our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
Zhu, Ling-Ling; Lv, Na; Zhou, Quan
2016-12-01
We read, with great interest, the study by Baldwin and Rodriguez (2016), which described the role of the verification nurse and detailed the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.
Guidelines for mission integration, a summary report
NASA Technical Reports Server (NTRS)
1979-01-01
Guidelines are presented for instrument/experiment developers concerning hardware design, flight verification, and operations and mission implementation requirements. Interface requirements between the STS and instruments/experiments are defined. Interface constraints and design guidelines are presented along with integrated payload requirements for Spacelab Missions 1, 2, and 3. Interim data are suggested for use during hardware development until more detailed information is developed when a complete mission and an integrated payload system are defined. Safety requirements, flight verification requirements, and operations procedures are defined.
NASA's Approach to Software Assurance
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2015-01-01
NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security, and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. As an umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.
77 FR 26822 - Pipeline Safety: Verification of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0068] Pipeline Safety: Verification of Records AGENCY: Pipeline and Hazardous Materials... issuing an Advisory Bulletin to remind operators of gas and hazardous liquid pipeline facilities to verify...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... are engineers. UL today is comprised of five businesses, Product Safety, Verification Services, Life..., Director--Global Technical Research, UL Verification Services. Subscribed and sworn to before me this 20... (431.447(c)(4)) General Personnel Overview UL is a global independent safety science company with more...
International Space Station Requirement Verification for Commercial Visiting Vehicles
NASA Technical Reports Server (NTRS)
Garguilo, Dan
2017-01-01
The COTS program demonstrated that NASA could rely on commercial providers for safe, reliable, and cost-effective cargo delivery to ISS. The ISS Program has developed a streamlined process to safely integrate commercial visiting vehicles and ensure requirements are met: levy a minimum requirement set (down from 1000s to 100s) focusing on the ISS interface and safety, reducing the level of NASA oversight/insight and the burden on the commercial partner. Partners provide a detailed verification and validation plan documenting how they will show they have met NASA requirements. NASA conducts process sampling to ensure that the established verification processes are being followed. NASA participates in joint verification events and analyses for requirements that both parties must verify. Verification compliance is approved by NASA, and launch readiness is certified at mission readiness reviews.
78 FR 1162 - Cardiovascular Devices; Reclassification of External Cardiac Compressor
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... safety and electromagnetic compatibility; For devices containing software, software verification... electromagnetic compatibility; For devices containing software, software verification, validation, and hazard... electrical components, appropriate analysis and testing must validate electrical safety and electromagnetic...
Hydrogen and Storage Initiatives at the NASA JSC White Sands Test Facility
NASA Technical Reports Server (NTRS)
Maes, Miguel; Woods, Stephen S.
2006-01-01
NASA WSTF Hydrogen Activities: a) Aerospace Test; b) System Certification & Verification; c) Component, System, & Facility Hazard Assessment; d) Safety Training. Technical Transfer: a) Development of Voluntary Consensus Standards and Practices; b) Support of National Hydrogen Infrastructure Development.
Orion GN&C Fault Management System Verification: Scope And Methodology
NASA Technical Reports Server (NTRS)
Brown, Denise; Weiler, David; Flanary, Ronald
2016-01-01
In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
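For context, the sketch below shows a plain Monte Carlo estimate of a loss probability for a made-up two-fault model; the fault probabilities are assumptions, and the cited work uses rare-event sequential Monte Carlo precisely because plain sampling of this kind becomes impractical at the very small probabilities of interest.

```python
# Plain Monte Carlo estimate of a loss probability for a toy two-fault model.
# The FDIR work above uses rare-event *sequential* Monte Carlo because plain
# sampling like this needs enormous sample counts when the target probability
# is very small. All numbers are invented.
import random

random.seed(1)

P_SENSOR_FAULT = 1e-3     # per-mission fault probability (illustrative)
P_DETECTION_MISS = 1e-2   # probability FDIR fails to detect/respond

def mission_fails():
    fault = random.random() < P_SENSOR_FAULT
    missed = random.random() < P_DETECTION_MISS
    return fault and missed          # loss only if a fault occurs AND is missed

N = 1_000_000
failures = sum(mission_fails() for _ in range(N))
print(f"estimated loss probability ~ {failures / N:.2e} (analytic: 1e-05)")
```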
Development of photovoltaic array and module safety requirements
NASA Technical Reports Server (NTRS)
1982-01-01
Safety requirements for photovoltaic module and panel designs and configurations likely to be used in residential, intermediate, and large-scale applications were identified and developed. The National Electrical Code and Building Codes were reviewed with respect to present provisions which may be considered to affect the design of photovoltaic modules. Limited testing, primarily in the roof fire resistance field, was conducted. Additional studies and further investigations led to the development of a proposed standard for safety for flat-plate photovoltaic modules and panels. Additional work covered the initial investigation of conceptual approaches and temporary deployment, for concept verification purposes, of a differential dc ground-fault detection circuit suitable as a part of a photovoltaic array safety system.
9 CFR 417.4 - Validation, Verification, Reassessment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... not have a HACCP plan because a hazard analysis has revealed no food safety hazards that are.... 417.4 Section 417.4 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... ACT HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.4 Validation, Verification...
9 CFR 417.4 - Validation, Verification, Reassessment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... not have a HACCP plan because a hazard analysis has revealed no food safety hazards that are.... 417.4 Section 417.4 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... ACT HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.4 Validation, Verification...
9 CFR 417.4 - Validation, Verification, Reassessment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... not have a HACCP plan because a hazard analysis has revealed no food safety hazards that are.... 417.4 Section 417.4 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... ACT HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.4 Validation, Verification...
Test load verification through strain data analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.; Harrington, F.
1995-01-01
A traditional binding acceptance criterion on polycrystalline structures is the experimental verification of the ultimate factor of safety. At fracture, the induced strain is inelastic and about an order of magnitude greater than that designed for the maximum expected operational limit. In this extreme strained condition, the structure may rotate and displace under the applied verification load such as to unknowingly distort the load transfer into the static test article. The test may result in erroneously accepting a submarginal design or rejecting a reliable one. A technique was developed to identify, monitor, and assess the load transmission error through two back-to-back surface-measured strain readings. The technique is programmed for expediency and convenience. Though the method was developed to support affordable aerostructures, it is also applicable to most high-performance air and surface transportation structural systems.
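As a brief illustration of the kind of information two back-to-back surface strain readings provide, the sketch below applies the standard decomposition into membrane and bending components; the readings and the use of a bending fraction as a distortion indicator are illustrative assumptions, not the paper's specific formulation.

```python
# Standard decomposition of two back-to-back surface strain readings into
# membrane (axial) and bending components. A growing bending share at a
# nominally axial load path is one possible indicator of load-transmission
# distortion; the readings below are invented.
def decompose(strain_front, strain_back):
    membrane = 0.5 * (strain_front + strain_back)   # average of the two faces
    bending  = 0.5 * (strain_front - strain_back)   # half the difference
    return membrane, bending

# Example: microstrain readings from opposite faces of a test panel
front, back = 1250.0, 950.0
m, b = decompose(front, back)
print(f"membrane = {m:.0f} ue, bending = {b:.0f} ue, "
      f"bending fraction = {abs(b) / abs(m):.2f}")
```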
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
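The sketch below illustrates one simple, generic way to attach an error bar to a network's output (the spread of a small ensemble); it is only a stand-in for the idea, since the cited tool computes its confidence interval by its own method implemented in Simulink, and the data and model here are invented.

```python
# Generic sketch of a runtime error bar on a neural network output: train a
# small ensemble and report mean +/- 2*std across members. The data, model
# size, and query point are invented; the cited tool uses its own method.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=400)

ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=k).fit(X, y) for k in range(5)]

x_query = np.array([[1.2]])
preds = np.array([m.predict(x_query)[0] for m in ensemble])
print(f"output = {preds.mean():.3f} +/- {2 * preds.std():.3f} (2-sigma error bar)")
```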
Safety Verification of the Small Aircraft Transportation System Concept of Operations
NASA Technical Reports Server (NTRS)
Carreno, Victor; Munoz, Cesar
2005-01-01
A critical factor in the adoption of any new aeronautical technology or concept of operation is safety. Traditionally, safety is accomplished through a rigorous process that involves human factors, low and high fidelity simulations, and flight experiments. As this process is usually performed on final products or functional prototypes, concept modifications resulting from it are very expensive to implement. This paper describes an approach to system safety that can take place at early stages of a concept design. It is based on a set of mathematical techniques and tools known as formal methods. In contrast to testing and simulation, formal methods provide the capability of exhaustive state exploration analysis. We present the safety analysis and verification performed for the Small Aircraft Transportation System (SATS) Concept of Operations (ConOps). The concept of operations is modeled using discrete and hybrid mathematical models. These models are then analyzed using formal methods. The objective of the analysis is to show, in a mathematical framework, that the concept of operations complies with a set of safety requirements. It is also shown that the ConOps has some desirable characteristics such as liveness and absence of deadlock. The analysis and verification are performed in the Prototype Verification System (PVS), which is a computer-based specification language and a theorem proving assistant.
Structural verification for GAS experiments
NASA Technical Reports Server (NTRS)
Peden, Mark Daniel
1992-01-01
The purpose of this paper is to assist the Get Away Special (GAS) experimenter in conducting a thorough structural verification of its experiment structural configuration, thus expediting the structural review/approval process and the safety process in general. Material selection for structural subsystems will be covered, with an emphasis on fasteners (GSFC fastener integrity requirements) and primary support structures (Stress Corrosion Cracking requirements and National Space Transportation System (NSTS) requirements). Different approaches to structural verification (tests and analyses) will be outlined, especially those stemming from lessons learned on load and fundamental frequency verification. In addition, fracture control will be covered for those payloads that utilize a door assembly or modify the containment provided by the standard GAS Experiment Mounting Plate (EMP). Structural hazard assessment and the preparation of structural hazard reports will be reviewed to form a summation of structural safety issues for inclusion in the safety data package.
Formal verification of software-based medical devices considering medical guidelines.
Daw, Zamira; Cleaveland, Rance; Vetter, Marcus
2014-01-01
Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements, without considering the interaction that these elements have with other devices as well as the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of software-based medical devices. Medical devices are developed using the model-driven method deterministic models for signal processing of embedded systems (DMOSES). This method uses unified modeling language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model checker. For this purpose, a formalization approach for the UML models using timed automata (TA) is presented. A set of requirements is verified by the proposed approach for the navigation-guided biopsy. This shows the capability for identifying errors or optimization points both in the workflow and in the system design of the navigation device. In addition to the above, an open source Eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment using clinical workflows as one UML diagram. Additionally, the system design can be formally verified automatically.
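For readers unfamiliar with the target formalism, a timed safety property of the kind checked over such timed-automata models typically has the invariance shape shown below; the predicate and deadline names are assumptions, not taken from the paper.

```latex
% Illustrative shape (not taken from the paper) of a timed safety property:
% whenever the workflow is in a critical step, that step's clock x must stay
% within its deadline D.
A\Box\,\bigl(\mathit{critical} \Rightarrow x \le D\bigr)
```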
NASA Technical Reports Server (NTRS)
Davis, Robert E.
2002-01-01
The presentation provides an overview of requirement and interpretation letters, mechanical systems safety interpretation letter, design and verification provisions, and mechanical systems verification plan.
NASA Astrophysics Data System (ADS)
Arndt, J.; Kreimer, J.
2010-09-01
The European Space Laboratory COLUMBUS was launched in February 2008 with NASA Space Shuttle Atlantis. Since successful docking and activation, this manned laboratory forms part of the International Space Station (ISS). Depending on the objectives of the Mission Increments, the on-orbit configuration of the COLUMBUS Module varies with each increment. This paper describes the end-to-end verification which has been implemented to ensure safe operations under the condition of a changing on-orbit configuration. That verification process has to cover not only the configuration changes foreseen by the Mission Increment planning but also configuration changes on short notice which become necessary due to near real-time requests initiated by crew or Flight Control, and changes - most challenging since unpredictable - due to on-orbit anomalies. Subject of the safety verification is, on one hand, the on-orbit configuration itself, including the hardware and software products, and on the other hand the related ground facilities needed for commanding of and communication with the on-orbit system. The operational products, e.g. the procedures prepared for crew and ground control in accordance with increment planning, are also subject to the overall safety verification. In order to analyse the on-orbit configuration for potential hazards and to verify the implementation of the related safety-required hazard controls, a hierarchical approach is applied. The key element of the analytical safety integration of the whole COLUMBUS Payload Complement, including hardware owned by International Partners, is the Integrated Experiment Hazard Assessment (IEHA). The IEHA especially identifies those hazardous scenarios which could potentially arise through physical and operational interaction of experiments. A major challenge is the implementation of a Safety process that is rigid enough to provide reliable verification of on-board safety yet flexible enough to accommodate manned space operations with scientific objectives. In the period of COLUMBUS operations since launch, a number of lessons learned have already been implemented, especially in the IEHA, that improve the flexibility of on-board operations without degrading safety.
NASA Technical Reports Server (NTRS)
1995-01-01
The Formal Methods Specification and Verification Guidebook for Software and Computer Systems describes a set of techniques called Formal Methods (FM), and outlines their use in the specification and verification of computer systems and software. Development of increasingly complex systems has created a need for improved specification and verification techniques. NASA's Safety and Mission Quality Office has supported the investigation of techniques such as FM, which are now an accepted method for enhancing the quality of aerospace applications. The guidebook provides information for managers and practitioners who are interested in integrating FM into an existing systems development process. Information includes technical and administrative considerations that must be addressed when establishing the use of FM on a specific project. The guidebook is intended to aid decision makers in the successful application of FM to the development of high-quality systems at reasonable cost. This is the first volume of a planned two-volume set. The current volume focuses on administrative and planning considerations for the successful application of FM.
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
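The classical asymmetric assume-guarantee rule underlying this line of work has the following form, where M1 and M2 are components, P is the property, and A is the (here automatically generated) assumption:

```latex
% If M1 satisfies P under assumption A, and M2 discharges A, then the
% composition M1 || M2 satisfies P.
\frac{\;\langle A\rangle\, M_1\, \langle P\rangle \qquad
      \langle \mathit{true}\rangle\, M_2\, \langle A\rangle\;}
     {\langle \mathit{true}\rangle\, M_1 \parallel M_2\, \langle P\rangle}
```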
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
2010-01-01
Loss of control remains one of the largest contributors to aircraft fatal accidents worldwide. Aircraft loss-of-control accidents are highly complex in that they can result from numerous causal and contributing factors acting alone or (more often) in combination. Hence, there is no single intervention strategy to prevent these accidents and reducing them will require a holistic integrated intervention capability. Future onboard integrated system technologies developed for preventing loss of vehicle control accidents must be able to assure safe operation under the associated off-nominal conditions. The transition of these technologies into the commercial fleet will require their extensive validation and verification (V and V) and ultimate certification. The V and V of complex integrated systems poses major nontrivial technical challenges particularly for safety-critical operation under highly off-nominal conditions associated with aircraft loss-of-control events. This paper summarizes the V and V problem and presents a proposed process that could be applied to complex integrated safety-critical systems developed for preventing aircraft loss-of-control accidents. A summary of recent research accomplishments in this effort is also provided.
Development of a Hand Held Thromboelastograph
2015-01-01
documents will be referenced during the Entegrion PCM System design, verification and validation activities. EN 61010-1:2010 (Edition 3.0) Safety... requirements for electrical equipment for measurement, control, and laboratory use – Part 1: General requirements. EN 61010-2-101:2002 Safety... IPC-A-610E Acceptability of Electronic Assemblies. IPC 7711/21B Rework, Modification and Repair of Electronic Assemblies. IEC 62304:2006/AC:2008
Proceedings of the Sixth NASA Langley Formal Methods (LFM) Workshop
NASA Technical Reports Server (NTRS)
Rozier, Kristin Yvonne (Editor)
2008-01-01
Today's verification techniques are hard-pressed to scale with the ever-increasing complexity of safety critical systems. Within the field of aeronautics alone, we find the need for verification of algorithms for separation assurance, air traffic control, auto-pilot, Unmanned Aerial Vehicles (UAVs), adaptive avionics, automated decision authority, and much more. Recent advances in formal methods have made verifying more of these problems realistic. Thus we need to continually re-assess what we can solve now and identify the next barriers to overcome. Only through an exchange of ideas between theoreticians and practitioners from academia to industry can we extend formal methods for the verification of ever more challenging problem domains. This volume contains the extended abstracts of the talks presented at LFM 2008: The Sixth NASA Langley Formal Methods Workshop held on April 30 - May 2, 2008 in Newport News, Virginia, USA. The topics of interest that were listed in the call for abstracts were: advances in formal verification techniques; formal models of distributed computing; planning and scheduling; automated air traffic management; fault tolerance; hybrid systems/hybrid automata; embedded systems; safety critical applications; safety cases; accident/safety analysis.
Verification of MCNP6.2 for Nuclear Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-05-10
Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety (NCS) applications. MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.
10 CFR 300.11 - Independent verification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... DEPARTMENT OF ENERGY CLIMATE CHANGE VOLUNTARY GREENHOUSE GAS REPORTING PROGRAM: GENERAL GUIDELINES § 300.11..., Health and Safety Auditor Certification: California Climate Action Registry; Clean Development Mechanism... statements (or lack thereof) of any significant changes in entity boundaries, products, or processes; (iii...
10 CFR 300.11 - Independent verification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... DEPARTMENT OF ENERGY CLIMATE CHANGE VOLUNTARY GREENHOUSE GAS REPORTING PROGRAM: GENERAL GUIDELINES § 300.11..., Health and Safety Auditor Certification: California Climate Action Registry; Clean Development Mechanism... statements (or lack thereof) of any significant changes in entity boundaries, products, or processes; (iii...
10 CFR 300.11 - Independent verification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... DEPARTMENT OF ENERGY CLIMATE CHANGE VOLUNTARY GREENHOUSE GAS REPORTING PROGRAM: GENERAL GUIDELINES § 300.11..., Health and Safety Auditor Certification: California Climate Action Registry; Clean Development Mechanism... statements (or lack thereof) of any significant changes in entity boundaries, products, or processes; (iii...
Validation and verification of the laser range safety tool (LRST)
NASA Astrophysics Data System (ADS)
Kennedy, Paul K.; Keppler, Kenneth S.; Thomas, Robert J.; Polhamus, Garrett D.; Smith, Peter A.; Trevino, Javier O.; Seaman, Daniel V.; Gallaway, Robert A.; Crockett, Gregg A.
2003-06-01
The U.S. Dept. of Defense (DOD) is currently developing and testing a number of High Energy Laser (HEL) weapons systems. DOD range safety officers now face the challenge of designing safe methods of testing HELs on DOD ranges. In particular, safety officers need to ensure that diffuse and specular reflections from HEL system targets, as well as direct beam paths, are contained within DOD boundaries. If both the laser source and the target are moving, as they are for the Airborne Laser (ABL), a complex series of calculations is required and manual calculations are impractical. Over the past 5 years, the Optical Radiation Branch of the Air Force Research Laboratory (AFRL/HEDO), the ABL System Program Office, Logicon-RDA, and Northrup-Grumman have worked together to develop a computer model called the Laser Range Safety Tool (LRST), specifically designed for HEL reflection hazard analyses. The code, which is still under development, is currently tailored to support the ABL program. AFRL/HEDO has led an LRST Validation and Verification (V&V) effort since 1998, in order to determine whether code predictions are accurate. This paper summarizes LRST V&V efforts to date, including: i) comparison of code results with laboratory measurements of reflected laser energy and with reflection measurements made during actual HEL field tests, and ii) validation of LRST's hazard zone computations.
Verification and Implementation of Operations Safety Controls for Flight Missions
NASA Technical Reports Server (NTRS)
Jones, Cheryl L.; Smalls, James R.; Carrier, Alicia S.
2010-01-01
Approximately eleven years ago, the International Space Station launched the first module from Russia, the Functional Cargo Block (FGB). Safety and Mission Assurance (S&MA) Operations (Ops) engineers played an integral part in that endeavor by executing strict flight product verification as well as continued staffing of S&MA's console in the Mission Evaluation Room (MER) for that flight mission. How were these engineers able to conduct such a complicated task? They conducted it based on product verification that consisted of ensuring that safety requirements were adequately contained in all flight products that affected crew safety. S&MA Ops engineers apply both systems engineering and project management principles in order to gain an appropriate level of technical knowledge necessary to perform thorough reviews which cover the subsystem(s) affected. They also ensured that mission priorities were carried out with great detail and success.
2013-09-01
to an XML file, a code that Bonine in [21] developed for a similar purpose. Using the StateRover XML log file import tool, we are able to generate a... C. Bonine, M. Shing, T. W. Otani, "Computer-aided process and tools for mobile software acquisition," NPS, Monterey, CA, Tech. Rep. NPS-SE-13...C10P07R05-075, 2013. [21] C. Bonine, "Specification, validation and verification of mobile application behavior," M.S. thesis, Dept. Comp. Science, NPS
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures, stimulated by increasing size, complexity, and cost, should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
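As background, the classical first-order (normal-variable) safety-index relation that such a method builds on can be inverted to obtain the required central design factor for a target reliability; the sketch below shows only that classical relation with illustrative numbers and omits the paper's added uncertainty-error terms.

```python
# Classical first-order safety-index relation for normally distributed
# strength R and load S with coefficients of variation V_R and V_S:
#   beta = (n - 1) / sqrt(n^2 * V_R^2 + V_S^2),  n = mean(R) / mean(S).
# Solving for n gives the central design factor needed to meet a target
# beta. Numbers below are illustrative, not the paper's worked case.
import math

def design_factor(beta, V_R, V_S):
    a = 1.0 - (beta * V_R) ** 2                  # quadratic coefficient
    disc = 1.0 - a * (1.0 - (beta * V_S) ** 2)   # discriminant term
    return (1.0 + math.sqrt(disc)) / a

beta_target = 3.0                        # ~0.99865 reliability for normals
n = design_factor(beta_target, V_R=0.05, V_S=0.10)
print(f"required central design factor: {n:.2f}")
```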
Towards composition of verified hardware devices
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, G. C.
1991-01-01
Computers are being used where no affordable level of testing is adequate. Safety- and life-critical systems must find a replacement for exhaustive testing to guarantee their correctness. Hardware verification research, which establishes correctness through mathematical proof, has focused on device verification and has largely ignored verification of system composition. To address these deficiencies, we examine how the current hardware verification methodology can be extended to verify complete systems.
Verification and Validation for Flight-Critical Systems (VVFCS)
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Jacobsen, Robert A.
2010-01-01
On March 31, 2009, a Request for Information (RFI) was issued by NASA's Aviation Safety Program to gather input on the subject of Verification and Validation (V&V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academia (26%), small and large industry (47%), and government agencies (27%).
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
Towards Verification of Operational Procedures Using Auto-Generated Diagnostic Trees
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Lutz, Robyn; Patterson-Hine, Ann
2009-01-01
The design, development, and operation of complex space, lunar, and planetary exploration systems require the development of general procedures that describe a detailed set of instructions capturing how mission tasks are performed. For both crewed and uncrewed NASA systems, mission safety and the accomplishment of the scientific mission objectives are highly dependent on the correctness of procedures. In this paper, we describe how to use the auto-generated diagnostic trees from existing diagnostic models to improve the verification of standard operating procedures. Specifically, we introduce a systematic method, namely the Diagnostic Tree for Verification (DTV), developed with the goal of leveraging the information contained within auto-generated diagnostic trees in order to check the correctness of procedures, to streamline the procedures in terms of reducing the number of steps or the use of resources in them, and to propose alternative procedural steps adaptive to changing operational conditions. The application of the DTV method to a spacecraft electrical power system shows the feasibility of the approach and its range of capabilities.
Software safety - A user's practical perspective
NASA Technical Reports Server (NTRS)
Dunn, William R.; Corliss, Lloyd D.
1990-01-01
Software safety assurance philosophy and practices at NASA Ames are discussed. It is shown that, to be safe, software must be error-free. Software developments on two digital flight control systems and two ground facility systems are examined, including the overall system and software organization and function, the software safety issues, and their resolution. The effectiveness of safety assurance methods is discussed, including conventional life-cycle practices, verification and validation testing, software safety analysis, and formal design methods. It is concluded (1) that a practical software safety technology does not yet exist, (2) that it is unlikely that a set of general-purpose analytical techniques can be developed for proving that software is safe, and (3) that successful software safety-assurance practices will have to take into account the detailed design processes employed and show that the software will execute correctly under all possible conditions.
Real-time logic modelling on SpaceWire
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Ma, Yunpeng; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. However, it cannot meet the determinism requirements of safety- or time-critical applications in spacecraft, where the delay of real-time (RT) message streams must be guaranteed. SpaceWire-D was therefore developed to provide deterministic delivery over a SpaceWire network. Formal analysis and verification of real-time systems is critical to their development and safe implementation, and is a prerequisite for obtaining their safety certification. Failure to meet specified timing constraints, such as deadlines in hard real-time systems, may lead to catastrophic results. In this paper, a formal verification method, Real-Time Logic (RTL), is proposed to specify and verify the timing properties of a SpaceWire-D network. Based on the principles of the SpaceWire-D protocol, we first analyze the timing properties of fundamental transactions, such as RMAP WRITE and RMAP READ. The RMAP WRITE transaction structure is then modeled in Real-Time Logic (RTL) and Presburger Arithmetic representations, and the associated constraint graph and safety analysis are provided. Finally, it is suggested that the RTL method can be useful for protocol evaluation and for providing recommendations for further protocol evolutions.
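As a rough illustration of how RTL-style timing properties reduce to arithmetic constraints over event occurrence times, the sketch below checks a deadline between two hypothetical RMAP WRITE events against a recorded trace; the event names, timestamps, and the 200 µs bound are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (illustrative, not the paper's model): Real-Time Logic reasons
# about occurrence times of events, e.g. @(write_request, i) for the i-th
# occurrence. Deadline properties reduce to linear (Presburger-style)
# inequalities over these timestamps, which can be checked against a trace.
def check_deadline(trace, start_event, end_event, deadline_us):
    """Verify that every start_event occurrence is followed by end_event
    within deadline_us microseconds (hypothetical event names and bound)."""
    starts = [t for (e, t) in trace if e == start_event]
    ends = [t for (e, t) in trace if e == end_event]
    for i, t_start in enumerate(starts):
        if i >= len(ends) or ends[i] - t_start > deadline_us:
            return False, i
    return True, None

# A toy trace of RMAP WRITE transactions: (event, timestamp in microseconds).
trace = [("write_request", 0), ("write_reply", 180),
         ("write_request", 1000), ("write_reply", 1230)]
ok, violation = check_deadline(trace, "write_request", "write_reply", 200)
print("all deadlines met" if ok else f"deadline violated at occurrence {violation}")
```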
Structural Safety of a Hubble Space Telescope Science Instrument
NASA Technical Reports Server (NTRS)
Lou, M. C.; Brent, D. N.
1993-01-01
This paper gives an overview of safety requirements related to structural design and verification of payloads to be launched and/or retrieved by the Space Shuttle. To demonstrate the general approach used to implement these requirements in the development of a typical Shuttle payload, the Wide Field/Planetary Camera II, a second-generation science instrument currently being developed by the Jet Propulsion Laboratory (JPL) for the Hubble Space Telescope, is used as an example. In addition to verification of strength and dynamic characteristics, special emphasis is placed upon the fracture control implementation process, including parts classification and fracture control acceptability.
Person-centered endoscopy safety checklist: Development, implementation, and evaluation
Dubois, Hanna; Schmidt, Peter T; Creutzfeldt, Johan; Bergenmar, Mia
2017-01-01
AIM To describe the development and implementation of a person-centered endoscopy safety checklist and to evaluate the effects of a “checklist intervention”. METHODS The checklist, based on previously published safety checklists, was developed and locally adapted, taking patient safety aspects into consideration and using a person-centered approach. This novel checklist was introduced to the staff of an endoscopy unit at a Stockholm University Hospital during half-day seminars and team training sessions. Structured observations of the endoscopy team’s performance were conducted before and after the introduction of the checklist. In addition, questionnaires focusing on patient participation, collaboration climate, and patient safety issues were collected from patients and staff. RESULTS A person-centered safety checklist was developed and introduced by a multi-professional group in the endoscopy unit. A statistically significant increase in accurate patient identity verification by the physicians was noted (from 0% at baseline to 87% after 10 mo, P < 0.001), and remained high among nurses (93% at baseline vs 96% after 10 mo, P = nonsignificant). Observations indicated that the professional staff made frequent attempts to use the checklist, but compliance was suboptimal: All items in the observed nurse-led “summaries” were included in 56% of these interactions, and physicians participated by directly facing the patient in 50% of the interactions. On the questionnaires administered to the staff, items regarding collaboration and the importance of patient participation were rated more highly after the introduction of the checklist, but this did not result in statistical significance (P = 0.07/P = 0.08). The patients rated almost all items as very high both before and after the introduction of the checklist; hence, no statistical difference was noted. CONCLUSION The intervention led to increased patient identity verification by physicians - a patient safety improvement. Clear evidence of enhanced person-centeredness or team work was not found. PMID:29358869
2016-01-14
hyperproperty and a liveness hyperproperty. A verification technique for safety hyperproperties is given and is shown to generalize prior techniques for... liveness properties are affiliated with specific verification methods. An analogous theory for security policies would be appealing. The fact that security... verified by using invariance arguments. Our verification methodology generalizes prior work on using invariance arguments to verify information-flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. L. Sharp; R. T. McCracken
The Advanced Test Reactor (ATR) is a pressurized light-water reactor with a design thermal power of 250 MW. The principal function of the ATR is to provide a high neutron flux for testing reactor fuels and other materials. The reactor also provides other irradiation services such as radioisotope production. The ATR and its support facilities are located at the Test Reactor Area of the Idaho National Engineering and Environmental Laboratory (INEEL). An audit conducted by the Department of Energy's Office of Independent Oversight and Performance Assurance (DOE OA) raised concerns that design conditions at the ATR were not adequately analyzed in the safety analysis and that legacy design basis management practices had the potential to further impact safe operation of the facility. The concerns identified by the audit team, and issues raised during additional reviews performed by ATR safety analysts, were evaluated through the unreviewed safety question process, resulting in shutdown of the ATR for more than three months while these concerns were resolved. Past management of the ATR safety basis, relative to facility design basis management and change control, led to concerns that discrepancies in the safety basis may have developed. Although not required by DOE orders or regulations, not performing design basis verification in conjunction with development of the 10 CFR 830 Subpart B upgraded safety basis allowed these potential weaknesses to be carried forward. Configuration management and a clear definition of the existing facility design basis have a direct relation to developing and maintaining a high quality safety basis which properly identifies and mitigates all hazards and postulated accident conditions. These relations and the impact of past safety basis management practices have been reviewed in order to identify lessons learned from the safety basis upgrade process and appropriate actions to resolve possible concerns with respect to the current ATR safety basis. The need for a design basis reconstitution program for the ATR has been identified, along with the use of sound configuration management principles, in order to support safe and efficient facility operation.
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Dynamic analysis methods for detecting anomalies in asynchronously interacting systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Akshat; Solis, John Hector; Matschke, Benjamin
2014-01-01
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
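The first approach lends itself to a compact illustration: a run-time monitor that randomly samples trace records from the implementation and checks a design-level invariant, triggering a fail-safe action on any deviation. The sketch below is a simplified, hypothetical rendering of that idea; the invariant, trace format, and sampling rate are assumptions, not taken from the paper.

```python
# Minimal sketch of a run-time verification monitor in the spirit of the first
# approach described above: it randomly samples trace records from the running
# implementation and checks a design-level invariant, triggering a fail-safe
# action if any deviation is detected.
import random

def invariant_holds(record):
    # Hypothetical design-level property: a grant is only issued after a request.
    return not (record["granted"] and not record["requested"])

def monitor(trace_source, sample_rate=0.2, on_violation=lambda r: None):
    for record in trace_source:
        if random.random() < sample_rate:        # random query of the implementation
            if not invariant_holds(record):
                on_violation(record)              # pre-specified fail-safe / notification
                return False
    return True

trace = [{"requested": True, "granted": True},
         {"requested": False, "granted": False},
         {"requested": False, "granted": True}]   # a deviation from the design
ok = monitor(iter(trace), sample_rate=1.0,
             on_violation=lambda r: print("fail-safe triggered:", r))
print("no deviation observed" if ok else "deviation detected")
```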
Ares I-X Range Safety Simulation Verification and Analysis Independent Validation and Verification
NASA Technical Reports Server (NTRS)
Merry, Carl M.; Tarpley, Ashley F.; Craig, A. Scott; Tartabini, Paul V.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle
2011-01-01
NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. To obtain approval for launch, a range safety final flight data package was generated to meet the data requirements defined in the Air Force Space Command Manual 91-710 Volume 2. The delivery included products such as a nominal trajectory, trajectory envelopes, stage disposal data and footprints, and a malfunction turn analysis. The Air Force's 45th Space Wing uses these products to ensure public and launch area safety. Due to the criticality of these data, an independent validation and verification effort was undertaken to ensure data quality and adherence to requirements. As a result, the product package was delivered with the confidence that independent organizations using separate simulation software generated data that met the range requirements and yielded consistent results. This document captures the Ares I-X final flight data package verification and validation analysis, including the methodology used to validate and verify simulation inputs, execution, and results, and presents lessons learned during the process.
V&V Plan for FPGA-based ESF-CCS Using System Engineering Approach.
NASA Astrophysics Data System (ADS)
Maerani, Restu; Mayaka, Joyce; El Akrat, Mohamed; Cheon, Jung Jae
2018-02-01
Instrumentation and Control (I&C) systems play an important role in maintaining the safety of Nuclear Power Plant (NPP) operation. However, most current I&C safety systems are based on Programmable Logic Controller (PLC) hardware, which is difficult to verify and validate and is susceptible to software common cause failure. Therefore, a plan is needed for the replacement of PLC-based safety systems, such as the Engineered Safety Feature - Component Control System (ESF-CCS), with Field Programmable Gate Arrays (FPGAs). By using a systems engineering approach, which ensures traceability in every phase of the life cycle, from system requirements and design implementation to verification and validation, the system development is guaranteed to be in line with the regulatory requirements. The verification process will ensure that customer and stakeholder needs are satisfied in a high-quality, trustworthy, cost-efficient, and schedule-compliant manner throughout the system's entire life cycle. The benefit of the V&V plan is to ensure that the FPGA-based ESF-CCS is built correctly and that measured performance indicators provide positive feedback that "we are doing the right thing" during the re-engineering of the FPGA-based ESF-CCS.
Verification and Implementation of Operations Safety Controls for Flight Missions
NASA Technical Reports Server (NTRS)
Smalls, James R.; Jones, Cheryl L.; Carrier, Alicia S.
2010-01-01
Several engineering disciplines contribute to flight missions, including reliability, supportability, quality assurance, human factors, risk management, and safety. Safety is an extremely important engineering specialty within NASA, and any consequence involving loss of crew is considered a catastrophic event. Safety is not difficult to achieve when it is properly integrated at the beginning of each space systems project and at the start of mission planning. The key is to ensure proper handling of safety verification throughout each flight/mission phase. Today, Safety and Mission Assurance (S&MA) operations engineers continue to conduct these flight product reviews across all open flight products. These reviews help ensure that each mission is accomplished with safety requirements and controls heavily embedded in the applicable flight products. Most importantly, the S&MA operations engineers are required to look for important design and operations controls so that safety is strictly adhered to and reflected in the final flight product.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.
Final Report - Regulatory Considerations for Adaptive Systems
NASA Technical Reports Server (NTRS)
Wilkinson, Chris; Lynch, Jonathan; Bharadwaj, Raj
2013-01-01
This report documents the findings of a preliminary research study into new approaches to the software design assurance of adaptive systems. We suggest a methodology to overcome the software validation and verification difficulties posed by the underlying assumption of non-adaptive software in the requirements-based testing verification methods in RTCA/DO-178B and C. An analysis of the relevant RTCA/DO-178B and C objectives is presented, showing the reasons for the difficulties that arise in showing satisfaction of the objectives and suggesting additional means by which they could be satisfied. We suggest that the software design assurance problem for adaptive systems is principally one of developing correct and complete high-level requirements and system-level constraints that define the necessary system functional and safety properties to assure the safe use of adaptive systems. We show how analytical techniques such as model-based design, mathematical modeling, and formal or formal-like methods can be used to validate the high-level functional and safety requirements, establish necessary constraints, and provide the verification evidence for the satisfaction of requirements and constraints that supplements conventional testing. Finally, the report identifies the follow-on research topics needed to implement this methodology.
Formal Verification of Complex Systems based on SysML Functional Requirements
2014-12-23
Formal Verification of Complex Systems based on SysML Functional Requirements Hoda Mehrpouyan, Irem Y. Tumer, Chris Hoyle, Dimitra Giannakopoulou... requirements for design of complex engineered systems. The proposed approach combines a SysML modeling approach to document and structure safety requirements... methods and tools to support the integration of safety into the design solution. 2.1. SysML for Complex Engineered Systems. Traditional methods and tools
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsai, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber-physical systems, where control software interacts with physical processes. In this work, we present a framework for fully automated safety verification of Stateflow models. Our approach is two-fold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open source toolbox that can be integrated into the existing Mathworks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
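As a toy illustration of the second step, once a chart has been compiled into a state machine with explicit transitions, a safety property of the form "an unsafe configuration is never reached" can be decided by exploring the reachable state set. The sketch below uses a hypothetical machine and property and is not the paper's logic-based engine.

```python
# Toy sketch (not the paper's toolchain): once a Stateflow chart has been
# compiled into a state machine with explicit transitions, a safety property
# of the form "the unsafe state is never reached" can be checked by simple
# breadth-first exploration of the reachable states. States, events, and the
# property are illustrative assumptions.
from collections import deque

transitions = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "door_open"): "running_door_open",   # hypothetical hazardous configuration
    ("running_door_open", "door_close"): "running",
}
events = ["start", "stop", "door_open", "door_close"]
unsafe_states = {"running_door_open"}

def reachable(initial="idle"):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for e in events:
            t = transitions.get((s, e))
            if t is not None and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

violations = reachable() & unsafe_states
print("safety property holds" if not violations else f"unsafe states reachable: {violations}")
```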
Loads and Structural Dynamics Requirements for Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Schultz, Kenneth P.
2011-01-01
The purpose of this document is to establish requirements relating to the loads and structural dynamics technical discipline for NASA and commercial spaceflight launch vehicle and spacecraft hardware. Requirements are defined for the development of structural design loads and recommendations regarding methodologies and practices for the conduct of load analyses are provided. As such, this document represents an implementation of NASA STD-5002. Requirements are also defined for structural mathematical model development and verification to ensure sufficient accuracy of predicted responses. Finally, requirements for model/data delivery and exchange are specified to facilitate interactions between Launch Vehicle Providers (LVPs), Spacecraft Providers (SCPs), and the NASA Technical Authority (TA) providing insight/oversight and serving in the Independent Verification and Validation role. In addition to the analysis-related requirements described above, a set of requirements are established concerning coupling phenomena or other interaction between structural dynamics and aerodynamic environments or control or propulsion system elements. Such requirements may reasonably be considered structure or control system design criteria, since good engineering practice dictates consideration of and/or elimination of the identified conditions in the development of those subsystems. The requirements are included here, however, to ensure that such considerations are captured in the design space for launch vehicles (LV), spacecraft (SC) and the Launch Abort Vehicle (LAV). The requirements in this document are focused on analyses to be performed to develop data needed to support structural verification. As described in JSC 65828, Structural Design Requirements and Factors of Safety for Spaceflight Hardware, implementation of the structural verification requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The requirement for and expected contents of the SVP are defined in JSC 65828. The SVP may also document unique verifications that meet or exceed these requirements with Technical Authority approval.
NASA Technical Reports Server (NTRS)
Keeley, J. T.
1976-01-01
Guidelines and general requirements applicable to the development of instrument flight hardware intended for use on the GSFC Shuttle Scientific Payloads Program are given. Criteria, guidelines, and an organized approach to specifying the appropriate level of requirements for each instrument are included, permitting development at minimum cost while still assuring crew safety. It is recognized that the instruments for these payloads will encompass wide ranges of complexity, cost, development risk, and safety hazards. The flexibility required to adapt the controls, documentation, and verification requirements to the specific instrument is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, R M
1987-05-01
Guidelines have been developed to evaluate the seismic adequacy of the anchorage of various classes of electrical and mechanical equipment in nuclear power plants covered by NRC Unresolved Safety Issue A-46. The guidelines consist of screening tables that give the seismic anchorage capacity as a function of key equipment and anchorage fasteners, inspection checklists for field verification of anchorage adequacy, and provisions for outliers that can be used to further investigate anchorages that cannot be verified in the field. The screening tables are based on an analysis of the anchorage forces developed by common equipment types and on strength criteria to quantify the holding power of anchor bolts and welds. The strength criteria for expansion anchor bolts were developed by collecting and analyzing a large quantity of test data.
30 CFR 250.911 - If my platform is subject to the Platform Verification Program, what must I do?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 2 2012-07-01 2012-07-01 false If my platform is subject to the Platform Verification Program, what must I do? 250.911 Section 250.911 Mineral Resources BUREAU OF SAFETY AND... CONTINENTAL SHELF Platforms and Structures Platform Verification Program § 250.911 If my platform is subject...
30 CFR 250.911 - If my platform is subject to the Platform Verification Program, what must I do?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 2 2013-07-01 2013-07-01 false If my platform is subject to the Platform Verification Program, what must I do? 250.911 Section 250.911 Mineral Resources BUREAU OF SAFETY AND... CONTINENTAL SHELF Platforms and Structures Platform Verification Program § 250.911 If my platform is subject...
30 CFR 250.911 - If my platform is subject to the Platform Verification Program, what must I do?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 2 2014-07-01 2014-07-01 false If my platform is subject to the Platform Verification Program, what must I do? 250.911 Section 250.911 Mineral Resources BUREAU OF SAFETY AND... CONTINENTAL SHELF Platforms and Structures Platform Verification Program § 250.911 If my platform is subject...
NASA Technical Reports Server (NTRS)
Bernstein, Karen S.; Kujala, Rod; Fogt, Vince; Romine, Paul
2011-01-01
This document establishes the structural requirements for human-rated spaceflight hardware including launch vehicles, spacecraft and payloads. These requirements are applicable to Government Furnished Equipment activities as well as all related contractor, subcontractor and commercial efforts. These requirements are not imposed on systems other than human-rated spacecraft, such as ground test articles, but may be tailored for use in specific cases where it is prudent to do so such as for personnel safety or when assets are at risk. The requirements in this document are focused on design rather than verification. Implementation of the requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The SVP may also document unique verifications that meet or exceed these requirements with NASA Technical Authority approval.
Validation and Verification (V&V) of Safety-Critical Systems Operating Under Off-Nominal Conditions
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
2012-01-01
Loss of control (LOC) remains one of the largest contributors to aircraft fatal accidents worldwide. Aircraft LOC accidents are highly complex in that they can result from numerous causal and contributing factors acting alone or more often in combination. Hence, there is no single intervention strategy to prevent these accidents. Research is underway at the National Aeronautics and Space Administration (NASA) in the development of advanced onboard system technologies for preventing or recovering from loss of vehicle control and for assuring safe operation under off-nominal conditions associated with aircraft LOC accidents. The transition of these technologies into the commercial fleet will require their extensive validation and verification (V&V) and ultimate certification. The V&V of complex integrated systems poses highly significant technical challenges and is the subject of a parallel research effort at NASA. This chapter summarizes the V&V problem and presents a proposed process that could be applied to complex integrated safety-critical systems developed for preventing aircraft LOC accidents. A summary of recent research accomplishments in this effort is referenced.
RELAP5-3D Resolution of Known Restart/Backup Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesina, George L.; Anderson, Nolan A.
2014-12-01
The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, and then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
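The regression-comparison step can be illustrated in a hedged, simplified form: double-precision sums of key variables are read from a "verification file" for two consecutive code versions, and any differing sum is flagged. The file format, file names, and tolerance below are assumptions, not the RELAP5-3D test system's actual conventions.

```python
# Minimal sketch (illustrative, not the RELAP5-3D test system): sequential
# verification records double-precision sums of key variables in a
# "verification file" for each test case, and regression testing compares
# those sums between consecutive code versions to detect any change in
# calculated results.
def read_verification_file(path):
    """Assumed format, one record per line: '<case> <variable> <double-precision sum>'."""
    sums = {}
    with open(path) as f:
        for line in f:
            case, var, value = line.split()
            sums[(case, var)] = float(value)
    return sums

def regression_compare(baseline, candidate, tol=0.0):
    """Report any key-variable sum that differs beyond tol (default: exact match)."""
    diffs = []
    for key, base_val in baseline.items():
        new_val = candidate.get(key)
        if new_val is None or abs(new_val - base_val) > tol:
            diffs.append((key, base_val, new_val))
    return diffs

# Usage (hypothetical file names):
# diffs = regression_compare(read_verification_file("v4.3.4.verif"),
#                            read_verification_file("v4.3.5.verif"))
# print("no change in calculations" if not diffs else diffs)
```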
WE-D-BRA-04: Online 3D EPID-Based Dose Verification for Optimum Patient Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spreeuw, H; Rozendaal, R; Olaciregui-Ruiz, I
2015-06-15
Purpose: To develop an online 3D dose verification tool based on EPID transit dosimetry to ensure optimum patient safety in radiotherapy treatments. Methods: A new software package was developed which processes EPID portal images online using a back-projection algorithm for the 3D dose reconstruction. The package processes portal images faster than the acquisition rate of the portal imager (∼ 2.5 fps). After a portal image is acquired, the software searches for "hot spots" in the reconstructed 3D dose distribution. A hot spot is defined in this study as a 4 cm³ cube where the average cumulative reconstructed dose exceeds the average total planned dose by at least 20% and 50 cGy. If a hot spot is detected, an alert is generated, resulting in a linac halt. The software has been tested by irradiating an Alderson phantom after introducing various types of serious delivery errors. Results: In our first experiment the Alderson phantom was irradiated with two arcs from a 6 MV VMAT H&N treatment having a large leaf position error or a large monitor unit error. For both arcs and both errors the linac was halted before dose delivery was completed. When no error was introduced, the linac was not halted. The complete processing of a single portal frame, including hot spot detection, takes about 220 ms on a dual hexacore Intel Xeon X5650 CPU at 2.66 GHz. Conclusion: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for various kinds of gross delivery errors. The detection of hot spots was proven to be effective for the timely detection of these errors. Current work is focused on hot spot detection criteria for various treatment sites and the introduction of a clinical pilot program with online verification of hypo-fractionated (lung) treatments.
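The hot-spot criterion stated above translates naturally into a small grid scan. The following Python sketch (illustrative only; voxel size, grid dimensions, and dose values are assumptions, not the clinical software) flags any cube whose mean reconstructed dose exceeds the mean planned dose by both 20% and 50 cGy.

```python
# Minimal sketch (illustrative, not the clinical software): scan a reconstructed
# 3D dose grid for "hot spots" as defined above -- a small cube whose mean
# cumulative reconstructed dose exceeds the mean planned dose by both 20% and
# 50 cGy. Grid resolution and cube size in voxels are assumptions.
import numpy as np

def find_hot_spots(reconstructed, planned, cube_vox=4, rel_tol=0.20, abs_tol_cgy=50.0):
    """Return corner indices of cubes that violate the hot-spot criterion."""
    hot = []
    nx, ny, nz = reconstructed.shape
    for i in range(0, nx - cube_vox + 1):
        for j in range(0, ny - cube_vox + 1):
            for k in range(0, nz - cube_vox + 1):
                sl = (slice(i, i + cube_vox),
                      slice(j, j + cube_vox),
                      slice(k, k + cube_vox))
                recon_mean = reconstructed[sl].mean()
                plan_mean = planned[sl].mean()
                if (recon_mean > plan_mean * (1 + rel_tol)
                        and recon_mean > plan_mean + abs_tol_cgy):
                    hot.append((i, j, k))
    return hot

# Toy usage: a 20x20x20 grid with one deliberately overdosed region.
planned = np.full((20, 20, 20), 200.0)          # planned dose in cGy
reconstructed = planned.copy()
reconstructed[5:9, 5:9, 5:9] += 60.0            # +30% and +60 cGy -> hot spot
print("halt beam" if find_hot_spots(reconstructed, planned) else "continue delivery")
```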
Structural Deterministic Safety Factors Selection Criteria and Verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index was derived from the combined safety factors, and the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
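For readers unfamiliar with safety indices, the sketch below shows the standard first-order relation between a safety index and reliability for normally distributed resistance and applied stress; this is textbook structural reliability used here for illustration, not necessarily the exact index derived in the paper, and the numbers are invented.

```python
# Hedged sketch using the standard first-order reliability result for normally
# distributed resistance R and applied stress S (textbook relation, not
# necessarily the paper's derivation):
#   beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2),  reliability = Phi(beta)
from math import sqrt, erf

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    return (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)

def reliability(beta):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(beta / sqrt(2.0)))

# Illustrative numbers: mean resistance 100 ksi (5 ksi std. dev.), mean applied
# stress 60 ksi (6 ksi std. dev.); deterministic factor of safety ~ 100/60 = 1.67.
beta = safety_index(100.0, 5.0, 60.0, 6.0)
print(f"safety index = {beta:.2f}, reliability = {reliability(beta):.5f}")
```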
NASA Technical Reports Server (NTRS)
Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola
2005-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
Static test induced loads verification beyond elastic limit
NASA Technical Reports Server (NTRS)
Verderaime, V.; Harrington, F.
1996-01-01
Increasing demands for reliable and least-cost high-performance aerostructures are pressing design analyses, materials, and manufacturing processes to new and narrowly experienced performance and verification technologies. This study assessed the adequacy of current experimental verification of the traditional binding ultimate safety factor which covers rare events in which no statistical design data exist. Because large high-performance structures are inherently very flexible, boundary rotations and deflections under externally applied loads approaching fracture may distort their transmission and unknowingly accept submarginal structures or prematurely fracturing reliable ones. A technique was developed, using measured strains from back-to-back surface mounted gauges, to analyze, define, and monitor induced moments and plane forces through progressive material changes from total-elastic to total-inelastic zones within the structural element cross section. Deviations from specified test loads are identified by the consecutively changing ratios of moment-to-axial load.
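The back-to-back gauge idea can be illustrated with simple beam-theory relations: the average of the two surface strains gives the membrane (axial) component and half their difference gives the bending component, from which induced axial force, moment, and their ratio can be tracked as load increases. The sketch below is a hedged illustration under linear-elastic assumptions with invented section properties, not the authors' monitoring procedure.

```python
# Minimal sketch of the measurement idea described above (assumed simple
# beam-theory relations, not the authors' procedure): strains from back-to-back
# surface-mounted gauges separate into a membrane (axial) part and a bending
# part, from which induced axial force, moment, and their ratio can be
# monitored as the test load increases.
def membrane_and_bending(eps_front, eps_back):
    eps_membrane = 0.5 * (eps_front + eps_back)     # axial (mid-plane) strain
    eps_bending = 0.5 * (eps_front - eps_back)      # bending strain at the surface
    return eps_membrane, eps_bending

def section_loads(eps_front, eps_back, E, area, section_modulus):
    """Elastic-range estimate of axial force and bending moment (assumptions:
    linear elasticity, symmetric section, gauges on opposite surfaces)."""
    eps_m, eps_b = membrane_and_bending(eps_front, eps_back)
    axial_force = E * eps_m * area
    moment = E * eps_b * section_modulus
    return axial_force, moment

# Illustrative aluminum member: E = 10.0e6 psi, A = 2.0 in^2, S = 1.5 in^3.
P, M = section_loads(eps_front=1500e-6, eps_back=500e-6,
                     E=10.0e6, area=2.0, section_modulus=1.5)
print(f"axial force = {P:.0f} lb, moment = {M:.0f} in-lb, M/P = {M/P:.3f} in")
```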
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola
2004-01-01
Recent research has shown that adaptive neural-network-based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network within a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of the real-time adaptive neural networks used in recent adaptive flight control systems and to evaluate the performance of the online-trained neural networks. The tools will help in certification by the FAA and in the successful deployment of neural-network-based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller, and the results are discussed.
Stratway: A Modular Approach to Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Hagen, George E.; Butler, Ricky W.; Maddalon, Jeffrey M.
2011-01-01
In this paper we introduce Stratway, a modular approach to finding long-term strategic resolutions to conflicts between aircraft. The modular approach provides both advantages and disadvantages. Our primary concern is to investigate the implications on the verification of safety-critical properties of a strategic resolution algorithm. By partitioning the problem into verifiable modules, much stronger verification claims can be established. Since strategic resolution involves searching for solutions over an enormous state space, Stratway, like most similar algorithms, searches these spaces by applying heuristics, which present especially difficult verification challenges. An advantage of a modular approach is that it makes a clear distinction between the resolution function and the trajectory generation function. This allows the resolution computation to be independent of any particular vehicle. The Stratway algorithm was developed in both Java and C++ and is available through an open source license. Additionally, there is a visualization application that is helpful when analyzing and quickly creating conflict scenarios.
9 CFR 416.17 - Agency verification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Agency verification. 416.17 Section 416.17 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... (d) Direct observation or testing to assess the sanitary conditions in the establishment. ...
9 CFR 416.17 - Agency verification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Agency verification. 416.17 Section 416.17 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... (d) Direct observation or testing to assess the sanitary conditions in the establishment. ...
9 CFR 416.17 - Agency verification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Agency verification. 416.17 Section 416.17 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... (d) Direct observation or testing to assess the sanitary conditions in the establishment. ...
9 CFR 416.17 - Agency verification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Agency verification. 416.17 Section 416.17 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... (d) Direct observation or testing to assess the sanitary conditions in the establishment. ...
9 CFR 416.17 - Agency verification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Agency verification. 416.17 Section 416.17 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... (d) Direct observation or testing to assess the sanitary conditions in the establishment. ...
Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.
Baldwin, Abigail; Rodriguez, Elizabeth S
2016-02-01
The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors, which may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and the details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.
NASA Technical Reports Server (NTRS)
Mackall, D. A.; Ishmael, S. D.; Regenie, V. A.
1983-01-01
Qualification considerations for assuring the safety of a life-critical digital flight control system include four major areas: systems interactions, verification, validation, and configuration control. The AFTI/F-16 design, development, and qualification illustrate these considerations. In this paper, qualification concepts, procedures, and methodologies are discussed and illustrated through specific examples.
Photovoltaic system criteria documents. Volume 5: Safety criteria for photovoltaic applications
NASA Technical Reports Server (NTRS)
Koenig, John C.; Billitti, Joseph W.; Tallon, John M.
1979-01-01
A methodology is described for determining potential safety hazards involved in the construction and operation of photovoltaic power systems, and guidelines are provided for the implementation of safety considerations in the specification, design, and operation of photovoltaic systems. Safety verification procedures for use in solar photovoltaic systems are established.
Fuzzy Logic Controller Stability Analysis Using a Satisfiability Modulo Theories Approach
NASA Technical Reports Server (NTRS)
Arnett, Timothy; Cook, Brandon; Clark, Matthew A.; Rattan, Kuldip
2017-01-01
While many widely accepted methods and techniques exist for validation and verification of traditional controllers, at this time no solutions have been accepted for Fuzzy Logic Controllers (FLCs). Due to the highly nonlinear nature of such systems, and the fact that developing a valid FLC does not require a mathematical model of the system, it is quite difficult to use conventional techniques to prove controller stability. Since safety-critical systems must be tested and verified to work as expected for all possible circumstances, the fact that FLCs cannot be tested to such requirements poses limitations on the applications for this technology. Therefore, alternative methods for the verification and validation of FLCs need to be explored. In this study, a novel approach using formal verification methods to ensure the stability of an FLC is proposed. Main research challenges include the specification of requirements for a complex system, the conversion of a traditional FLC to a piecewise polynomial representation, and the use of a formal verification tool in a nonlinear solution space. Using the proposed architecture, the Fuzzy Logic Controller was found to always generate negative feedback, though the analysis was inconclusive for Lyapunov stability.
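A minimal sketch of the verification step, assuming the z3-solver Python package and a hypothetical piecewise-linear surrogate in place of a full fuzzy controller: the controller surface is encoded as guarded equalities and the solver is asked whether any input can make the output share the sign of the error, which would violate the negative-feedback property.

```python
# Hedged sketch (assumes the z3-solver package; the piecewise-linear surrogate
# below is hypothetical, not the paper's fuzzy controller): encode the
# controller output as guarded equalities and ask an SMT solver whether any
# input can make the output act in the same direction as the error, i.e.,
# violate the negative-feedback property.
from z3 import Real, Solver, And, Implies, sat

e = Real('e')   # tracking error fed to the controller
u = Real('u')   # controller output

s = Solver()
# Piecewise (here piecewise-linear) representation of the controller surface.
s.add(Implies(e < -1, u == 2))
s.add(Implies(And(e >= -1, e <= 1), u == -2 * e))
s.add(Implies(e > 1, u == -2))
# Negative feedback is violated if error and output ever share the same sign.
s.add(e * u > 0)

if s.check() == sat:
    print("negative-feedback property violated, e.g.:", s.model())
else:
    print("negative-feedback property holds for this piecewise representation")
```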
Formal Verification Toolkit for Requirements and Early Design Stages
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Miller, Sheena Judson
2011-01-01
Efficient flight software development from natural language requirements needs an effective way to test designs earlier in the software design cycle. A method to automatically derive logical safety constraints and the design state space from natural language requirements is described. The constraints can then be checked using a logical consistency checker and also be used in a symbolic model checker to verify the early design of the system. This method was used to verify a hybrid control design for the suit ports on NASA Johnson Space Center's Space Exploration Vehicle against safety requirements.
9 CFR 417.8 - Agency verification.
Code of Federal Regulations, 2014 CFR
2014-01-01
....8 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.8 Agency verification. FSIS will verify the... plan or system; (f) Direct observation or measurement at a CCP; (g) Sample collection and analysis to...
9 CFR 417.8 - Agency verification.
Code of Federal Regulations, 2012 CFR
2012-01-01
....8 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.8 Agency verification. FSIS will verify the... plan or system; (f) Direct observation or measurement at a CCP; (g) Sample collection and analysis to...
9 CFR 417.8 - Agency verification.
Code of Federal Regulations, 2013 CFR
2013-01-01
....8 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.8 Agency verification. FSIS will verify the... plan or system; (f) Direct observation or measurement at a CCP; (g) Sample collection and analysis to...
9 CFR 417.8 - Agency verification.
Code of Federal Regulations, 2010 CFR
2010-01-01
....8 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.8 Agency verification. FSIS will verify the... plan or system; (f) Direct observation or measurement at a CCP; (g) Sample collection and analysis to...
9 CFR 417.8 - Agency verification.
Code of Federal Regulations, 2011 CFR
2011-01-01
....8 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.8 Agency verification. FSIS will verify the... plan or system; (f) Direct observation or measurement at a CCP; (g) Sample collection and analysis to...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... non-federal community, including the academic, commercial, and public safety sectors, to implement a..., Verification, Demonstration and Trials: Technical Workshop II on Coordinating Federal Government/Private Sector Spectrum Innovation Testing Needs AGENCY: The National Coordination Office (NCO) for Networking and...
Verification and Validation of Flight-Critical Systems
NASA Technical Reports Server (NTRS)
Brat, Guillaume
2010-01-01
For the first time in many years, the NASA budget presented to Congress calls for a focused effort on the verification and validation (V&V) of complex systems. This is mostly motivated by the results of the VVFCS (V&V of Flight-Critical Systems) study, which should materialize as a concrete effort under the Aviation Safety program. This talk will present the results of the study, from requirements coming out of discussions with the FAA and the Joint Planning and Development Office (JPDO) to a technical plan addressing the issue, and its proposed current and future V&V research agenda, which will be addressed by NASA Ames, Langley, and Dryden as well as external partners through NASA Research Announcements (NRA) calls. This agenda calls for pushing V&V earlier in the life cycle and taking advantage of formal methods to increase safety and reduce the cost of V&V. I will present the ongoing research work (especially the four main technical areas: Safety Assurance, Distributed Systems, Authority and Autonomy, and Software-Intensive Systems), possible extensions, and how VVFCS plans on grounding the research in realistic examples, including an intended V&V test-bench based on an Integrated Modular Avionics (IMA) architecture and hosted by Dryden.
Code of Federal Regulations, 2011 CFR
2011-07-01
... certificate or a Safety Management Certificate; (3) Periodic audits including— (i) An annual verification... safety management audit and when is it required to be completed? 96.320 Section 96.320 Navigation and... SAFE OPERATION OF VESSELS AND SAFETY MANAGEMENT SYSTEMS How Will Safety Management Systems Be...
Autonomy Software: V&V Challenges and Characteristics
NASA Technical Reports Server (NTRS)
Schumann, Johann; Visser, Willem
2006-01-01
The successful operation of unmanned air vehicles requires software with a high degree of autonomy. Only if high-level functions can be carried out without human control and intervention can complex missions in a changing and potentially unknown environment be carried out successfully. Autonomy software is highly mission- and safety-critical: failures caused by flaws in the software can not only jeopardize the mission but could also endanger human life (e.g., a crash of a UAV in a densely populated area). Due to its large size, high complexity, and use of specialized algorithms (planner, constraint-solver, etc.), autonomy software poses specific challenges for its verification, validation, and certification. We have carried out a survey among researchers and scientists at NASA to study these issues. In this paper, we will present major results of this study, discussing the broad spectrum of notions and characteristics of autonomy software and its challenges for design and development. A main focus of this survey was to evaluate verification and validation (V&V) issues and challenges compared to the development of "traditional" safety-critical software. We will discuss important issues in V&V of autonomous software and advanced V&V tools which can help to mitigate software risks. Results of this survey will help to identify and understand safety concerns in autonomy software and will lead to improved strategies for mitigation of these risks.
18 CFR 12.13 - Verification form.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Verification form. 12.13 Section 12.13 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT SAFETY OF WATER POWER PROJECTS AND PROJECT WORKS...
9 CFR 417.4 - Validation, Verification, Reassessment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... analysis. Any establishment that does not have a HACCP plan because a hazard analysis has revealed no food.... 417.4 Section 417.4 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... ACT HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.4 Validation, Verification...
9 CFR 417.4 - Validation, Verification, Reassessment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... analysis. Any establishment that does not have a HACCP plan because a hazard analysis has revealed no food.... 417.4 Section 417.4 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... ACT HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.4 Validation, Verification...
Formal Foundations for Hierarchical Safety Cases
NASA Technical Reports Server (NTRS)
Denney, Ewen; Pai, Ganesh; Whiteside, Iain
2015-01-01
Safety cases are increasingly being required in many safety-critical domains to assure, using structured argumentation and evidence, that a system is acceptably safe. However, comprehensive system-wide safety arguments present appreciable challenges to develop, understand, evaluate, and manage, partly due to the volume of information that they aggregate, such as the results of hazard analysis, requirements analysis, testing, formal verification, and other engineering activities. Previously, we have proposed hierarchical safety cases, hicases, to aid the comprehension of safety case argument structures. In this paper, we build on a formal notion of safety case to formalise the use of hierarchy as a structuring technique, and show that hicases satisfy several desirable properties. Our aim is to provide a formal, theoretical foundation for safety cases. In particular, we believe that tools for high assurance systems should be granted similar assurance to the systems to which they are applied. To this end, we formally specify and prove the correctness of key operations for constructing and managing hicases, which gives the specification for implementing hicases in AdvoCATE, our toolset for safety case automation. We motivate and explain the theory with the help of a simple running example, extracted from a real safety case and developed using AdvoCATE.
The VATES-Diamond as a Verifier's Best Friend
NASA Astrophysics Data System (ADS)
Glesner, Sabine; Bartels, Björn; Göthel, Thomas; Kleine, Moritz
Within a model-based software engineering process it needs to be ensured that properties of abstract specifications are preserved by transformations down to executable code. This is even more important in the area of safety-critical real-time systems where additionally non-functional properties are crucial. In the VATES project, we develop formal methods for the construction and verification of embedded systems. We follow a novel approach that allows us to formally relate abstract process algebraic specifications to their implementation in a compiler intermediate representation. The idea is to extract a low-level process algebraic description from the intermediate code and to formally relate it to previously developed abstract specifications. We apply this approach to a case study from the area of real-time operating systems and show that this approach has the potential to seamlessly integrate modeling, implementation, transformation and verification stages of embedded system development.
A Framework for Performing Verification and Validation in Reuse Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
Maintaining ocular safety with light exposure, focusing on devices for optogenetic stimulation
Yan, Boyuan; Vakulenko, Maksim; Min, Seok-Hong; Hauswirth, William W.; Nirenberg, Sheila
2016-01-01
Optogenetics methods are rapidly being developed as therapeutic tools for treating neurological diseases, in particular, retinal degenerative diseases. A critical component of the development is testing the safety of the light stimulation used to activate the optogenetic proteins. While the stimulation needs to be sufficient to produce neural responses in the targeted retinal cell class, it also needs to be below photochemical and photothermal limits known to cause ocular damage. The maximal permissible exposure is determined by a variety of factors, including wavelength, exposure duration, visual angle, pupil size, pulse width, pulse pattern, and repetition frequency. In this paper, we develop utilities to systematically and efficiently assess the contributions of these parameters in relation to the limits, following directly from the 2014 American National Standards Institute (ANSI) standard. We also provide an array of stimulus protocols that fall within the bounds of both safety and effectiveness. Additional verification of safety is provided with a case study in rats using one of these protocols. PMID:26882975
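A back-of-the-envelope sketch of the kind of bookkeeping such utilities perform: given a source power, pupil diameter, and an exposure limit supplied by the caller, compute the average irradiance and compare it to the derated limit. The formula, the safety margin, and the example numbers are illustrative assumptions; the actual ANSI limits depend on wavelength, duration, and visual angle and are not reproduced here.

```python
import math

def corneal_irradiance_w_per_cm2(power_w: float, pupil_diameter_mm: float) -> float:
    """Average irradiance over the pupil area, in W/cm^2 (illustrative calculation)."""
    radius_cm = (pupil_diameter_mm / 10.0) / 2.0
    pupil_area_cm2 = math.pi * radius_cm ** 2
    return power_w / pupil_area_cm2

def within_limit(power_w: float, pupil_diameter_mm: float, mpe_w_per_cm2: float,
                 safety_margin: float = 0.5) -> bool:
    """Accept a stimulus only if it sits below a caller-supplied maximum
    permissible exposure (MPE), derated by a safety margin."""
    return corneal_irradiance_w_per_cm2(power_w, pupil_diameter_mm) <= safety_margin * mpe_w_per_cm2

# Example: 0.2 mW through a 3 mm pupil against a hypothetical 1 mW/cm^2 limit.
print(corneal_irradiance_w_per_cm2(0.2e-3, 3.0))   # ~2.8e-3 W/cm^2
print(within_limit(0.2e-3, 3.0, 1.0e-3))           # False: exceeds the derated limit
```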
Model-based engineering for medical-device software.
Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi
2010-01-01
This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. By using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for i) fast and efficient construction of executable device prototypes, ii) creation of a standard, reusable baseline software architecture for a particular device family, iii) formal verification of the design against safety requirements, and iv) creation of a safety framework that reduces verification costs for future versions of the device software.
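A minimal sketch of the kind of design-level safety property one might check on a PCA pump model: a bolus request must be rejected while the lockout interval is active. The toy state machine, the exhaustive simulation, and the property are assumptions for illustration, not the authors' model.

```python
class PCAPumpModel:
    """Toy model of a patient-controlled analgesia pump with a bolus lockout interval."""

    def __init__(self, lockout_minutes: int):
        self.lockout_minutes = lockout_minutes
        self.minutes_since_bolus = lockout_minutes  # start with the lockout expired

    def tick(self, minutes: int = 1) -> None:
        self.minutes_since_bolus += minutes

    def request_bolus(self) -> bool:
        """Deliver a bolus only if the lockout interval has elapsed (safety interlock)."""
        if self.minutes_since_bolus >= self.lockout_minutes:
            self.minutes_since_bolus = 0
            return True
        return False

def check_lockout_property(lockout: int, horizon: int) -> bool:
    """Simulate a button press every minute: no two deliveries may occur
    less than `lockout` minutes apart."""
    pump = PCAPumpModel(lockout)
    last_delivery = None
    for minute in range(horizon):
        if pump.request_bolus():
            if last_delivery is not None and minute - last_delivery < lockout:
                return False
            last_delivery = minute
        pump.tick()
    return True

print(check_lockout_property(lockout=10, horizon=120))  # True: the interlock holds in the model
```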
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2013-01-01
The NASA Engineering and Safety Center (NESC) received a request from the NASA Associate Administrator (AA) for the Human Exploration and Operations Mission Directorate (HEOMD) to quantitatively evaluate the individual performance of three light detection and ranging (LIDAR) rendezvous sensors flown as an orbiter development test objective on Space Transportation System (STS)-127, STS-133, STS-134, and STS-135. This document contains the outcome of the NESC assessment.
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2010 CFR
2010-10-01
... validation. The RSPP must require the identification of verification and validation methods for the... to be used in the verification and validation process, consistent with appendix C to this part. The... information. (3) If no action is taken on the petition within 180 days, the petition remains pending for...
Innovative safety valve selection techniques and data.
Miller, Curt; Bredemyer, Lindsey
2007-04-11
The new valve data resources and modeling tools that are available today are instrumental in verifying that safety levels are being met in both current installations and project designs. If the new ISA 84 functional safety practices are followed closely, good industry validated data used, and a user's maintenance integrity program strictly enforced, plants should feel confident that their design has been quantitatively reinforced. After 2 years of exhaustive reliability studies, there are now techniques and data available to address this safety system component deficiency. Everyone who has gone through the process of safety integrity level (SIL) verification (i.e. reliability math) will appreciate the progress made in this area. The benefits of these advancements are improved safety with lower lifecycle costs such as lower capital investment and/or longer testing intervals. This discussion will start with a review of the different valve, actuator, and solenoid/positioner combinations that can be used and their associated application restraints. Failure rate reliability studies (i.e. FMEDA) and data associated with the final combinations will then be discussed. Finally, the impact of the selections on each safety system's SIL verification will be reviewed.
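To illustrate the "reliability math" behind SIL verification, here is a minimal sketch using the common simplified approximation for a single (1oo1) final element, PFDavg ≈ λ_DU × TI / 2, mapped onto the IEC 61511/ISA 84 low-demand SIL bands. The failure-rate and test-interval values are placeholders, not data from the studies cited above.

```python
def pfd_avg_1oo1(lambda_du_per_hour: float, test_interval_hours: float) -> float:
    """Simplified average probability of failure on demand for a single element:
    PFDavg ~= lambda_DU * TI / 2 (ignores common cause, diagnostics, and repair time)."""
    return lambda_du_per_hour * test_interval_hours / 2.0

def sil_band(pfd_avg: float) -> int:
    """Map PFDavg to the low-demand SIL bands of IEC 61511 (0 = does not meet SIL 1)."""
    if 1e-5 <= pfd_avg < 1e-4:
        return 4
    if 1e-4 <= pfd_avg < 1e-3:
        return 3
    if 1e-3 <= pfd_avg < 1e-2:
        return 2
    if 1e-2 <= pfd_avg < 1e-1:
        return 1
    return 0

# Placeholder numbers: a valve/actuator/solenoid combination with
# lambda_DU = 2e-6 dangerous undetected failures per hour, proof-tested yearly.
pfd = pfd_avg_1oo1(2e-6, 8760)
print(f"PFDavg = {pfd:.2e}, meets SIL {sil_band(pfd)}")   # PFDavg = 8.76e-03, meets SIL 2
```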
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
Challenges in High-Assurance Runtime Verification
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.
2016-01-01
Safety-critical systems are growing more complex and becoming increasingly autonomous. Runtime Verification (RV) has the potential to provide protections when a system cannot be assured by conventional means, but only if the RV itself can be trusted. In this paper, we proffer a number of challenges to realizing high-assurance RV and illustrate how we have addressed them in our research. We argue that high-assurance RV provides a rich target for automated verification tools in hope of fostering closer collaboration among the communities.
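As a concrete, much simplified illustration of runtime verification, the sketch below monitors a stream of samples against a bounded-response property: once a reading exceeds a threshold, a mitigation flag must be raised within k steps. This is a generic monitor written for this summary, not the author's RV framework.

```python
from typing import Iterable, Tuple

def monitor_bounded_response(trace: Iterable[Tuple[float, bool]],
                             threshold: float, k: int) -> bool:
    """Check the property: whenever reading > threshold, `mitigated` becomes True
    within k subsequent samples. Returns False at the first violation."""
    pending = None                      # steps remaining to see a mitigation, or None
    for reading, mitigated in trace:
        if pending is not None:
            if mitigated:
                pending = None
            else:
                pending -= 1
                if pending < 0:
                    return False        # deadline missed: property violated
        if pending is None and reading > threshold and not mitigated:
            pending = k
    return True

good = [(1.0, False), (5.2, False), (5.3, False), (4.0, True), (1.0, False)]
bad  = [(1.0, False), (5.2, False), (5.3, False), (5.1, False), (5.0, False)]
print(monitor_bounded_response(good, threshold=5.0, k=2))   # True
print(monitor_bounded_response(bad,  threshold=5.0, k=2))   # False
```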
Verification of chemistry reference ranges using a simple method in sub-Saharan Africa
Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania
2016-01-01
Background: Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. Objectives: We report on an easy and cost-saving method to verify CRRs. Methods: Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Results: Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. Conclusion: To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory. PMID:28879112
Verification of chemistry reference ranges using a simple method in sub-Saharan Africa.
De Baetselier, Irith; Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania
2016-01-01
Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. We report on an easy and cost-saving method to verify CRRs. Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory.
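A minimal sketch of this style of reference-range verification: count how many results from clinically healthy participants fall outside the candidate range and flag the range for adjustment when the count exceeds an acceptance threshold. The specific acceptance rule used here (at most one outlier) and the numbers are assumptions for illustration; the studies above follow the Sigma Diagnostics procedure.

```python
from typing import Sequence, Tuple

def verify_reference_range(results: Sequence[float],
                           crr: Tuple[float, float],
                           max_outliers: int = 1) -> bool:
    """Return True if the candidate reference range passes verification,
    i.e. no more than `max_outliers` healthy-participant results fall outside it.
    The acceptance threshold is an illustrative assumption."""
    low, high = crr
    outliers = sum(1 for x in results if x < low or x > high)
    return outliers <= max_outliers

# Ten screening ALT results (U/L, made-up numbers) checked against a candidate range.
alt_results = [14, 22, 31, 18, 40, 27, 55, 19, 24, 33]
print(verify_reference_range(alt_results, crr=(7, 45)))   # True: only one value (55) is outside
```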
46 CFR 61.40-6 - Periodic safety tests.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Periodic safety tests. 61.40-6 Section 61.40-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-6 Periodic safety...
46 CFR 61.40-6 - Periodic safety tests.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Periodic safety tests. 61.40-6 Section 61.40-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-6 Periodic safety...
46 CFR 61.40-6 - Periodic safety tests.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Periodic safety tests. 61.40-6 Section 61.40-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-6 Periodic safety...
46 CFR 61.40-6 - Periodic safety tests.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Periodic safety tests. 61.40-6 Section 61.40-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-6 Periodic safety...
46 CFR 61.40-6 - Periodic safety tests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Periodic safety tests. 61.40-6 Section 61.40-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-6 Periodic safety...
Development of a Software Safety Process and a Case Study of Its Use
NASA Technical Reports Server (NTRS)
Knight, J. C.
1996-01-01
Research in the year covered by this reporting period has been primarily directed toward: continued development of mock-ups of computer screens for the operator of a digital reactor control system; development of a reactor simulation to permit testing of various elements of the control system; formal specification of user interfaces; fault-tree analysis including software; evaluation of formal verification techniques; and continued development of a software documentation system. Technical results relating to this grant and the remainder of the principal investigator's research program are contained in various reports and papers.
Guidance and Control Software Project Data - Volume 3: Verification Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the verification documents from the GCS project. Volume 3 contains four appendices: A. Software Verification Cases and Procedures for the Guidance and Control Software Project; B. Software Verification Results for the Pluto Implementation of the Guidance and Control Software; C. Review Records for the Pluto Implementation of the Guidance and Control Software; and D. Test Results Logs for the Pluto Implementation of the Guidance and Control Software.
Houck, Constance S; Deshpande, Jayant K; Flick, Randall P
2017-06-01
The Task Force for Children's Surgical Care, an ad-hoc multidisciplinary group of invited leaders in pediatric perioperative medicine, was assembled in May 2012 to consider approaches to optimize delivery of children's surgical care in today's competitive national healthcare environment. Over the subsequent 3 years, with support from the American College of Surgeons (ACS) and Children's Hospital Association (CHA), the group established principles regarding perioperative resource standards, quality improvement and safety processes, data collection, and verification that were used to develop an ACS-sponsored Children's Surgery Verification and Quality Improvement Program (ACS CSV). The voluntary ACS CSV was officially launched in January 2017 and more than 125 pediatric surgical programs have expressed interest in verification. ACS CSV-verified programs have specific requirements for pediatric anesthesia leadership, resources, and the availability of pediatric anesthesiologists or anesthesiologists with pediatric expertise to care for infants and young children. The present review outlines the history of the ACS CSV, key elements of the program, and the standards specific to pediatric anesthesiology. As with the pediatric trauma programs initiated more than 40 years ago, this program has the potential to significantly improve surgical care for infants and children in the United States and Canada.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaSalle, F.R.; Golbeg, P.R.; Chenault, D.M.
For reactor and nuclear facilities, both Title 10, Code of Federal Regulations, Part 50, and US Department of Energy Order 6430.1A require assessments of the interaction of non-Safety Class 1 piping and equipment with Safety Class 1 piping and equipment during a seismic event to maintain the safety function. The safety class systems of nuclear reactors or nuclear facilities are designed to the applicable American Society of Mechanical Engineers standards and Seismic Category 1 criteria that require rigorous analysis, construction, and quality assurance. Because non-safety class systems are generally designed to lesser standards and seismic criteria, they may become missiles during a safe shutdown earthquake. The resistance of piping, tubing, and equipment to seismically generated missiles is addressed in the paper. Gross plastic and local penetration failures are considered with applicable test verification. Missile types and seismic zones of influence are discussed. Field qualification data are also developed for missile evaluation.
NASA Technical Reports Server (NTRS)
Bhat, Biliyar N.
2008-01-01
Ares I Crew Launch Vehicle Upper Stage is designed and developed based on sound systems engineering principles. Systems Engineering starts with Concept of Operations and Mission requirements, which in turn determine the launch system architecture and its performance requirements. The Ares I-Upper Stage is designed and developed to meet these requirements. Designers depend on the support from materials, processes and manufacturing during the design, development and verification of subsystems and components. The requirements relative to reliability, safety, operability and availability are also dependent on materials availability, characterization, process maturation and vendor support. This paper discusses the roles and responsibilities of materials and manufacturing engineering during the various phases of Ares I Upper Stage development, including design and analysis, hardware development, test and verification. Emphasis is placed on how materials, processes and manufacturing support is integrated over the Upper Stage Project, both horizontally and vertically. In addition, the paper describes the approach used to ensure compliance with materials, processes, and manufacturing requirements during the project cycle, with focus on hardware systems design and development.
Software Safety Analysis of a Flight Guidance System
NASA Technical Reports Server (NTRS)
Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model-based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique, model checking, that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
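To make the model-checking step tangible, here is a minimal explicit-state reachability check over a toy mode-logic model: explore every state reachable from the initial state and confirm that none violates a stated safety property. The mode machine and property are invented for illustration and are far simpler than the FGS requirements model.

```python
from collections import deque

# Toy mode logic: states are (lateral_mode, vertical_mode); edges are mode-change events.
TRANSITIONS = {
    ("ROLL", "PITCH"): [("HDG", "PITCH"), ("ROLL", "ALT")],
    ("HDG", "PITCH"):  [("HDG", "ALT"), ("ROLL", "PITCH")],
    ("ROLL", "ALT"):   [("HDG", "ALT"), ("ROLL", "PITCH")],
    ("HDG", "ALT"):    [("ROLL", "PITCH")],
}

def safety_property(state) -> bool:
    """Example property: at least one mode is always active (never both 'OFF')."""
    return state != ("OFF", "OFF")

def check(initial) -> bool:
    """Breadth-first exploration of the reachable state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safety_property(state):
            return False                      # counterexample state reached
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

print(check(("ROLL", "PITCH")))   # True: the property holds in every reachable state
```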
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-06
... DEPARTMENT OF AGRICULTURE Food Safety and Inspection Service 9 CFR Parts 417 [Docket No. FSIS-2012... Verification Procedures AGENCY: Food Safety and Inspection Service, USDA. ACTION: Compliance with the HACCP system regulations and request for comments SUMMARY: The Food Safety and Inspection Service (FSIS) is...
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Steve
2011-01-01
The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: All plan verification systems for particle therapy are designed to do plan verification before treatment. However, the actual dose distributions during patient treatment are not known. This study develops an online 2D dose verification tool to check the daily dose delivery accuracy. Methods: A Siemens particle treatment system with a modulated scanning spot beam is used in our center. In order to do online dose verification, we made a program to reconstruct the delivered 2D dose distributions based on the daily treatment log files and depth dose distributions. In the log files we can get the focus size, position, and particle number for each spot. A gamma analysis is used to compare the reconstructed dose distributions with the dose distributions from the TPS to assess the daily dose delivery accuracy. To verify the dose reconstruction algorithm, we compared the reconstructed dose distributions to dose distributions measured using a PTW 729XDR ion chamber matrix for 13 real patient plans. Then we analyzed 100 treatment beams (58 carbon and 42 proton) for prostate, lung, ACC, NPC and chordoma patients. Results: For algorithm verification, the gamma passing rate was 97.95% for the 3%/3mm and 92.36% for the 2%/2mm criteria. For patient treatment analysis, the results were 97.7%±1.1% and 91.7%±2.5% for carbon and 89.9%±4.8% and 79.7%±7.7% for proton using 3%/3mm and 2%/2mm criteria, respectively. The reason for the lower passing rate for the proton beam is that the focus size deviations were larger than for the carbon beam. The average focus size deviations were −14.27% and −6.73% for proton and −5.26% and −0.93% for carbon in the x and y direction respectively. Conclusion: The verification software meets our requirements to check for daily dose delivery discrepancies. Such tools can enhance the current treatment plan and delivery verification processes and improve safety of clinical treatments.
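A simplified sketch of the gamma comparison referred to above, reduced to 1D dose profiles: for each measured point, search the reference profile for the minimum combined dose-difference/distance-to-agreement metric and count the fraction of points with gamma <= 1. The real tool works on 2D distributions reconstructed from log files; the profiles below are made up for illustration.

```python
import math
from typing import Sequence

def gamma_pass_rate(measured: Sequence[float], reference: Sequence[float],
                    spacing_mm: float, dose_tol: float = 0.03, dta_mm: float = 3.0) -> float:
    """1D global gamma analysis. dose_tol is a fraction of the reference maximum
    (e.g. 0.03 for 3%); dta_mm is the distance-to-agreement criterion."""
    d_max = max(reference)
    passed = 0
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dr in enumerate(reference):
            dose_term = (dm - dr) / (dose_tol * d_max)
            dist_term = (i - j) * spacing_mm / dta_mm
            best = min(best, math.hypot(dose_term, dist_term))
        if best <= 1.0:
            passed += 1
    return passed / len(measured)

reference = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
measured  = [0.0, 0.22, 0.63, 0.97, 0.58, 0.21, 0.0]
print(f"Gamma passing rate (3%/3mm): {gamma_pass_rate(measured, reference, spacing_mm=2.0):.1%}")
```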
ROVER : prototype roving verification van : transportation project summary
DOT National Transportation Integrated Search
1997-06-01
The purpose of this project is to verify the safety and legality of commercial vehicles at both fixed and mobile roadside sites, improving the efficiency, safety, and effectiveness of commercial vehicle operations through the use of timely, accurate ...
Proving autonomous vehicle and advanced driver assistance systems safety : final research report.
DOT National Transportation Integrated Search
2016-02-15
The main objective of this project was to provide technology for answering crucial safety and correctness questions about verification of autonomous vehicle and advanced driver assistance systems based on logic. In synergistic activities, we ha...
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background: Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective: The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods: The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results: Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions: Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
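For readers who want to reproduce the style of comparison reported above, the sketch below runs Fisher's exact test on one of the reported contingency tables (16 of 18 nurses making an error before the intervention vs 11 of 19 after). The counts are taken from the abstract; scipy is assumed to be available, and no claim is made that the two-sided default exactly reproduces the published p-value.

```python
from scipy.stats import fisher_exact

# Contingency table for syringe-volume verification errors (from the abstract):
# rows = [pre-intervention, post-intervention], columns = [error, no error].
table = [[16, 18 - 16],
         [11, 19 - 11]]

odds_ratio, p_value = fisher_exact(table)   # two-sided by default
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```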
Astronaut tool development: An orbital replaceable unit-portable handhold
NASA Technical Reports Server (NTRS)
Redmon, John W., Jr.
1989-01-01
A tool to be used during astronaut Extra-Vehicular Activity (EVA) replacement of spent or defective electrical/electronic component boxes is described. The generation of requirements and design philosophies are detailed, as well as specifics relating to mechanical development, interface verifications, testing, and astronaut feedback. Findings are presented in the form of: (1) a design which is universally applicable to spacecraft component replacement, and (2) guidelines that the designer of orbital replacement units might incorporate to enhance spacecraft on-orbit maintainability and EVA mission safety.
Unmanned Vehicle Material Flammability Test
NASA Technical Reports Server (NTRS)
Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian;
2012-01-01
Microgravity fire behaviour remains poorly understood and a significant risk for spaceflight. An experiment is under development that will provide the first real opportunity to examine this issue, focussing on two objectives: a) Flame Spread. b) Material Flammability. This experiment has been shown to be feasible on both ESA's ATV and Orbital Science's Cygnus vehicles with the Cygnus as the current base-line carrier. An international topical team has been formed to develop concepts for that experiment and support its implementation: a) Pressure Rise prediction. b) Sample Material Selection. This experiment would be a landmark for spacecraft fire safety with the data and subsequent analysis providing much needed verification of spacecraft fire safety protocols for the crews of future exploration vehicles and habitats.
Tiger Team Assessment of the Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-01
The purpose of the safety and health assessment was to determine the effectiveness of representative safety and health programs at the Los Alamos National Laboratory (LANL). Within the safety and health programs at LANL, performance was assessed in the following technical areas: Organization and Administration, Quality Verification, Operations, Maintenance, Training and Certification, Auxiliary Systems, Emergency Preparedness, Technical Support, Packaging and Transportation, Nuclear Criticality Safety, Security/Safety Interface, Experimental Activities, Site/Facility Safety Review, Radiological Protection, Personnel Protection, Worker Safety and Health (OSHA) Compliance, Fire Protection, Aviation Safety, Explosives Safety, Natural Phenomena, and Medical Services.
NASA Astrophysics Data System (ADS)
Rankin, Drew J.; Jiang, Jin
2011-04-01
Verification and validation (V&V) of safety control system quality and performance is required prior to installing control system hardware within nuclear power plants (NPPs). Thus, the objective of the hardware-in-the-loop (HIL) platform introduced in this paper is to verify the functionality of these safety control systems. The developed platform provides a flexible simulated testing environment which enables synchronized coupling between the real and simulated world. Within the platform, National Instruments (NI) data acquisition (DAQ) hardware provides an interface between a programmable electronic system under test (SUT) and a simulation computer. Further, NI LabVIEW resides on this remote DAQ workstation for signal conversion and routing between Ethernet and standard industrial signals as well as for user interface. The platform is applied to the testing of a simplified implementation of Canadian Deuterium Uranium (CANDU) shutdown system no. 1 (SDS1) which monitors only the steam generator level of the simulated NPP. CANDU NPP simulation is performed on a Darlington NPP desktop training simulator provided by Ontario Power Generation (OPG). Simplified SDS1 logic is implemented on an Invensys Tricon v9 programmable logic controller (PLC) to test the performance of both the safety controller and the implemented logic. Prior to HIL simulation, platform availability of over 95% is achieved for the configuration used during the V&V of the PLC. Comparison of HIL simulation results to benchmark simulations shows good operational performance of the PLC following a postulated initiating event (PIE).
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations. Volume 2; Appendices
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
This NASA Engineering and Safety Center (NESC) assessment was established to develop a set of time histories for the flight behavior of increasingly complex example aerospacecraft that could be used to partially validate various simulation frameworks. The assessment was conducted by representatives from several NASA Centers and an open-source simulation project. This document contains details on models, implementation, and results.
Preliminary design review package on air flat plate collector for solar heating and cooling system
NASA Technical Reports Server (NTRS)
1977-01-01
Guidelines to be used in the development and fabrication of a prototype air flat plate collector subsystem containing 320 square feet (10 panels, 4 ft x 8 ft each) of collector area are presented. Topics discussed include: (1) verification plan; (2) thermal analysis; (3) safety hazard analysis; (4) drawing list; (5) special handling, installation and maintenance tools; (6) structural analysis; and (7) selected drawings.
NASA Technical Reports Server (NTRS)
Oishi, Meeko; Tomlin, Claire; Degani, Asaf
2003-01-01
Human interaction with complex hybrid systems involves the user, the automation's discrete mode logic, and the underlying continuous dynamics of the physical system. Often the user-interface of such systems displays a reduced set of information about the entire system. In safety-critical systems, how can we identify user-interface designs which do not have adequate information, or which may confuse the user? Here we describe a methodology, based on hybrid system analysis, to verify that a user-interface contains information necessary to safely complete a desired procedure or task. Verification within a hybrid framework allows us to account for the continuous dynamics underlying the simple, discrete representations displayed to the user. We provide two examples: a car traveling through a yellow light at an intersection and an aircraft autopilot in a landing/go-around maneuver. The examples demonstrate the general nature of this methodology, which is applicable to hybrid systems (not fully automated) which have operational constraints we can pose in terms of safety. This methodology differs from existing work in hybrid system verification in that we directly account for the user's interactions with the system.
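The car/yellow-light example reduces to a simple reachability question that can be sketched directly: given speed, distance to the intersection, the yellow-phase duration, and maximum braking, is at least one safe option (stop before the line, or clear before red) guaranteed? The purely kinematic model and the numbers below are illustrative assumptions, not the authors' hybrid-system formulation.

```python
def can_stop(speed_mps: float, distance_m: float, max_decel_mps2: float) -> bool:
    """True if braking at max_decel brings the car to rest before the stop line."""
    stopping_distance = speed_mps ** 2 / (2.0 * max_decel_mps2)
    return stopping_distance <= distance_m

def can_clear(speed_mps: float, distance_m: float, intersection_m: float,
              yellow_s: float) -> bool:
    """True if, holding current speed, the car exits the intersection before red."""
    return speed_mps * yellow_s >= distance_m + intersection_m

def interface_is_safe(speed_mps: float, distance_m: float, intersection_m: float,
                      yellow_s: float, max_decel_mps2: float) -> bool:
    """The displayed advice is adequate only if at least one safe option exists."""
    return can_stop(speed_mps, distance_m, max_decel_mps2) or \
           can_clear(speed_mps, distance_m, intersection_m, yellow_s)

# 15 m/s, 30 m from the stop line, 20 m wide intersection, 3 s of yellow, 4 m/s^2 braking.
print(interface_is_safe(15.0, 30.0, 20.0, 3.0, 4.0))   # True: stopping is feasible (28.1 m needed)
```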
DOE Office of Scientific and Technical Information (OSTI.GOV)
Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.
2011-02-01
This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Luckring, James M.; Morrison, Joseph H.; Blattnig, Steve R.; Green, Lawrence L.; Tripathi, Ram K.
2007-01-01
The National Aeronautics and Space Administration (NASA) recently issued an interim version of the Standard for Models and Simulations (M&S Standard) [1]. The action to develop the M&S Standard was identified in an internal assessment [2] of agency-wide changes needed in the wake of the Columbia Accident [3]. The primary goal of this standard is to ensure that the credibility of M&S results is properly conveyed to those making decisions affecting human safety or mission success criteria. The secondary goal is to assure that the credibility of the results from models and simulations meets the project requirements (for credibility). This presentation explains the motivation and key aspects of the M&S Standard, with a special focus on the requirements for verification, validation and uncertainty quantification. Some pilot applications of this standard to computational fluid dynamics applications will be provided as illustrations. The authors of this paper are the members of the team that developed the initial three drafts of the standard, the last of which benefited from extensive comments from most of the NASA Centers. The current version (number 4) incorporates modifications made by a team representing 9 of the 10 NASA Centers. A permanent version of the M&S Standard is expected by December 2007. The scope of the M&S Standard is confined to those uses of M&S that support program and project decisions that may affect human safety or mission success criteria. Such decisions occur, in decreasing order of importance, in the operations, the test & evaluation, and the design & analysis phases. Requirements are placed on (1) program and project management, (2) models, (3) simulations and analyses, (4) verification, validation and uncertainty quantification (VV&UQ), (5) recommended practices, (6) training, (7) credibility assessment, and (8) reporting results to decision makers. A key component of (7) and (8) is the use of a Credibility Assessment Scale, some of the details of which were developed in consultation with William Oberkampf, David Peercy and Timothy Trocano of Sandia National Laboratories. The focus of most of the requirements, including those for VV&UQ, is on the documentation of what was done and the reporting, using the Credibility Assessment Scale, of the level of rigor that was followed. The aspects of one option for the Credibility Assessment Scale are (1) code verification, (2) solution verification, (3) validation, (4) predictive capability, (5) technical review, (6) process control, and (7) operator and analyst qualification.
Verified compilation of Concurrent Managed Languages
2017-11-01
... designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these ideas to prove the correctness of a state-of-the-art concurrent garbage collector. Subject terms: program verification, compiler design. Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs.
2007 Beyond SBIR Phase II: Bringing Technology Edge to the Warfighter
2007-08-23
Topic areas: systems trade-off analysis and optimization; verification and validation; on-board diagnostics and self-healing; security and anti-tampering; rapid ... verification; safety and reliability analysis of flight and mission critical systems; model-based monitoring and self-healing; autonomic computing; network intrusion detection and prevention; anti-tampering and trust.
Applying Formal Verification Techniques to Ambient Assisted Living Systems
NASA Astrophysics Data System (ADS)
Benghazi, Kawtar; Visitación Hurtado, María; Rodríguez, María Luisa; Noguera, Manuel
This paper presents a verification approach based on timed traces semantics and MEDISTAM-RT [1] to check the fulfillment of non-functional requirements, such as timeliness and safety, and assure the correct functioning of the Ambient Assisted Living (AAL) systems. We validate this approach by its application to an Emergency Assistance System for monitoring people suffering from cardiac alteration with syncope.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Code of Federal Regulations, 2012 CFR
2012-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.; Schumann, Johann; Guenther, Kurt; Bosworth, John
2006-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable autonomous flight control and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments [1-2]. At the present time, however, it is unknown how adaptive algorithms can be routinely verified, validated, and certified for use in safety-critical applications. Rigorous methods for adaptive software verification and validation must be developed to ensure that the control software functions as required and is highly safe and reliable. A large gap appears to exist between the point at which control system designers feel the verification process is complete, and when FAA certification officials agree it is complete. Certification of adaptive flight control software verification is complicated by the use of learning algorithms (e.g., neural networks) and degrees of system non-determinism. Of course, analytical efforts must be made in the verification process to place guarantees on learning algorithm stability, rate of convergence, and convergence accuracy. However, to satisfy FAA certification requirements, it must be demonstrated that the adaptive flight control system is also able to fail and still allow the aircraft to be flown safely or to land, while at the same time providing a means of crew notification of the (impending) failure. It was for this purpose that the NASA Ames Confidence Tool was developed [3]. This paper presents the Confidence Tool as a means of providing in-flight software assurance monitoring of an adaptive flight control system. The paper will present the data obtained from flight testing the tool on a specially modified F-15 aircraft designed to simulate loss of flight control surfaces.
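A minimal sketch of the kind of in-flight assurance monitoring described: track the adaptive element's recent prediction error and raise a crew alert when a confidence score derived from that error drops below a threshold. The scoring function, window, and threshold are illustrative assumptions and are not the NASA Ames Confidence Tool.

```python
from collections import deque

class ConfidenceMonitor:
    """Tracks recent prediction error of an adaptive (learning) element and
    converts it into a 0..1 confidence score; alerts when confidence is low."""

    def __init__(self, window: int = 20, error_scale: float = 1.0, alert_below: float = 0.5):
        self.errors = deque(maxlen=window)
        self.error_scale = error_scale
        self.alert_below = alert_below

    def update(self, predicted: float, observed: float) -> float:
        self.errors.append(abs(predicted - observed))
        mean_error = sum(self.errors) / len(self.errors)
        # Simple illustrative mapping: confidence decays as mean error grows.
        return 1.0 / (1.0 + mean_error / self.error_scale)

    def needs_alert(self, confidence: float) -> bool:
        return confidence < self.alert_below

monitor = ConfidenceMonitor(error_scale=0.2)
for predicted, observed in [(1.0, 1.05), (1.1, 1.1), (1.2, 1.9), (1.3, 2.4)]:
    c = monitor.update(predicted, observed)
    print(f"confidence={c:.2f}, alert={monitor.needs_alert(c)}")
```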
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Walsh, Timothy J.; Beltran, Chris J.; Stoker, Joshua B.; Mundy, Daniel W.; Parry, Mark D.; Bues, Martin; Fatyga, Mirek
2016-04-01
The use of radiation therapy for the treatment of cancer has been carried out clinically since the late 1800's. Early on however, it was discovered that a radiation dose sufficient to destroy cancer cells can also cause severe injury to surrounding healthy tissue. Radiation oncologists continually strive to find the perfect balance between a dose high enough to destroy the cancer and one that avoids damage to healthy organs. Spot scanning or "pencil beam" proton radiotherapy offers another option to improve on this. Unlike traditional photon therapy, proton beams stop in the target tissue, thus better sparing all organs beyond the targeted tumor. In addition, the beams are far narrower and thus can be more precisely "painted" onto the tumor, avoiding exposure to surrounding healthy tissue. To safely treat patients with proton beam radiotherapy, dose verification should be carried out for each plan prior to treatment. Proton dose verification systems are not currently commercially available so the Department of Radiation Oncology at the Mayo Clinic developed its own, called DOSeCHECK, which offers two distinct dose simulation methods: GPU-based Monte Carlo and CPU-based analytical. The three major components of the system include the web-based user interface, the Linux-based dose verification simulation engines, and the supporting services and components. The architecture integrates multiple applications, libraries, platforms, programming languages, and communication protocols and was successfully deployed in time for Mayo Clinic's first proton beam therapy patient. Having a simple, efficient application for dose verification greatly reduces staff workload and provides additional quality assurance, ultimately improving patient safety.
Principles and Benefits of Explicitly Designed Medical Device Safety Architecture.
Larson, Brian R; Jones, Paul; Zhang, Yi; Hatcliff, John
The complexity of medical devices and the processes by which they are developed pose considerable challenges to producing safe designs and regulatory submissions that are amenable to effective reviews. Designing an appropriate and clearly documented architecture can be an important step in addressing this complexity. Best practices in medical device design embrace the notion of a safety architecture organized around distinct operation and safety requirements. By explicitly separating many safety-related monitoring and mitigation functions from operational functionality, the aspects of a device most critical to safety can be localized into a smaller and simpler safety subsystem, thereby enabling easier verification and more effective reviews of claims that causes of hazardous situations are detected and handled properly. This article defines medical device safety architecture, describes its purpose and philosophy, and provides an example. Although many of the presented concepts may be familiar to those with experience in realization of safety-critical systems, this article aims to distill the essence of the approach and provide practical guidance that can potentially improve the quality of device designs and regulatory submissions.
Report on the formal specification and partial verification of the VIPER microprocessor
NASA Technical Reports Server (NTRS)
Brock, Bishop; Hunt, Warren A., Jr.
1991-01-01
The formal specification and partial verification of the VIPER microprocessor is reviewed. The VIPER microprocessor was designed by RSRE, Malvern, England, for safety critical computing applications (e.g., aircraft, reactor control, medical instruments, armaments). The VIPER was carefully specified and partially verified in an attempt to provide a microprocessor with completely predictable operating characteristics. The specification of VIPER is divided into several levels of abstraction, from a gate-level description up to an instruction execution model. Although the consistency between certain levels was demonstrated with mechanically-assisted mathematical proof, the formal verification of VIPER was never completed.
A fully-implicit high-order system thermal-hydraulics model for advanced non-LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
An advanced system analysis tool is being developed for advanced reactor safety analysis. This paper describes the underlying physics and numerical models used in the code, including the governing equations, the stabilization schemes, the high-order spatial and temporal discretization schemes, and the Jacobian Free Newton Krylov solution method. The effects of the spatial and temporal discretization schemes are investigated. Additionally, a series of verification test problems are presented to confirm the high-order schemes. Furthermore, it is demonstrated that the developed system thermal-hydraulics model can be strictly verified with the theoretical convergence rates, and that it performs very well for a wide range of flow problems with high accuracy, efficiency, and minimal numerical diffusion.
A fully-implicit high-order system thermal-hydraulics model for advanced non-LWR safety analyses
Hu, Rui
2016-11-19
An advanced system analysis tool is being developed for advanced reactor safety analysis. This paper describes the underlying physics and numerical models used in the code, including the governing equations, the stabilization schemes, the high-order spatial and temporal discretization schemes, and the Jacobian Free Newton Krylov solution method. The effects of the spatial and temporal discretization schemes are investigated. Additionally, a series of verification test problems are presented to confirm the high-order schemes. Furthermore, it is demonstrated that the developed system thermal-hydraulics model can be strictly verified with the theoretical convergence rates, and that it performs very well for a wide range of flow problems with high accuracy, efficiency, and minimal numerical diffusion.
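Verification against "theoretical convergence rates" is usually demonstrated by computing the observed order of accuracy from errors on successively refined grids, p = ln(e_coarse/e_fine) / ln(r). A short sketch of that calculation with made-up error values, not results from the code described above:

```python
import math

def observed_order(error_coarse: float, error_fine: float, refinement_ratio: float) -> float:
    """Observed order of accuracy from errors on two grids differing by `refinement_ratio`:
    p = ln(e_coarse / e_fine) / ln(r)."""
    return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

# Made-up L2 errors from a manufactured-solution study on grids refined by a factor of 2.
errors = [4.0e-3, 1.02e-3, 2.6e-4]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print(f"observed order ~= {observed_order(e_coarse, e_fine, 2.0):.2f}")
# Values near 2 would confirm a second-order scheme; higher-order schemes are checked the same way.
```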
Verification Failures: What to Do When Things Go Wrong
NASA Astrophysics Data System (ADS)
Bertacco, Valeria
Every integrated circuit is released with latent bugs. The damage and risk implied by an escaped bug range from almost imperceptible to potential tragedy; unfortunately it is impossible to discern within this range before a bug has been exposed and analyzed. While the past few decades have witnessed significant efforts to improve verification methodology for hardware systems, these efforts have been far outstripped by the massive complexity of modern digital designs, leading to product releases for which an ever smaller fraction of a system's states has been verified. The news of escaped bugs in large market designs and/or safety critical domains is alarming because of safety and cost implications (due to replacements, lawsuits, etc.).
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
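For concreteness, here is a minimal sketch of the first-order reliability calculation the abstract alludes to: with normally distributed strength R and stress S, the safety index is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and the reliability is Phi(beta). Treating the "uncertainty errors" as a flat deduction on beta, and the numbers themselves, are illustrative assumptions rather than the author's formulation.

```python
import math

def safety_index(mu_strength: float, sigma_strength: float,
                 mu_stress: float, sigma_stress: float) -> float:
    """First-order safety index for normally distributed strength R and stress S:
    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_strength - mu_stress) / math.sqrt(sigma_strength**2 + sigma_stress**2)

def reliability(beta: float) -> float:
    """Reliability = Phi(beta), the standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Example numbers (ksi): strength 60 +/- 3, applied stress 45 +/- 4.
beta = safety_index(60.0, 3.0, 45.0, 4.0)
# Illustrative assumption: reduce beta by an allowance for unquantified design uncertainty.
beta_adjusted = beta - 0.5
print(f"beta = {beta:.2f}, reliability = {reliability(beta):.5f}")
print(f"adjusted beta = {beta_adjusted:.2f}, reliability = {reliability(beta_adjusted):.5f}")
```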
Automating Nuclear-Safety-Related SQA Procedures with Custom Applications
Freels, James D.
2016-01-01
Nuclear safety-related procedures are rigorous for good reason. Small design mistakes can quickly turn into unwanted failures. Researchers at Oak Ridge National Laboratory worked with COMSOL to define a simulation app that automates the software quality assurance (SQA) verification process and provides results in less than 24 hours.
Independent verification and validation for Space Shuttle flight software
NASA Technical Reports Server (NTRS)
1992-01-01
The Committee for Review of Oversight Mechanisms for Space Shuttle Software was asked by the National Aeronautics and Space Administration's (NASA) Office of Space Flight to determine the need to continue independent verification and validation (IV&V) for Space Shuttle flight software. The Committee found that the current IV&V process is necessary to maintain NASA's stringent safety and quality requirements for man-rated vehicles. Therefore, the Committee does not support NASA's plan to eliminate funding for the IV&V effort in fiscal year 1993. The Committee believes that the Space Shuttle software development process is not adequate without IV&V and that elimination of IV&V as currently practiced will adversely affect the overall quality and safety of the software, both now and in the future. Furthermore, the Committee was told that no organization within NASA has the expertise or the manpower to replace the current IV&V function in a timely fashion, nor will building this expertise elsewhere necessarily reduce cost. Thus, the Committee does not recommend moving IV&V functions to other organizations within NASA unless the current IV&V is maintained for as long as it takes to build comparable expertise in the replacing organization.
NASA Technical Reports Server (NTRS)
Szatkowski, G. P.
1983-01-01
A computer simulation system has been developed for the Space Shuttle's advanced Centaur liquid fuel booster rocket, in order to conduct systems safety verification and flight operations training. This simulation utility is designed to analyze functional system behavior by integrating control avionics with mechanical and fluid elements, and is able to emulate any system operation, from simple relay logic to complex VLSI components, with wire-by-wire detail. A novel graphics data entry system offers a pseudo-wire wrap data base that can be easily updated. Visual subsystem operations can be selected and displayed in color on a six-monitor graphics processor. System timing and fault verification analyses are conducted by injecting component fault modes and min/max timing delays, and then observing system operation through a red line monitor.
Formal Verification of the AAMP-FV Microcode
NASA Technical Reports Server (NTRS)
Miller, Steven P.; Greve, David A.; Wilding, Matthew M.; Srivas, Mandayam
1999-01-01
This report describes the experiences of Collins Avionics & Communications and SRI International in formally specifying and verifying the microcode in a Rockwell proprietary microprocessor, the AAMP-FV, using the PVS verification system. This project built extensively on earlier experiences using PVS to verify the microcode in the AAMP5, a complex, pipelined microprocessor designed for use in avionics displays and global positioning systems. While the AAMP5 experiment demonstrated the technical feasibility of formal verification of microcode, the steep learning curve encountered left unanswered the question of whether it could be performed at reasonable cost. The AAMP-FV project was conducted to determine whether the experience gained on the AAMP5 project could be used to make formal verification of microcode cost effective for safety-critical and high volume devices.
GRC Payload Hazard Assessment: Supporting the STS-107 Accident Investigation
NASA Technical Reports Server (NTRS)
Schoren, William R.; Zampino, Edward J.
2004-01-01
A hazard assessment was conducted on the GRC managed payloads in support of a NASA Headquarters Code Q request to examine STS-107 payloads and determine if they were credible contributors to the Columbia accident. This assessment utilized each payload's Final Flight Safety Data Package for hazard identification. An applicability assessment was performed and most of the hazards were eliminated because they dealt with payload operations or crew interactions. A Fault Tree was developed for all the hazards deemed applicable and the safety verification documentation was reviewed for these applicable hazards. At the completion of this hazard assessment, it was concluded that none of the GRC managed payloads were credible contributors to the Columbia accident.
Software-Based Safety Systems in Space - Learning from other Domains
NASA Astrophysics Data System (ADS)
Klicker, M.; Putzer, H.
2012-01-01
Increasing complexity and new emerging capabilities for manned and unmanned missions have been the hallmark of the past decades of space exploration. One of the drivers in this process was the ever increasing use of software and software-intensive systems to implement system functions necessary to the capabilities needed. The course of technological evolution suggests that this development will continue well into the future with a number of challenges for the safety community, some of which shall be discussed in this paper. The current state of the art reveals a number of problems with developing and assessing safety-critical software, which explains the reluctance of the space community to rely on software-based safety measures to mitigate hazards. Among the reasons usually cited are a lack of trustworthy evidence of software integrity in all foreseeable situations and the difficulty of integrating software into the traditional safety analysis framework. Experience from other domains and recent developments in modern software development methodologies and verification techniques are analysed for their suitability for space systems, and an avionics architectural framework (see STANAG 4626) for the implementation of safety critical software is proposed. This is shown to create, among other features, the possibility of numerous degradation modes enhancing overall system safety and interoperability of computerized space systems. It also potentially simplifies international cooperation on a technical level by introducing a higher degree of compatibility. As software safety cannot be tested or argued into a system in hindsight, the development process and especially the architecture chosen are essential to establish safety properties for the software used to implement safety functions. The core of the safety argument revolves around the separation of different functions and software modules from each other by minimal coupling of functions and credible separation mechanisms in the architecture, combined with rigorous development methodologies for the software itself.
V&V Within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and Validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission-critical software. V&V is a systems engineering discipline that evaluates the software in a systems context, and is currently applied during the development of a specific application system. In order to bring the effectiveness of V&V to bear within reuse-based software engineering, V&V must be incorporated within the domain engineering process.
ESAS Deliverable PS 1.1.2.3: Customer Survey on Code Generations in Safety-Critical Applications
NASA Technical Reports Server (NTRS)
Schumann, Johann; Denney, Ewen
2006-01-01
Automated code generators (ACG) are tools that convert a (higher-level) model of a software (sub-)system into executable code without the necessity for a developer to actually implement the code. Although both commercially supported and in-house tools have been used in many industrial applications, little data exists on how these tools are used in safety-critical domains (e.g., spacecraft, aircraft, automotive, nuclear). The aims of the survey, therefore, were threefold: 1) to determine if code generation is primarily used as a tool for prototyping, including design exploration and simulation, or for flight/production code; 2) to determine the verification issues with code generators relating, in particular, to qualification and certification in safety-critical domains; and 3) to determine perceived gaps in functionality of existing tools.
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable-such as legacy and COTS software-and programs that use features-such as pointers, structures, and object-orientation-that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
Annual verifications--a tick-box exercise?
Walker, Gwen; Williams, David
2014-09-01
With the onus on healthcare providers and their staff to protect patients against all elements of 'avoidable harm' perhaps never greater, Gwen Walker, a highly experienced infection prevention control nurse specialist, and David Williams, MD of Approved Air, who has 30 years' experience in validation and verification of ventilation and ultraclean ventilation systems, examine changing requirements for, and trends in, operating theatre ventilation. Validation and verification reporting on such vital HVAC equipment should not, they argue, merely be viewed as a 'tick-box exercise'; it should instead 'comprehensively inform key stakeholders, and ultimately form part of clinical governance, thus protecting those ultimately named responsible for organisation-wide safety at Trust board level'.
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Testing of Hand-Held Mine Detection Systems
2015-01-08
ITOP 04-2-5208 for guidance on software testing. Testing software is necessary to ensure that safety is designed into the software algorithm, and that... sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact on the test design. Some examples of... overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test
Advanced Software V&V for Civil Aviation and Autonomy
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.
2017-01-01
With advances in high-performance computing platforms (e.g., advanced graphics processing units or multi-core processors), computationally-intensive software techniques such as the ones used in artificial intelligence or formal methods have provided us with an opportunity to further increase safety in the aviation industry. Some of these techniques have facilitated building safety at design time, like in aircraft engines or software verification and validation, and others can introduce safety benefits during operations as long as we adapt our processes. In this talk, I will present how NASA is taking advantage of these new software techniques to build in safety at design time through advanced software verification and validation, which can be applied earlier and earlier in the design life cycle and thus help also reduce the cost of aviation assurance. I will then show how run-time techniques (such as runtime assurance or data analytics) offer us a chance to catch even more complex problems, even in the face of changing and unpredictable environments. These new techniques will be extremely useful as our aviation systems become more complex and more autonomous.
NASA Astrophysics Data System (ADS)
Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.
2017-11-01
The verification of the FENIA finite element code on some problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of thermal, mechanical and hydrodynamical (THM) problems related to the functioning of deep geological repositories. Verification of the code for two analytical problems has been performed. The first one is a point heat source with exponential heat decrease, the second one a linear heat source with similar behaviour. Analytical solutions have been obtained by the authors. The problems have been chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed for several meshes with different resolution. Good convergence between analytical and numerical solutions was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for disposal of radioactive waste in rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are placed in vertical boreholes. The residual decay heat of the radioactive waste leads to heating of the containers, engineered safety barriers and host rock. Maximum temperatures and the corresponding times of their establishment have been determined.
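As a hedged sketch of the kind of analytical benchmark described above (an exponentially decaying point source in an infinite homogeneous medium; the authors' exact closed form is not reproduced here), the temperature rise follows from convolving the source power with the 3D heat-conduction Green's function,

\[
T(r,t) - T_0 \;=\; \int_0^{t}
\frac{Q_0\, e^{-\lambda \tau}}{\rho c\,\bigl(4\pi \alpha (t-\tau)\bigr)^{3/2}}
\exp\!\left(-\frac{r^{2}}{4\alpha (t-\tau)}\right)\, d\tau ,
\qquad \alpha = \frac{k}{\rho c},
\]

where \(Q_0\) is the initial source power and \(\lambda\) its decay constant. Comparing a finite element solution against such an integral at several mesh resolutions is a standard way to demonstrate the convergence reported above.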
Knowledge-based system verification and validation
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1990-01-01
The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBS's cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBS's, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBS's will be developed, and modification of the SSF to support these tools and methods will be addressed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CARTER, R.P.
1999-11-19
The U.S. Department of Energy (DOE) commits to accomplishing its mission safely. To ensure this objective is met, DOE issued DOE P 450.4, Safety Management System Policy, and incorporated safety management into the DOE Acquisition Regulations ([DEAR] 48 CFR 970.5204-2 and 90.5204-78). Integrated Safety Management (ISM) requires contractors to integrate safety into management and work practices at all levels so that missions are achieved while protecting the public, the worker, and the environment. The contractor is required to describe the Integrated Safety Management System (ISMS) to be used to implement the safety performance objective.
NASA Technical Reports Server (NTRS)
Burns, H. D.; Mitchell, M. A.; McMillian, J. H.; Farner, B. R.; Harper, S. A.; Peralta, S. F.; Lowrey, N. M.; Ross, H. R.; Juarez, A.
2015-01-01
Since the 1990's, NASA's rocket propulsion test facilities at Marshall Space Flight Center (MSFC) and Stennis Space Center (SSC) have used hydrochlorofluorocarbon-225 (HCFC-225), a Class II ozone-depleting substance, to safely clean and verify the cleanliness of large scale propulsion oxygen systems and associated test facilities. In 2012 through 2014, test laboratories at MSFC, SSC, and Johnson Space Center-White Sands Test Facility collaborated to seek out, test, and qualify an environmentally preferred replacement for HCFC-225. Candidate solvents were selected, a test plan was developed, and the products were tested for materials compatibility, oxygen compatibility, cleaning effectiveness, and suitability for use in cleanliness verification and field cleaning operations. Honeywell Solstice (trademark) Performance Fluid (trans-1-chloro-3,3,3-trifluoropropene) was selected to replace HCFC-225 at NASA's MSFC and SSC rocket propulsion test facilities.
The Application of V&V within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward
1996-01-01
Verification and Validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In reuse-based software engineering, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In order to bring the effectiveness of V&V to bear within reuse-based software engineering, V&V must be incorporated within the domain engineering process.
NASA Technical Reports Server (NTRS)
Bruce, Kevin R.
1989-01-01
An integrated autopilot/autothrottle was designed for flight test on the NASA TSRV B-737 aircraft. The system was designed using a total energy concept and is intended to achieve the following: (1) fuel efficiency by minimizing throttle activity; (2) low development and implementation costs by designing the control modes around a fixed inner loop design; and (3) maximum safety by preventing stall and engine overboost. The control law was designed initially using linear analysis; the system was developed using nonlinear simulations. All primary design requirements were satisfied.
VEG-01: Veggie Hardware Verification Testing
NASA Technical Reports Server (NTRS)
Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond
2013-01-01
The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware; microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.
Ares I-X Range Safety Flight Envelope Analysis
NASA Technical Reports Server (NTRS)
Starr, Brett R.; Olds, Aaron D.; Craig, Anthony S.
2011-01-01
Ares I-X was the first test flight of NASA's Constellation Program's Ares I Crew Launch Vehicle designed to provide manned access to low Earth orbit. As a one-time test flight, the Air Force's 45th Space Wing required a series of Range Safety analysis data products to be developed for the specified launch date and mission trajectory prior to granting flight approval on the Eastern Range. The range safety data package is required to ensure that the public, launch area, and launch complex personnel and resources are provided with an acceptable level of safety and that all aspects of prelaunch and launch operations adhere to applicable public laws. The analysis data products, defined in the Air Force Space Command Manual 91-710, Volume 2, consisted of a nominal trajectory, three sigma trajectory envelopes, stage impact footprints, acoustic intensity contours, trajectory turn angles resulting from potential vehicle malfunctions (including flight software failures), characterization of potential debris, and debris impact footprints. These data products were developed under the auspices of the Constellation Program's Launch Constellation Range Safety Panel and its Range Safety Trajectory Working Group with the intent of beginning the framework for the operational vehicle data products and providing programmatic review and oversight. A multi-center NASA team, in conjunction with the 45th Space Wing, collaborated within the Trajectory Working Group forum to define the data product development processes, performed the analyses necessary to generate the data products, and performed independent verification and validation of the data products. This paper outlines the Range Safety data requirements and provides an overview of the processes established to develop both the data products and the individual analyses used to develop the data products, and it summarizes the results of the analyses required for the Ares I-X launch.
Ares I-X Range Safety Analyses Overview
NASA Technical Reports Server (NTRS)
Starr, Brett R.; Gowan, John W., Jr.; Thompson, Brian G.; Tarpley, Ashley W.
2011-01-01
Ares I-X was the first test flight of NASA's Constellation Program's Ares I Crew Launch Vehicle designed to provide manned access to low Earth orbit. As a one-time test flight, the Air Force's 45th Space Wing required a series of Range Safety analysis data products to be developed for the specified launch date and mission trajectory prior to granting flight approval on the Eastern Range. The range safety data package is required to ensure that the public, launch area, and launch complex personnel and resources are provided with an acceptable level of safety and that all aspects of prelaunch and launch operations adhere to applicable public laws. The analysis data products, defined in the Air Force Space Command Manual 91-710, Volume 2, consisted of a nominal trajectory, three sigma trajectory envelopes, stage impact footprints, acoustic intensity contours, trajectory turn angles resulting from potential vehicle malfunctions (including flight software failures), characterization of potential debris, and debris impact footprints. These data products were developed under the auspices of the Constellation Program's Launch Constellation Range Safety Panel and its Range Safety Trajectory Working Group with the intent of beginning the framework for the operational vehicle data products and providing programmatic review and oversight. A multi-center NASA team, in conjunction with the 45th Space Wing, collaborated within the Trajectory Working Group forum to define the data product development processes, performed the analyses necessary to generate the data products, and performed independent verification and validation of the data products. This paper outlines the Range Safety data requirements and provides an overview of the processes established to develop both the data products and the individual analyses used to develop the data products, and it summarizes the results of the analyses required for the Ares I-X launch.
Sensor Based Framework for Secure Multimedia Communication in VANET
Rahim, Aneel; Khan, Zeeshan Shafi; Bin Muhaya, Fahad T.; Sher, Muhammad; Kim, Tai-Hoon
2010-01-01
Secure multimedia communication enhances the safety of passengers by providing visual pictures of accidents and danger situations. In this paper we propose a framework for secure multimedia communication in Vehicular Ad-Hoc Networks (VANETs). Our proposed framework is mainly divided into four components: redundant information, priority assignment, malicious data verification and malicious node verification. The proposed scheme has been validated with the help of the NS-2 network simulator and the Evalvid tool. PMID:22163462
Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations
NASA Technical Reports Server (NTRS)
Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.
2004-01-01
An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).
MESA: Message-Based System Analysis Using Runtime Verification
NASA Technical Reports Server (NTRS)
Shafiei, Nastaran; Tkachuk, Oksana; Mehlitz, Peter
2017-01-01
In this paper, we present a novel approach and framework for run-time verification of large, safety critical messaging systems. This work was motivated by verifying the System Wide Information Management (SWIM) project of the Federal Aviation Administration (FAA). SWIM provides live air traffic, site and weather data streams for the whole National Airspace System (NAS), which can easily amount to several hundred messages per second. Such safety critical systems cannot be instrumented; therefore, verification and monitoring have to happen using a nonintrusive approach, by connecting to a variety of network interfaces. Due to a large number of potential properties to check, the verification framework needs to support efficient formulation of properties with a suitable Domain Specific Language (DSL). Our approach is to utilize a distributed system that is geared towards connectivity and scalability and interface it at the message queue level to a powerful verification engine. We implemented our approach in the tool called MESA: Message-Based System Analysis, which leverages the open source projects RACE (Runtime for Airspace Concept Evaluation) and TraceContract. RACE is a platform for instantiating and running highly concurrent and distributed systems and enables connectivity to SWIM and scalability. TraceContract is a runtime verification tool that allows for checking traces against properties specified in a powerful DSL. We applied our approach to verify a SWIM service against several requirements. We found errors such as duplicate and out-of-order messages.
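The duplicate and out-of-order message errors mentioned above can be caught by a very simple trace monitor. The sketch below is a generic Python illustration with hypothetical field names ('msg_id', 'seq'); it is not the TraceContract DSL or MESA's actual API.

```python
# Minimal trace monitor: flags duplicate and out-of-order messages.
# 'msg_id' and 'seq' are hypothetical field names, not SWIM's actual schema.
def check_trace(messages):
    seen_ids = set()
    last_seq = None
    violations = []
    for msg in messages:
        if msg["msg_id"] in seen_ids:
            violations.append(("duplicate", msg))
        seen_ids.add(msg["msg_id"])
        if last_seq is not None and msg["seq"] < last_seq:
            violations.append(("out-of-order", msg))
        last_seq = msg["seq"] if last_seq is None else max(last_seq, msg["seq"])
    return violations

trace = [{"msg_id": "a", "seq": 1}, {"msg_id": "b", "seq": 3},
         {"msg_id": "b", "seq": 3}, {"msg_id": "c", "seq": 2}]
print(check_trace(trace))   # reports one duplicate and one out-of-order message
```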
NASA Langley's Formal Methods Research in Support of the Next Generation Air Transportation System
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Munoz, Cesar A.
2008-01-01
This talk will provide a brief introduction to the formal methods developed at NASA Langley and the National Institute for Aerospace (NIA) for air traffic management applications. NASA Langley's formal methods research supports the Interagency Joint Planning and Development Office (JPDO) effort to define and develop the 2025 Next Generation Air Transportation System (NGATS). The JPDO was created by the passage of the Vision 100 Century of Aviation Reauthorization Act in December 2003. The NGATS vision calls for a major transformation of the nation's air transportation system that will enable growth to 3 times the traffic of the current system. The transformation will require an unprecedented level of safety-critical automation used in complex procedural operations based on 4-dimensional (4D) trajectories that enable dynamic reconfiguration of airspace scalable to geographic and temporal demand. The goal of our formal methods research is to provide verification methods that can be used to ensure the safety of the NGATS system. Our work has focused on the safety assessment of concepts of operation and fundamental algorithms for conflict detection and resolution (CD&R) and self-spacing in the terminal area. Formal analysis of a concept of operations is a novel area of application of formal methods. Here one must establish that a system concept involving aircraft, pilots, and ground resources is safe. The formal analysis of algorithms is a more traditional endeavor. However, the formal analysis of ATM algorithms involves reasoning about the interaction of algorithmic logic and aircraft trajectories defined over an airspace. These trajectories are described using 2D and 3D vectors and are often constrained by trigonometric relations. Thus, in many cases it has been necessary to unload the full power of an advanced theorem prover. The verification challenge is to establish that the safety-critical algorithms produce valid solutions that are guaranteed to maintain separation under all possible scenarios. Current research has assumed perfect knowledge of the location of other aircraft in the vicinity so absolute guarantees are possible, but increasingly we are relaxing the assumptions to allow incomplete, inaccurate, and/or faulty information from communication sources.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2013 CFR
2013-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2012 CFR
2012-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2011 CFR
2011-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2014 CFR
2014-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, Gerhard; Bostelmann, F.
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.
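As a hedged illustration of the stochastic-sampling style of uncertainty analysis mentioned above (a generic Monte Carlo propagation sketch, not the CRP's actual methodology, models, or codes):

```python
import random
import statistics

# Toy response model standing in for a coupled neutronics/thermal-hydraulics
# calculation: the output depends on two uncertain inputs (hypothetical).
def response(cross_section, conductivity):
    return 1000.0 * cross_section / conductivity

samples = []
for _ in range(10_000):
    # Sample inputs from assumed normal distributions (illustrative values only).
    xs = random.gauss(1.0, 0.02)   # stand-in for a nuclear data uncertainty
    k = random.gauss(4.0, 0.10)    # stand-in for a material property uncertainty
    samples.append(response(xs, k))

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"output mean = {mean:.1f}, relative uncertainty = {stdev / mean:.2%}")
```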
Results from an Independent View on The Validation of Safety-Critical Space Systems
NASA Astrophysics Data System (ADS)
Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.
2013-08-01
The Independent verification and validation (IV&V) has been a key process for decades, and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is the independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked since customers do not really see the added value of thoroughly checking the validation team's work (which could be seen as testing the tester's work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties, results obtained and advantages/disadvantages for the industry of these activities. This study will support customers in opting in or out of this task in future IV&V contracts, since we provide concrete results from real case studies in the space embedded systems domain.
Formal Verification of a Conflict Resolution and Recovery Algorithm
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey; Butler, Ricky; Geser, Alfons; Munoz, Cesar
2004-01-01
New air traffic management concepts distribute the duty of traffic separation among system participants. As a consequence, these concepts have a greater dependency and rely heavily on on-board software and hardware systems. One example of a new on-board capability in a distributed air traffic management system is air traffic conflict detection and resolution (CD&R). Traditional methods for safety assessment such as human-in-the-loop simulations, testing, and flight experiments may not be sufficient for this highly distributed system as the set of possible scenarios is too large to have a reasonable coverage. This paper proposes a new method for the safety assessment of avionics systems that makes use of formal methods to drive the development of critical systems. As a case study of this approach, the mechanical verification of an algorithm for air traffic conflict resolution and recovery called RR3D is presented. The RR3D algorithm uses a geometric optimization technique to provide a choice of resolution and recovery maneuvers. If the aircraft adheres to these maneuvers, they will bring the aircraft out of conflict and the aircraft will follow a conflict-free path to its original destination. Verification of RR3D is carried out using the Prototype Verification System (PVS).
Making the Hubble Space Telescope servicing mission safe
NASA Technical Reports Server (NTRS)
Bahr, N. J.; Depalo, S. V.
1992-01-01
The implementation of the HST system safety program is detailed. Numerous safety analyses are conducted through various phases of design, test, and fabrication, and results are presented to NASA management for discussion during dedicated safety reviews. Attention is given to the system safety assessment and risk analysis methodologies used, i.e., hazard analysis, fault tree analysis, and failure modes and effects analysis, and to how they are coupled with engineering and test analysis for a 'synergistic picture' of the system. Some preliminary safety analysis results, showing the relationship between hazard identification, control or abatement, and finally control verification, are presented as examples of this safety process.
Hydrologic data-verification management program plan
Alexander, C.W.
1982-01-01
Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept describing a master data-verification program using multiple special-purpose subroutines, and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
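A minimal sketch of the computerized screening idea described above, assuming a "screen file" of site-specific minimum/maximum criteria; the parameter names and limits are hypothetical placeholders, not WATSTORE's actual format.

```python
# Screen file: acceptance criteria per parameter (hypothetical values).
SCREEN = {
    "stage_ft":      {"min": 0.0,  "max": 30.0},
    "discharge_cfs": {"min": 0.0,  "max": 50000.0},
    "temp_c":        {"min": -1.0, "max": 35.0},
}

def verify_record(record):
    """Return a list of (parameter, value, reason) flags for one daily record."""
    flags = []
    for name, value in record.items():
        criteria = SCREEN.get(name)
        if criteria is None:
            flags.append((name, value, "no verification criteria on file"))
        elif not (criteria["min"] <= value <= criteria["max"]):
            flags.append((name, value, "outside screen-file limits"))
    return flags

print(verify_record({"stage_ft": 31.2, "discharge_cfs": 1200.0, "temp_c": 18.5}))
```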
Verification and Validation Challenges for Adaptive Flight Control of Complex Autonomous Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2018-01-01
Autonomy of aerospace systems requires the ability for flight control systems to be able to adapt to complex uncertain dynamic environment. In spite of the five decades of research in adaptive control, the fact still remains that currently no adaptive control system has ever been deployed on any safety-critical or human-rated production systems such as passenger transport aircraft. The problem lies in the difficulty with the certification of adaptive control systems since existing certification methods cannot readily be used for nonlinear adaptive control systems. Research to address the notion of metrics for adaptive control began to appear in the recent years. These metrics, if accepted, could pave a path towards certification that would potentially lead to the adoption of adaptive control as a future control technology for safety-critical and human-rated production systems. Development of certifiable adaptive control systems represents a major challenge to overcome. Adaptive control systems with learning algorithms will never become part of the future unless it can be proven that they are highly safe and reliable. Rigorous methods for adaptive control software verification and validation must therefore be developed to ensure that adaptive control system software failures will not occur, to verify that the adaptive control system functions as required, to eliminate unintended functionality, and to demonstrate that certification requirements imposed by regulatory bodies such as the Federal Aviation Administration (FAA) can be satisfied. This presentation will discuss some of the technical issues with adaptive flight control and related V&V challenges.
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems and computer based tools that assist users in verifying their systems were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darcy, Eric; Keyser, Matthew
The Internal Short Circuit (ISC) device enables critical battery safety verification. With the aluminum interstitial heat sink between the cells, normal trigger cells cannot be driven into thermal runaway without excessive temperature bias of adjacent cells. With an implantable, on-demand ISC device, thermal runaway tests show that the conductive heat sinks protected adjacent cells from propagation. High heat dissipation and structural support of Al heat sinks show high promise for safer, higher performing batteries.
2015-10-01
Hawaii; HASP, Health and Safety Plan; IDA, Institute for Defense Analyses; IVS, Instrument Verification Strip; m, meter; mm, millimeter; MPV, Man Portable... the ArcSecond laser ranger was impractical due to the requirement to maintain line-of-sight for three rovers and tedious calibration. The SERDP... within 0.1 m spacing and 99% within 0.15 m; repeatability of Instrument Verification Strip (IVS) survey; amplitude of EM anomaly; amplitude of
Online Age Verification and Child Safety Act
Rep. Stupak, Bart [D-MI-1
2009-11-06
House - 11/07/2009 Referred to the Subcommittee on Communications, Technology, and the Internet.
Seismic Safety Of Simple Masonry Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guadagnuolo, Mariateresa; Faella, Giuseppe
2008-07-08
Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings explicit safety verifications are not compulsory if specific code rules are fulfilled. In fact it is assumed that their fulfilment ensures a suitable seismic behaviour of buildings and thus adequate safety under earthquakes. Italian and European seismic codes differ in the requirements for simple masonry buildings, mostly concerning the building typology, the building geometry and the acceleration at site. Obviously, a wide percentage of buildings assumed simple by codes should satisfy the numerical safety verification, so that no confusion or uncertainty arises for designers who must use the codes. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings, having different geometry, are analysed and results from nonlinear static analyses performed by varying the acceleration at site are presented and discussed. Indications on the congruence between code rules and the results of numerical analyses performed according to the code itself are supplied and, in this context, the results obtained can contribute to improving the seismic code requirements.
Systematic Model-in-the-Loop Test of Embedded Control Systems
NASA Astrophysics Data System (ADS)
Krupp, Alexander; Müller, Wolfgang
Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides a lack of methodical support for natural language formalization, there does not exist a standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.
Design and Verification of Critical Pressurised Windows for Manned Spaceflight
NASA Astrophysics Data System (ADS)
Lamoure, Richard; Busto, Lara; Novo, Francisco; Sinnema, Gerben; Leal, Mendes M.
2014-06-01
The Window Design for Manned Spaceflight (WDMS) project was tasked with establishing the state of the art and exploring possible improvements to the current structural integrity verification and fracture control methodologies for manned spacecraft windows. A critical review of the state of the art in spacecraft window design, materials and verification practice was conducted. Shortcomings of the methodology in terms of analysis, inspection and testing were identified. Schemes for improving verification practices and reducing conservatism whilst maintaining the required safety levels were then proposed. An experimental materials characterisation programme was defined and carried out with the support of the 'Glass and Façade Technology Research Group' at the University of Cambridge. Results of the sample testing campaign were analysed, post-processed and subsequently applied to the design of a breadboard window demonstrator. Two Fused Silica glass window panes were procured and subjected to dedicated analyses, inspection and testing comprising both qualification and acceptance programmes specifically tailored to the objectives of the activity. Finally, the main outcomes have been compiled into a Structural Verification Guide for Pressurised Windows in manned spacecraft, incorporating best practices and lessons learned throughout this project.
Modelling and analysis of the sugar cataract development process using stochastic hybrid systems.
Riley, D; Koutsoukos, X; Riley, K
2009-05-01
Modelling and analysis of biochemical systems such as sugar cataract development (SCD) are critical because they can provide new insights into systems, which cannot be easily tested with experiments; however, they are challenging problems due to the highly coupled chemical reactions that are involved. The authors present a stochastic hybrid system (SHS) framework for modelling biochemical systems and demonstrate the approach for the SCD process. A novel feature of the framework is that it allows modelling the effect of drug treatment on the system dynamics. The authors validate the three sugar cataract models by comparing trajectories computed by two simulation algorithms. Further, the authors present a probabilistic verification method for computing the probability of sugar cataract formation for different chemical concentrations using safety and reachability analysis methods for SHSs. The verification method employs dynamic programming based on a discretisation of the state space and therefore suffers from the curse of dimensionality. To analyse the SCD process, a parallel dynamic programming implementation that can handle large, realistic systems was developed. Although scalability is a limiting factor, this work demonstrates that the proposed method is feasible for realistic biochemical systems.
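As a hedged sketch of the dynamic-programming reachability computation described above (a generic value iteration over a discretized Markov model, not the authors' SCD model, discretisation, or parallel implementation):

```python
import numpy as np

# Discretized state space: probability of eventually reaching the 'unsafe'
# (cataract-formation) set from each state, computed by value iteration.
# The transition matrix P and the unsafe set are illustrative placeholders.
P = np.array([
    [0.80, 0.15, 0.05, 0.00, 0.00],
    [0.10, 0.70, 0.15, 0.05, 0.00],
    [0.00, 0.10, 0.70, 0.15, 0.05],
    [0.00, 0.00, 0.10, 0.70, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # state 4: absorbing unsafe state
])
unsafe = np.array([0, 0, 0, 0, 1], dtype=float)

prob = unsafe.copy()
for _ in range(500):                   # iterate V <- max(1_unsafe, E[V at next step])
    prob = np.maximum(unsafe, P @ prob)

print(np.round(prob, 3))               # reachability probability from each initial state
```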
NASA Astrophysics Data System (ADS)
Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko
Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo vision based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and are pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real-time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereo vision device and installed a remote-controlled experimental system, running the human detection algorithm, at a commercial railroad crossing. We have stored and analyzed image and tracking data over two years to support standardization of the system requirement specification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mijnheer, B; Mans, A; Olaciregui-Ruiz, I
Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
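The halt criterion described above, an RMS difference between the cumulative reconstructed and planned 3D dose, can be expressed compactly. The sketch below is a generic Python illustration with a hypothetical threshold and toy dose grids, not the authors' clinical software or trigger level.

```python
import numpy as np

def rms_dose_difference(reconstructed, planned):
    """RMS of the voxel-wise difference between two 3D dose grids (Gy)."""
    diff = reconstructed - planned
    return float(np.sqrt(np.mean(diff ** 2)))

HALT_THRESHOLD_GY = 0.20   # hypothetical trigger level, not a clinical value

planned = np.full((50, 50, 30), 2.0)                              # toy cumulative planned dose
reconstructed = planned + np.random.normal(0.0, 0.05, planned.shape)

rms = rms_dose_difference(reconstructed, planned)
if rms > HALT_THRESHOLD_GY:
    print(f"RMS {rms:.3f} Gy exceeds threshold -- interrupt delivery")
else:
    print(f"RMS {rms:.3f} Gy within tolerance -- continue delivery")
```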
European Train Control System: A Case Study in Formal Verification
NASA Astrophysics Data System (ADS)
Platzer, André; Quesel, Jan-David
Complex physical systems have several degrees of freedom. They only work correctly when their control parameters obey corresponding constraints. Based on the informal specification of the European Train Control System (ETCS), we design a controller for its cooperation protocol. For its free parameters, we successively identify constraints that are required to ensure collision freedom. We formally prove the parameter constraints to be sharp by characterizing them equivalently in terms of reachability properties of the hybrid system dynamics. Using our deductive verification tool KeYmaera, we formally verify controllability, safety, liveness, and reactivity properties of the ETCS protocol that entail collision freedom. We prove that the ETCS protocol remains correct even in the presence of perturbation by disturbances in the dynamics. We verify that safety is preserved when a PI controlled speed supervision is used.
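As a heavily simplified, hedged sketch of the kind of parameter constraint such a case study derives (ignoring the reaction delays, acceleration phases, and disturbance bounds treated in the full proof), a train moving at speed v with maximum braking deceleration b can only remain safe if its distance d to the end of its movement authority dominates the braking distance:

\[
d \;\ge\; \frac{v^{2}}{2b}.
\]

The verified controller must re-establish a constraint of this shape before every control cycle; the complete ETCS verification additionally accounts for controller reaction time and perturbed dynamics.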
Andreski, Michael; Myers, Megan; Gainer, Kate; Pudlo, Anthony
Determine the effects of an 18-month pilot project using tech-check-tech in 7 community pharmacies on 1) rate of dispensing errors not identified during refill prescription final product verification; 2) pharmacist workday task composition; and 3) amount of patient care services provided and the reimbursement status of those services. Pretest-posttest quasi-experimental study where baseline and study periods were compared. Pharmacists and pharmacy technicians in 7 community pharmacies in Iowa. The outcome measures were 1) percentage of technician verified refill prescriptions where dispensing errors were not identified on final product verification; 2) percentage of time spent by pharmacists in dispensing, management, patient care, practice development, and other activities; 3) the number of pharmacist patient care services provided per pharmacist hours worked; and 4) percentage of time that technician product verification was used. There was no significant difference in overall errors (0.2729% vs. 0.5124%, P = 0.513), patient safety errors (0.0525% vs. 0.0651%, P = 0.837), or administrative errors (0.2204% vs. 0.4784%, P = 0.411). Pharmacist's time in dispensing significantly decreased (67.3% vs. 49.06%, P = 0.005), and time in direct patient care (19.96% vs. 34.72%, P = 0.003), increased significantly. Time in other activities did not significantly change. Reimbursable services per pharmacist hour (0.11 vs. 0.30, P = 0.129), did not significantly change. Non-reimbursable services increased significantly (2.77 vs. 4.80, P = 0.042). Total services significantly increased (2.88 vs. 5.16, P = 0.044). Pharmacy technician product verification of refill prescriptions preserved dispensing safety while significantly increasing the time spent in delivery of pharmacist provided patient care services. The total number of pharmacist services provided per hour also increased significantly, driven primarily by a significant increase in the number of non-reimbursed services. This was most likely due to the increased time available to provide patient care. Reimbursed services per hour did not increase significantly, most likely due to a lack of payers. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. J. Appel
This cleanup verification package documents completion of remedial action for the 118-F-3, Minor Construction Burial Ground waste site. This site was an open field covered with cobbles, with no vegetation growing on the surface. The site received irradiated reactor parts, mostly vertical safety rod thimbles and step plugs, that were removed during conversion of the 105-F Reactor safety systems from the Liquid 3X to the Ball 3X Project.
Position verification systems for an automated highway system.
DOT National Transportation Integrated Search
2015-03-01
Automated vehicles promote road safety, fuel efficiency, and reduced travel time by decreasing traffic congestion and driver workload. In a vehicle platoon (grouping vehicles to increase road capacity by managing distance between vehicles using e...
Seismic verification of nuclear plant equipment anchorage, Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, R M
1991-06-01
Guidelines have been developed to evaluate the seismic adequacy of the anchorage of various classes of electrical and mechanical equipment in nuclear power plants covered by NRC Unresolved Safety Issue A-46. The guidelines consist of anchorage strength capacities as a function of key equipment and installation parameters. The strength criteria for expansion anchor bolts were developed by collecting and analyzing a large quantity of test data. The strength criteria for cast-in-place bolts and welds to embedded steel plates and channels were taken from existing nuclear-industry design guidelines. For anchorage used in low-strength concrete and in concrete with cracks, appropriate strength reduction factors were developed. Reduction factors for parameters such as edge distance, spacing, and embedment depth are also included. Based on the anchorage capacity and equipment configuration, inspection checklists for field verification of anchorage adequacy were developed, and provisions for outliers that can be used to further investigate anchorages that cannot be verified in the field were prepared. The screening tables are based on an analysis of the anchorage forces developed by common equipment types and on strength criteria to quantify the holding power of anchor bolts and welds. A computer code, EBAC, was developed for the evaluation of the adequacy of the equipment anchorage. Guidelines to evaluate anchorage adequacy for vertical and horizontal tanks and horizontal heat exchangers were also developed.
Formalization of the Integral Calculus in the PVS Theorem Prover
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
2004-01-01
The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. The PVS prover, though fully equipped to support deduction in a very general logic framework, namely higher-order logic, must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion about why mechanically checked proofs are so much longer than standard mathematics textbook proofs.
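For reference, the textbook statement that the formalization culminates in (the Fundamental Theorem of Calculus, in the usual epsilon-delta tradition of Rosenlicht's treatment) is:

\[
\text{If } f \text{ is continuous on } [a,b] \text{ and } F(x) = \int_{a}^{x} f(t)\,dt,\ \text{then } F'(x) = f(x) \text{ for all } x \in (a,b),
\]
\[
\text{and consequently } \int_{a}^{b} f(t)\,dt = G(b) - G(a) \text{ for any antiderivative } G \text{ of } f.
\]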
Technical review of SRT-CMA-930058 revalidation studies of Mark 16 experiments: J70
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, R.L.
1993-10-25
This study is a reperformance of a set of MGBS-TGAL criticality safety code validation calculations previously reported by Clark. The reperformance was needed because the records of the previous calculations could not be located in current APG files and records. As noted by the author, preliminary attempts to reproduce the Clark results by direct modeling in MGBS and TGAL were unsuccessful. Consultation with Clark indicated that the MGBS-TGAL (EXPT) option within the KOKO system should be used to set up the MGBS and TGAL input data records. The results of the study indicate that the technique used by Clark has been established and that the technique is now documented for future use. File records of the calculations have also been established in APG files. The review was performed per QAP 11--14 of 1Q34. Since the reviewer was involved in developing the procedural technique used for this study, this review cannot be considered a fully independent review, but should be considered a verification that the document contains adequate information to allow a new user to perform similar calculations, a verification of the procedure by performing several calculations independently with identical results to the reported results, and a verification of the readability of the report.
Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)
2002-01-01
To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to the understanding and prediction of the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure of the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant load rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
To meet the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is reached: the cloud computing platform can be applied to compute-intensive aerospace experiment workloads, while for I/O-intensive workloads the traditional physical machine is recommended.
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
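As a hedged illustration of acceptance sampling by variables (the general k-method, not the NESC-recommended procedure itself), a lot is accepted when the sample mean sits at least k sample standard deviations inside the specification limit; the measurements, limit, and acceptability constant below are hypothetical.

```python
import statistics

def accept_by_variables(measurements, upper_spec_limit, k):
    """k-method acceptance decision against a single upper specification limit.

    Accept the lot if (USL - mean) / s >= k, where k is the acceptability
    constant taken from the chosen sampling plan (hypothetical here).
    """
    mean = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (upper_spec_limit - mean) / s >= k

# Hypothetical sample of a performance parameter with USL = 10.0 and k = 1.72
sample = [8.1, 8.4, 7.9, 8.6, 8.2, 8.0, 8.5, 8.3]
print(accept_by_variables(sample, upper_spec_limit=10.0, k=1.72))
```

In practice k and the sample size would come from a variables sampling standard matched to the required acceptance quality level.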
Demonstration of Spacecraft Fire Safety Technology
NASA Technical Reports Server (NTRS)
Ruff, Gary A.; Urban, David L.
2012-01-01
During the Constellation Program, the development of spacecraft fire safety technologies was focused on the immediate questions related to the atmosphere of the habitable volume and the implementation of fire detection, suppression, and post-fire cleanup systems into the vehicle architectures. One of the difficulties encountered during the trade studies for these systems was the frequent lack of data regarding the performance of a technology, such as a water-mist fire suppression system or an optically based combustion product monitor. Even though a spacecraft fire safety technology development project was being funded, there was insufficient time and funding to address all the issues as they were identified. At the conclusion of the Constellation Program, these knowledge gaps formed the basis for a project proposed to the Advanced Exploration Systems (AES) Program. This project, subsequently funded by the AES Program and in operation since October 2011, has as its cornerstone the development of an experiment to be conducted on an ISS resupply vehicle, such as the European Space Agency (ESA) Automated Transfer Vehicle (ATV) or Orbital Sciences' Cygnus vehicle, after it leaves the ISS and before it enters the atmosphere. The technology development efforts being conducted in this project include continued quantification of low- and partial-gravity maximum oxygen concentrations of spacecraft-relevant materials, development and verification of sensors for fire detection and post-fire monitoring, development of standards for sizing and selecting spacecraft fire suppression systems, and demonstration of post-fire cleanup strategies. The major technology development efforts are identified in this paper, but its primary purpose is to describe the spacecraft fire safety demonstration being planned for the reentry vehicle.
Formal verification of an avionics microprocessor
NASA Technical Reports Server (NTRS)
Srivas, Mandayam K.; Miller, Steven P.
1995-01-01
Formal specification combined with mechanical verification is a promising approach for achieving the extremely high levels of assurance required of safety-critical digital systems. However, many questions remain regarding their use in practice: Can these techniques scale up to industrial systems, where are they likely to be useful, and how should industry go about incorporating them into practice? This report discusses a project undertaken to answer some of these questions, the formal verification of the AAMP5 microprocessor. This project consisted of formally specifying in the PVS language a Rockwell proprietary microprocessor at both the instruction-set and register-transfer levels and using the PVS theorem prover to show that the microcode correctly implemented the instruction-level specification for a representative subset of instructions. Notable aspects of this project include the use of a formal specification language by practicing hardware and software engineers, the integration of traditional inspections with formal specifications, and the use of a mechanical theorem prover to verify a portion of a commercial, pipelined microprocessor that was not explicitly designed for formal verification.
Study of techniques for redundancy verification without disrupting systems, phases 1-3
NASA Technical Reports Server (NTRS)
1970-01-01
The problem of verifying the operational integrity of redundant equipment and the impact of a requirement for verification on such equipment are considered. Redundant circuits are examined and the characteristics which determine adaptability to verification are identified. Mutually exclusive and exhaustive categories for verification approaches are established. The range of applicability of these techniques is defined in terms of signal characteristics and redundancy features. Verification approaches are discussed and a methodology for the design of redundancy verification is developed. A case study is presented which involves the design of a verification system for a hypothetical communications system. Design criteria for redundant equipment are presented. Recommendations for the development of technological areas pertinent to the goal of increased verification capabilities are given.
Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks
NASA Technical Reports Server (NTRS)
Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris
2015-01-01
Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.
78 FR 45729 - Foreign Supplier Verification Programs for Importers of Food for Humans and Animals
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-29
...The Food and Drug Administration (FDA) is proposing to adopt regulations on foreign supplier verification programs (FSVPs) for importers of food for humans and animals. The proposed regulations would require importers to help ensure that food imported into the United States is produced in compliance with processes and procedures, including reasonably appropriate risk-based preventive controls, that provide the same level of public health protection as those required under the hazard analysis and risk-based preventive controls and standards for produce safety sections of the Federal Food, Drug, and Cosmetic Act (the FD&C Act), is not adulterated, and is not misbranded with respect to food allergen labeling. We are proposing these regulations in accordance with the FDA Food Safety Modernization Act (FSMA). The proposed regulations would help ensure that imported food is produced in a manner consistent with U.S. standards.
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone due to the intensive and customized process necessary to verify each individual component model and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
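One simple automated check of the kind such tools can perform is confirming that every modeled failure mode has a propagation path to at least one observable effect. The sketch below is a generic illustration using a plain adjacency-list graph; the node names and the specific check are hypothetical, not NASA's tooling.

```python
from collections import deque

def reaches_any(graph, start, targets):
    """Breadth-first search: does any node in `targets` lie downstream of `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in targets:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical failure-effect propagation graph and observable effects.
ffm = {
    "valve_stuck": ["low_flow"],
    "low_flow": ["pressure_drop_alarm"],
    "sensor_bias": [],  # no path to any observable effect -> flagged
}
observables = {"pressure_drop_alarm"}
unobservable = [mode for mode in ffm if not reaches_any(ffm, mode, observables)]
print("failure modes with no observable effect:", unobservable)
```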
An RFID solution for enhancing inpatient medication safety with real-time verifiable grouping-proof.
Chen, Yu-Yi; Tsai, Meng-Lin
2014-01-01
The occurrence of a medication error can threaten patient safety. The medication administration process is complex and cumbersome, and nursing staff are prone to error when they are tired. Proper information technology (IT) can assist the nurse in correct medication administration. We review a recent proposal regarding a leading-edge solution to enhance inpatient medication safety by using RFID technology. The proof mechanism is the kernel concept in their design and is worth studying to develop a well-designed grouping-proof scheme. Other RFID grouping-proof protocols could be similarly applied in administering physician orders. We improve on the weaknesses of previous works and develop a reading-order-independent RFID grouping-proof scheme in this paper. In our scheme, tags are queried and verified under the direct control of the authorized reader without connecting to the back-end database server. Immediate verification in our design makes this application more portable and efficient, and critical security issues have been analyzed using a threat model. Our scheme is suitable for the safe drug administration scenario and the drug package scenario in a hospital environment to enhance inpatient medication safety. It automatically checks for correct drug unit-dose and appropriate inpatient treatments. Copyright © 2013. Published by Elsevier Ireland Ltd.
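A strongly hedged sketch of a reading-order-independent grouping proof (a generic illustration, not the authors' protocol): each tag answers a reader challenge with a keyed MAC, and the reader combines the responses with XOR so the verification result does not depend on the order in which tags were read. Keys, tag IDs, and the challenge are placeholders.

```python
import hmac, hashlib, secrets
from functools import reduce

def tag_response(tag_key: bytes, tag_id: bytes, challenge: bytes) -> bytes:
    """Each tag MACs the reader's challenge together with its own ID."""
    return hmac.new(tag_key, tag_id + challenge, hashlib.sha256).digest()

def grouping_proof(responses):
    """XOR-combine responses so the proof is independent of reading order."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), responses)

# Hypothetical tags in one medication group (unit-dose package + patient wristband).
tags = {b"tag-A": secrets.token_bytes(16), b"tag-B": secrets.token_bytes(16)}
challenge = secrets.token_bytes(8)

proof = grouping_proof([tag_response(key, tid, challenge) for tid, key in tags.items()])
# The authorized reader, holding the same keys, recomputes the expected proof
# locally (no back-end query) and accepts only on an exact match.
expected = grouping_proof([tag_response(key, tid, challenge) for tid, key in tags.items()])
print(hmac.compare_digest(proof, expected))
```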
Loads and low frequency dynamics data base: Version 1.1 November 8, 1985. [Space Shuttles
NASA Technical Reports Server (NTRS)
Garba, J. A. (Editor)
1985-01-01
Structural design data for the Shuttle are presented in the form of a data base. The data can be used by designers of Shuttle experiments to assure compliance with Shuttle safety and structural verification requirements. A glossary of Shuttle design terminology is given, and the principal safety requirements of Shuttle are summarized. The Shuttle design data are given in the form of load factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
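For context, the sensitivity coefficient being benchmarked is conventionally defined as the relative change in keff per relative change in a cross section σ (the notation here is the common convention, assumed rather than quoted from the benchmark specification):

\[
S_{k,\sigma} \;=\; \frac{\delta k_{\mathrm{eff}}/k_{\mathrm{eff}}}{\delta\sigma/\sigma} \;=\; \frac{\sigma}{k_{\mathrm{eff}}}\,\frac{\partial k_{\mathrm{eff}}}{\partial\sigma}.
\]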
Formal Verification of the Runway Safety Monitor
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu; Ciardo, Gianfranco
2006-01-01
The Runway Safety Monitor (RSM) designed by Lockheed Martin is part of NASA's effort to reduce runway accidents. We developed a Petri net model of the RSM protocol and used the model checking functions of our tool SMART to investigate a number of safety properties in RSM. To mitigate the impact of state-space explosion, we built a highly discretized model of the system, obtained by partitioning the monitored runway zone into a grid of smaller volumes and by considering scenarios involving only two aircraft. The model also assumes that there are no communication failures, such as bad input from radar or lack of incoming data, thus it relies on a consistent view of reality by all participants. In spite of these simplifications, we were able to expose potential problems in the RSM conceptual design. Our findings were forwarded to the design engineers, who undertook corrective action. Additionally, the results stress the efficiency attained by the new model checking algorithms implemented in SMART, and demonstrate their applicability to real-world systems.
NASA GSFC Mechanical Engineering Latest Inputs for Verification Standards (GEVS) Updates
NASA Technical Reports Server (NTRS)
Kaufman, Daniel
2003-01-01
This viewgraph presentation provides information on quality control standards in mechanical engineering. The presentation addresses safety, structural loads, nonmetallic composite structural elements, bonded structural joints, externally induced shock, random vibration, acoustic tests, and mechanical function.
Control of embankment settlement field verification on PCPT prediction methods : tech summary.
DOT National Transportation Integrated Search
2011-07-01
Depending on loading and embankment height, the magnitude and progression of settlement can significantly impact the safety and serviceability of the infrastructures that are constructed on saturated fine-grained soils. Therefore, the constructio...
Advances in Monte-Carlo code TRIPOLI-4®'s treatment of the electromagnetic cascade
NASA Astrophysics Data System (ADS)
Mancusi, Davide; Bonin, Alice; Hugot, François-Xavier; Malouch, Fadhel
2018-01-01
TRIPOLI-4® is a Monte-Carlo particle-transport code developed at CEA-Saclay (France) that is employed in the domains of nuclear-reactor physics, criticality-safety, shielding/radiation protection and nuclear instrumentation. The goal of this paper is to report on current developments, validation and verification made in TRIPOLI-4 in the electron/positron/photon sector. The new capabilities and improvements concern refinements to the electron transport algorithm, the introduction of a charge-deposition score, the new thick-target bremsstrahlung option, the upgrade of the bremsstrahlung model and the improvement of electron angular straggling at low energy. The importance of each of the developments above is illustrated by comparisons with calculations performed with other codes and with experimental data.
Evaluation of Mesoscale Model Phenomenological Verification Techniques
NASA Technical Reports Server (NTRS)
Lambert, Winifred
2006-01-01
Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations in small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one to verify sea breeze forecasts, and three were capable of verifying several phenomena. The AMU also determined the feasibility of transitioning each technique into operations and rated the operational capability of each technique on a subjective 1-10 scale: (1) 1 indicates that the technique is only in the initial stages of development, (2) 2-5 indicates that the technique is still undergoing modifications and is not ready for operations, (3) 6-8 indicates a higher probability of integrating the technique into AWIPS with code modifications, and (4) 9-10 indicates that the technique was created for AWIPS and is ready for implementation. Eight of the techniques were assigned a rating of 5 or below. The other two received ratings of 6 and 7, and none of the techniques received a rating of 9-10. At the current time, there are no phenomenological model verification techniques ready for operational use. However, several of the techniques described in this report may become viable techniques in the future and should be monitored for updates in the literature. The desire to use a phenomenological verification technique is widespread in the modeling community, and it is likely that other techniques besides those described herein are being developed, but the work has not yet been published. Therefore, the AMU recommends that the literature continue to be monitored for updates to the techniques described in this report and for new techniques being developed whose results have not yet been published.
The politics of verification and the control of nuclear tests, 1945-1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, N.W.
1990-01-01
This dissertation addresses two questions: (1) why has agreement been reached on verification regimes to support some arms control accords but not others; and (2) what determines the extent to which verification arrangements promote stable cooperation. This study develops an alternative framework for analysis by examining the politics of verification at two levels. The logical politics of verification are shaped by the structure of the problem of evaluating cooperation under semi-anarchical conditions. The practical politics of verification are driven by players' attempts to use verification arguments to promote their desired security outcome. The historical material shows that agreements on verification regimes are reached when key domestic and international players desire an arms control accord and believe that workable verification will not have intolerable costs. Clearer understanding of how verification is itself a political problem, and how players manipulate it to promote other goals is necessary if the politics of verification are to support rather than undermine the development of stable cooperation.
Requirements, Verification, and Compliance (RVC) Database Tool
NASA Technical Reports Server (NTRS)
Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale
2001-01-01
This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".
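A minimal sketch of the kind of relational structure such a tool manages, using an in-memory SQLite database; the table layout, column names, and query are hypothetical, not the ISWE project's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE requirement (req_id TEXT PRIMARY KEY, text TEXT);
CREATE TABLE verification (
    ver_id TEXT PRIMARY KEY,
    req_id TEXT REFERENCES requirement(req_id),
    method TEXT,            -- e.g. test, analysis, inspection, demonstration
    success_criteria TEXT,
    status TEXT             -- e.g. open, closed
);
""")
conn.executemany("INSERT INTO requirement VALUES (?, ?)", [
    ("R-001", "Welding chamber shall maintain vacuum below 1e-4 torr."),
    ("R-002", "Experiment shall survive launch random vibration environment."),
])
conn.executemany("INSERT INTO verification VALUES (?, ?, ?, ?, ?)", [
    ("V-001", "R-001", "test", "Chamber pressure < 1e-4 torr for 30 min", "closed"),
    ("V-002", "R-002", "analysis", "Positive margins per loads analysis", "open"),
])

# Compliance report: requirements whose verifications are not yet closed.
for row in conn.execute("""
    SELECT r.req_id, v.ver_id, v.status
    FROM requirement r JOIN verification v USING (req_id)
    WHERE v.status != 'closed'
"""):
    print(row)
```

A networked COTS database, as described above, layers access control and tailorable multi-user reports on top of the same basic relational idea.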
NASA-STD-7009 Guidance Document for Human Health and Performance Models and Simulations
NASA Technical Reports Server (NTRS)
Walton, Marlei; Mulugeta, Lealem; Nelson, Emily S.; Myers, Jerry G.
2014-01-01
Rigorous verification, validation, and credibility (VVC) processes are imperative to ensure that models and simulations (MS) are sufficiently reliable to address issues within their intended scope. The NASA standard for MS, NASA-STD-7009 (7009) [1], was an outcome of the Columbia Accident Investigation Board (CAIB) findings, intended to ensure that MS are developed, applied, and interpreted appropriately for making decisions that may impact crew or mission safety. Because the 7009 focus is engineering systems, a NASA-STD-7009 Guidance Document is being developed to augment the 7009 and provide information, tools, and techniques applicable to the probabilistic and deterministic biological MS more prevalent in human health and performance (HHP) and space biomedical research and operations.
NASA Technical Reports Server (NTRS)
Lee, Hyung B.; Ghia, Urmila; Bayyuk, Sami; Oberkampf, William L.; Roy, Christopher J.; Benek, John A.; Rumsey, Christopher L.; Powers, Joseph M.; Bush, Robert H.; Mani, Mortaza
2016-01-01
Computational fluid dynamics (CFD) and other advanced modeling and simulation (M&S) methods are increasingly relied on for predictive performance, reliability and safety of engineering systems. Analysts, designers, decision makers, and project managers, who must depend on simulation, need practical techniques and methods for assessing simulation credibility. The AIAA Guide for Verification and Validation of Computational Fluid Dynamics Simulations (AIAA G-077-1998 (2002)), originally published in 1998, was the first engineering standards document available to the engineering community for verification and validation (V&V) of simulations. Much progress has been made in these areas since 1998. The AIAA Committee on Standards for CFD is currently updating this Guide to incorporate in it the important developments that have taken place in V&V concepts, methods, and practices, particularly with regard to the broader context of predictive capability and uncertainty quantification (UQ) methods and approaches. This paper will provide an overview of the changes and extensions currently underway to update the AIAA Guide. Specifically, a framework for predictive capability will be described for incorporating a wide range of error and uncertainty sources identified during the modeling, verification, and validation processes, with the goal of estimating the total prediction uncertainty of the simulation. The Guide's goal is to provide a foundation for understanding and addressing major issues and concepts in predictive CFD. However, this Guide will not recommend specific approaches in these areas as the field is rapidly evolving. It is hoped that the guidelines provided in this paper, and explained in more detail in the Guide, will aid in the research, development, and use of CFD in engineering decision-making.
Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael
2014-05-01
Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.
NASA Technical Reports Server (NTRS)
Fitz, Rhonda; Whitman, Gerek
2016-01-01
Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IVV) Program, with Software Assurance Research Program support, extracted FM architectures across the IVV portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IVV projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management.
NASA's Commercial Crew Program, the Next Step in U.S. Space Transportation
NASA Technical Reports Server (NTRS)
Mango, Edward J., Jr.
2013-01-01
The Commercial Crew Program (CCP) is leading NASA's efforts to develop the next U.S. capability for crew transportation and rescue services to and from the International Space Station (ISS) by the mid-decade timeframe. The outcome of this capability is expected to stimulate and expand the U.S. space transportation industry. NASA is relying on its decades of human space flight experience to certify U.S. crewed vehicles to the ISS and is doing so in a two-phase certification approach. NASA certification will cover all aspects of a crew transportation system, including development, test, evaluation, and verification; program management and control; flight readiness certification; launch, landing, recovery, and mission operations; and sustaining engineering and maintenance/upgrades. To ensure NASA crew safety, NASA certification will validate technical and performance requirements, verify compliance with NASA requirements, validate that the crew transportation system operates in the appropriate environments, and quantify residual risks. The Commercial Crew Program will present progress to date and how it manages safety and reduces risk.
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Abstraction and Assume-Guarantee Reasoning for Automated Software Verification
NASA Technical Reports Server (NTRS)
Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.
2004-01-01
Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT out-performs several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
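The simplest (non-circular) assume-guarantee rule that such a framework can be instantiated with has the familiar form (standard notation, assumed here rather than quoted from the article): if component M1 satisfies property P under assumption A, and M2 unconditionally satisfies A, then their composition satisfies P:

\[
\frac{\;\langle A\rangle\, M_1\, \langle P\rangle \qquad \langle \mathit{true}\rangle\, M_2\, \langle A\rangle\;}{\langle \mathit{true}\rangle\, M_1 \parallel M_2\, \langle P\rangle}
\]

The automata learning algorithm mentioned above plays the role of discovering a suitable assumption A automatically instead of requiring the user to supply it.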
Software IV and V Research Priorities and Applied Program Accomplishments Within NASA
NASA Technical Reports Server (NTRS)
Blazy, Louis J.
2000-01-01
The mission of this research is to be world-class creators and facilitators of innovative, intelligent, high performance, reliable information technologies that enable NASA missions to (1) increase software safety and quality through error avoidance, early detection and resolution of errors, by utilizing and applying empirically based software engineering best practices; (2) ensure customer software risks are identified and/or that requirements are met and/or exceeded; (3) research, develop, apply, verify, and publish software technologies for competitive advantage and the advancement of science; and (4) facilitate the transfer of science and engineering data, methods, and practices to NASA, educational institutions, state agencies, and commercial organizations. The goals are to become a national Center Of Excellence (COE) in software and system independent verification and validation, and to become an international leading force in the field of software engineering for improving the safety, quality, reliability, and cost performance of software systems. This project addresses the following problems: Ensure safety of NASA missions, ensure requirements are met, minimize programmatic and technological risks of software development and operations, improve software quality, reduce costs and time to delivery, and improve the science of software engineering
SAFEGUARD: An Assured Safety Net Technology for UAS
NASA Technical Reports Server (NTRS)
Dill, Evan T.; Young, Steven D.; Hayhurst, Kelly J.
2016-01-01
As demands increase to use unmanned aircraft systems (UAS) for a broad spectrum of commercial applications, regulatory authorities are examining how to safely integrate them without loss of safety or major disruption to existing airspace operations. This work addresses the development of the Safeguard system as an assured safety net technology for UAS. The Safeguard system monitors and enforces conformance to a set of rules defined prior to flight (e.g., geospatial stay-out or stay-in regions, speed limits, altitude limits). Safeguard operates independently of the UAS autopilot and is strategically designed in a way that can be realized by a small set of verifiable functions to simplify compliance with regulatory standards for commercial aircraft. A framework is described that decouples the system from any other devices on the UAS as well as introduces complementary positioning source(s) for applications that require integrity and availability beyond what the Global Positioning System (GPS) can provide. Additionally, the high level logic embedded within the software is presented, as well as the steps being taken toward verification and validation (V&V) of proper functionality. Next, an initial prototype implementation of the described system is disclosed. Lastly, future work including development, testing, and system V&V is summarized.
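A minimal sketch of the kind of rule-conformance check Safeguard enforces (stay-in geofence plus altitude and speed limits); the ray-casting polygon test, limits, and coordinates below are illustrative assumptions, not the flight software.

```python
def inside_polygon(point, polygon):
    """Ray-casting point-in-polygon test for a stay-in region ((x, y) pairs)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def conforms(state, stay_in, max_alt_m, max_speed_mps):
    """True if the vehicle state conforms to all rules defined prior to flight."""
    return (inside_polygon((state["x"], state["y"]), stay_in)
            and state["alt"] <= max_alt_m
            and state["speed"] <= max_speed_mps)

# Hypothetical square stay-in region and vehicle state.
region = [(0, 0), (100, 0), (100, 100), (0, 100)]
state = {"x": 40.0, "y": 55.0, "alt": 90.0, "speed": 12.0}
print(conforms(state, region, max_alt_m=120.0, max_speed_mps=15.0))
```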
From a Viewpoint of Clinical Settings: Pharmacoepidemiology as Reverse Translational Research (rTR).
Kawakami, Junichi
2017-01-01
Clinical pharmacology and pharmacoepidemiology research may converge in practice. Pharmacoepidemiology is the study of pharmacotherapy and risk management in patient groups. For many drugs, adverse reaction(s) that were not seen and/or clarified during the research and development stages have been reported in the real world. Pharmacoepidemiology can detect and verify adverse drug reactions as reverse translational research. Recently, development and effective use of medical information databases (MID) have been pursued in Japan and elsewhere for the purpose of post-marketing drug safety. The Ministry of Health, Labour and Welfare, Japan has been promoting the development of a 10-million-scale database in 10 hospitals and hospital groups as "the infrastructure project of medical information database (MID-NET)". This project enables estimation of the frequency of adverse reactions, distinction between drug-induced reactions and changes in underlying health conditions, and verification of the usefulness of administrative drug-safety measures. However, because the database information differs from detailed medical records, construction of methodologies for the detection and evaluation of adverse reactions is required. We have been performing database research using the medical information systems of several hospitals to establish and demonstrate useful methods for post-marketing safety. In this symposium, we aim to discuss the possibility of reverse translational research from clinical settings and provide an introduction to our research.
Sigma Metrics Across the Total Testing Process.
Charuruks, Navapun
2017-03-01
Laboratory quality control has been developed over several decades to ensure patients' safety, expanding from a statistical quality control focus on the analytical phase to total laboratory processes. The sigma concept provides a convenient way to quantify the number of errors in the extra-analytical and analytical phases through the defects-per-million and sigma metric equations. Participation in a sigma verification program can be a convenient way to monitor analytical performance for continuous quality improvement. Improvement of sigma-scale performance has been shown from our data. New tools and techniques for integration are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
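The sigma metric equation referred to above is conventionally written as (standard laboratory QC convention; the symbols are the usual ones rather than quotes from the article):

\[
\text{Sigma} \;=\; \frac{\%\mathrm{TEa} - |\%\mathrm{bias}|}{\%\mathrm{CV}},
\]

where TEa is the allowable total error, bias the systematic error, and CV the analytical imprecision; extra-analytical steps are scored by converting their defect rate to defects per million and reading the corresponding sigma value from a standard normal table.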
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-1 General. (a) All automatically or... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features...
NASA Technical Reports Server (NTRS)
1989-01-01
Design and verification requirements appropriate to hardware at the detail, subassembly, component, and engine levels are defined and correlated to the development demonstrations that provide verification that design objectives are achieved. The high pressure fuel turbopump requirements verification matrix provides correlation between design requirements and the tests required to verify that the requirements have been met.
Quantitative safety assessment of air traffic control systems through system control capacity
NASA Astrophysics Data System (ADS)
Guo, Jingjing
Quantitative Safety Assessments (QSA) are essential to safety benefit verification and regulation of developmental changes in safety-critical systems like the Air Traffic Control (ATC) systems. Effectiveness of the assessments is particularly desirable today in the safe implementations of revolutionary ATC overhauls like NextGen and SESAR. QSA of ATC systems are however challenged by system complexity and lack of accident data. Extending from the idea "safety is a control problem" in the literature, this research proposes to assess system safety from the control perspective, through quantifying a system's "control capacity". A system's safety performance correlates to this "control capacity" in the control of "safety critical processes". To examine this idea in QSA of the ATC systems, a Control-capacity Based Safety Assessment Framework (CBSAF) is developed which includes two control capacity metrics and a procedural method. The two metrics are Probabilistic System Control-capacity (PSC) and Temporal System Control-capacity (TSC); each addresses an aspect of a system's control capacity. The procedural method consists of three general stages: I) identification of safety critical processes, II) development of system control models and III) evaluation of system control capacity. The CBSAF was tested in two case studies. The first one assesses an en-route collision avoidance scenario and compares three hypothetical configurations. The CBSAF was able to capture the uncoordinated behavior between two means of control, as was observed in a historic midair collision accident. The second case study compares CBSAF with an existing risk-based QSA method in assessing the safety benefits of introducing a runway incursion alert system. Similar conclusions are reached between the two methods, while the CBSAF has the advantage of simplicity and provides a new control-based perspective and interpretation to the assessments. The case studies are intended to investigate the potential and demonstrate the utilities of CBSAF and are not intended for thorough studies of collision avoidance and runway incursion safety, which are extremely challenging problems. Further development and thorough validations are required to allow CBSAF to reach implementation phases, e.g. addressing the issues of limited scalability and subjectivity.
RELAP-7 Code Assessment Plan and Requirement Traceability Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.
2016-10-01
The RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods, and physical models over the last decades. Recently, INL has also been putting effort into establishing a code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international/domestic reports, research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.
Video Vehicle Detector Verification System (V2DVS) operators manual and project final report.
DOT National Transportation Integrated Search
2012-03-01
The accurate detection of the presence, speed, and/or length of vehicles on roadways is recognized as critical for effective roadway congestion management and safety. Vehicle presence sensors are commonly used for traffic volume measurement and co...
The use of robots for arms control treaty verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalowski, S.J.
1991-01-01
Many aspects of the superpower relationship now present a new set of challenges and opportunities, including the vital area of arms control. This report addresses one such possibility: the use of robots for the verification of arms control treaties. The central idea of this report is far from commonly accepted. In fact, it was encountered only once in the bibliographic review phase of the project. Nonetheless, the incentive for using robots is simple and coincides with that of industrial applications: to replace or supplement human activity in the performance of tasks for which human participation is unnecessary, undesirable, impossible, too dangerous or too expensive. As in industry, robots should replace workers (in this case, arms control inspectors) only when questions of efficiency, reliability, safety, security and cost-effectiveness have been answered satisfactorily. In writing this report, it is not our purpose to strongly advocate the application of robots in verification. Rather, we wish to explore the significant aspects, pro and con, of applying experience from the field of flexible automation to the complex task of assuring arms control treaty compliance. We want to establish a framework for further discussion of this topic and to define criteria for evaluating future proposals. The author's expertise is in robots, not arms control. His practical experience has been in developing systems for use in the rehabilitation of severely disabled persons (such as quadriplegics), who can use robots for assistance during activities of everyday living, as well as in vocational applications. This creates a special interest in implementations that, in some way, include a human operator in the control scheme of the robot. As we hope to show in this report, such interactive systems offer the greatest promise of making a contribution to the challenging problems of treaty verification. 15 refs.
Towards the formal verification of the requirements and design of a processor interface unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
The formal verification of the design and partial requirements for a Processor Interface Unit (PIU) using the Higher Order Logic (HOL) theorem-proving system is described. The processor interface unit is a single-chip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. It provides the opportunity to investigate the specification and verification of a real-world subsystem within a commercially developed fault-tolerant computer. An overview of the PIU verification effort is given. The actual HOL listings from the verification effort are documented in a companion NASA contractor report, entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings', which includes the general-purpose HOL theories and definitions that support the PIU verification as well as the tactics used in the proofs.
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, but directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions, in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
NASA Technical Reports Server (NTRS)
Morris, Michelle L.
1996-01-01
NASA Langley Research Center (LaRC) investigated several alternatives to the use of trichlorotrifluoroethane (CFC-113) in oxygen cleaning and verification. Alternatives investigated include several replacement solvents, Non-Destructive Evaluation (NDE), and Total Organic Carbon (TOC) analysis. Among the solvents, 1,1-dichloro-1-fluoroethane (HCFC 141b) and dichloropentafluoropropane (HCFC 225) are the most suitable alternatives for cleaning and verification. However, use of HCFC 141b is restricted, HCFC 225 introduces toxicity hazards, and the NDE and TOC methods of verification are not suitable for processes at LaRC. Therefore, the interim recommendation is to sparingly use CFC-113 for the very difficult cleaning tasks where safety is critical and to use HCFC 225 to clean components in a controlled laboratory environment. Meanwhile, evaluation must continue on new solvents and procedures to find one suited to LaRC's oxygen cleaning needs.
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Guideline Implementation: Surgical Smoke Safety.
Fencl, Jennifer L
2017-05-01
Research conducted during the past four decades has demonstrated that surgical smoke generated from the use of energy-generating devices in surgery contains toxic and biohazardous substances that present risks to perioperative team members and patients. Despite the increase in information available, however, perioperative personnel continue to demonstrate a lack of knowledge of these hazards and lack of compliance with recommendations for evacuating smoke during surgical procedures. The new AORN "Guideline for surgical smoke safety" provides guidance on surgical smoke management. This article focuses on key points of the guideline to help perioperative personnel promote smoke-free work environments; evacuate surgical smoke; and develop education programs and competency verification tools, policies and procedures, and quality improvement initiatives related to controlling surgical smoke. Perioperative RNs should review the complete guideline for additional information and for guidance when writing and updating policies and procedures. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.
30 CFR 250.1506 - How often must I train my employees?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 250.1506 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT... the knowledge and skills that employees need to perform their assigned well control, deepwater well... periodic training and verification of well control, deepwater well control, or production safety knowledge...
TQAP for Verification of Qualitative Lead Test Kits
Lead-based paint test kits are available to help homeowners and contractors identify lead-based paint hazards before any Renovation, Repair, and Painting (RRP) activities take place so that proper health and safety measures can be enacted. However, many of these test kits ...
DOT National Transportation Integrated Search
1993-05-01
The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...
Integrated vehicle-based safety systems heavy-truck on-road test report
DOT National Transportation Integrated Search
2008-08-01
This report presents results from a series of on-road verification tests performed to determine the readiness of a prototype : integrated warning system to advance to field testing, as well as to identify areas of system performance that should be im...
Integrated vehicle-based safety systems light-vehicle on-road test report
DOT National Transportation Integrated Search
2008-08-01
This report presents results from a series of on-road verification tests performed to determine the readiness of a prototype : integrated warning system to advance to field testing, as well as to identify areas of system performance that should be im...
Space transportation system payload interface verification
NASA Technical Reports Server (NTRS)
Everline, R. T.
1977-01-01
The paper considers STS payload-interface verification requirements and the capability provided by STS to support verification. The intent is to standardize as many interfaces as possible, not only through the design, development, test and evaluation (DDT and E) phase of the major payload carriers but also into the operational phase. The verification process is discussed in terms of its various elements, such as the Space Shuttle DDT and E (including the orbital flight test program) and the major payload carriers DDT and E (including the first flights). Five tools derived from the Space Shuttle DDT and E are available to support the verification process: mathematical (structural and thermal) models, the Shuttle Avionics Integration Laboratory, the Shuttle Manipulator Development Facility, and interface-verification equipment (cargo-integration test equipment).
24 CFR 960.259 - Family information and verification.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Family information and verification... URBAN DEVELOPMENT ADMISSION TO, AND OCCUPANCY OF, PUBLIC HOUSING Rent and Reexamination § 960.259 Family information and verification. (a) Family obligation to supply information. (1) The family must supply any...
The Learner Verification of Series r: The New Macmillan Reading Program; Highlights.
ERIC Educational Resources Information Center
National Evaluation Systems, Inc., Amherst, MA.
National Evaluation Systems, Inc., has developed curriculum evaluation techniques, in terms of learner verification, which may be used to help the curriculum-development efforts of publishing companies, state education departments, and universities. This document includes a summary of the learner-verification approach, with data collected about a…
24 CFR 960.259 - Family information and verification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Family information and verification... URBAN DEVELOPMENT ADMISSION TO, AND OCCUPANCY OF, PUBLIC HOUSING Rent and Reexamination § 960.259 Family information and verification. (a) Family obligation to supply information. (1) The family must supply any...
NASA Astrophysics Data System (ADS)
Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.
2013-08-01
Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations, and capture applications that require a high level of autonomy, optimal path planning, and dynamic safety margins. Such control techniques impose high computational demands because large optimization problems must be solved online. The development and implementation, in a flight-representative avionic architecture, of an MPC-based Guidance, Navigation and Control system has been investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures, and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.
NASA Technical Reports Server (NTRS)
1992-01-01
This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.
Interpreter composition issues in the formal verification of a processor-memory module
NASA Technical Reports Server (NTRS)
Fura, David A.; Cohen, Gerald C.
1994-01-01
This report describes interpreter composition techniques suitable for the formal specification and verification of a processor-memory module using the HOL theorem proving system. The processor-memory module is a multichip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. Modeling and verification methods were developed that permit provably secure composition at the transaction-level of specification, significantly reducing the complexity of the hierarchical verification of the system.
Verification and Validation Studies for the LAVA CFD Solver
NASA Technical Reports Server (NTRS)
Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.
2013-01-01
The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
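As a small illustration of the MMS bookkeeping mentioned above, the sketch below computes the observed order of accuracy from discretization errors measured against a manufactured solution on systematically refined grids. The error values are illustrative placeholders, not LAVA results; the assumed relation is p = log(E_coarse/E_fine)/log(r) for refinement ratio r.

```python
# Minimal sketch of Method of Manufactured Solutions (MMS) order verification:
# given errors on successively refined grids, report the observed order p.
import math

def observed_order(e_coarse, e_fine, refinement_ratio):
    # p = log(E_coarse / E_fine) / log(r)
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

errors = [4.0e-3, 1.0e-3, 2.5e-4]   # illustrative L2 errors vs. manufactured solution
r = 2.0                              # grid refinement ratio
for e_c, e_f in zip(errors, errors[1:]):
    print(f"observed order ~ {observed_order(e_c, e_f, r):.2f}")   # ~2.00 for a 2nd-order scheme
```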
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel between parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.
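One of the verification cases named above, laminar flow between parallel plates, has the closed-form velocity profile u(y) = (3/2)·u_mean·(1 - (y/h)^2), so a computed profile can be scored with an L2 error norm. The sketch below shows that check with synthetic "computed" values standing in for solver output; it is not SAM code.

```python
# Minimal sketch: check a computed channel-flow velocity profile against the
# analytical fully developed laminar profile between parallel plates.
import numpy as np

def analytic_channel_profile(y, u_mean, half_height):
    # u(y) = (3/2) * u_mean * (1 - (y/h)^2) for fully developed laminar flow
    return 1.5 * u_mean * (1.0 - (y / half_height) ** 2)

y = np.linspace(-1.0, 1.0, 21)                    # half-height h = 1 (nondimensional)
u_exact = analytic_channel_profile(y, u_mean=1.0, half_height=1.0)
u_computed = u_exact + 1e-3 * np.cos(3 * y)       # synthetic stand-in for solver output
l2_error = np.sqrt(np.mean((u_computed - u_exact) ** 2))
print(f"L2 error vs. analytical profile: {l2_error:.2e}")
```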
45 CFR 95.626 - Independent Verification and Validation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...
45 CFR 95.626 - Independent Verification and Validation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...
45 CFR 95.626 - Independent Verification and Validation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...
45 CFR 95.626 - Independent Verification and Validation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...
24 CFR 5.512 - Verification of eligible immigration status.
Code of Federal Regulations, 2010 CFR
2010-04-01
... immigration status. 5.512 Section 5.512 Housing and Urban Development Office of the Secretary, Department of... Noncitizens § 5.512 Verification of eligible immigration status. (a) General. Except as described in paragraph...) Primary verification—(1) Automated verification system. Primary verification of the immigration status of...
Safe use of electronic health records and health information technology systems: trust but verify.
Denham, Charles R; Classen, David C; Swenson, Stephen J; Henderson, Michael J; Zeltner, Thomas; Bates, David W
2013-12-01
We provide context for discussions of health information technology (HIT) system safety hazards, describe how electronic health record-computer prescriber order entry (EHR-CPOE) simulation has already identified unrecognized hazards in HIT on a national scale, helping make EHR-CPOE systems safer, and make the case for all stakeholders to leverage proven methods and teams in HIT performance verification. A national poll of safety, quality improvement, and health-care administrative leaders identified health information technology safety as the hazard of greatest concern for 2013. Quality, HIT, and safety leaders are very concerned about technology performance risks as addressed in the Health Information Technology and Patient Safety report of the Institute of Medicine; these risks are being addressed by the Office of the National Coordinator for Health Information Technology of the U.S. Department of Health and Human Services in its proposed plans. We describe the evolution of postdeployment testing of HIT performance, including the results of national deployment of Texas Medical Institute of Technology's electronic health record computer prescriber order entry (TMIT EHR-CPOE) Flight Simulator verification test that is addressed in these two reports, and the safety hazards of concern to leaders. A global webinar for health-care leaders addressed the top patient safety hazards in the areas of leadership, practices, and technologies. A poll of 76 of the 221 organizations participating in the webinar revealed that HIT hazards were the participants' greatest concern of all 30 hazards presented. Of those polled, 89% rated HIT patient/data mismatches in EHRs and HIT systems as a 9 or 10 on a scale of 1 to 10 as a hazard of great concern. Review of a key study of postdeployment testing of the safety performance of operational EHR systems with CPOE implemented in 62 hospitals, using the TMIT EHR-CPOE simulation tool, showed that only 53% of the medication orders that could have resulted in fatalities were detected. The study also showed significant variability in the performance of specific EHR vendor systems, with the same vendor product scoring as high as 75% detection in one health-care organization and below 10% in another. HIT safety hazards should be taken very seriously, and the need for proven, robust, and regular postdeployment performance verification measurement of EHR system operations in every health-care organization is critical to ensure that these systems are safe for every patient. The TMIT EHR-CPOE flight simulator is a well-tested and scalable tool that can be used to identify performance gaps in EHR and other HIT systems. It is critical that suppliers, providers, and purchasers of health care partner with HIT stakeholders and leverage the existing body of work, expert teams, and collaborative networks to make care safer, and form public-private partnerships to accelerate safety in HIT. A global collaborative is already underway incorporating a "trust but verify" philosophy.
Bioluminescence lights the way to food safety
NASA Astrophysics Data System (ADS)
Brovko, Lubov Y.; Griffiths, Mansel W.
2003-07-01
The food industry is increasingly adopting food safety and quality management systems that are more proactive and preventive than those used in the past which have tended to rely on end product testing and visual inspection. The regulatory agencies in many countries are promoting one such management tool, Hazard Analysis Critical Control Point (HACCP), as a way to achieve a safer food supply and as a basis for harmonization of trading standards. Verification that the process is safe must involve microbiological testing but the results need not be generated in real-time. Of all the rapid microbiological tests currently available, the only ones that come close to offering real-time results are bioluminescence-based methods. Recent developments in application of bioluminescence for food safety issues are presented in the paper. These include the use of genetically engineered microorganisms with bioluminescent and fluorescent phenotypes as a real time indicator of physiological state and survival of food-borne pathogens in food and food processing environments as well as novel bioluminescent-based methods for rapid detection of pathogens in food and environmental samples. Advantages and pitfalls of the methods are discussed.
Built-in-Test Verification Techniques
1987-02-01
This report documents the results of the effort for the Rome Air Development Center Contract F30602-84-C-0021, BIT Verification Techniques. The work was...Richard Spillman of Spillman Research Associates. The principal investigators were Mike Partridge and subsequently Jeffrey Albert. The contract was...two-year effort to develop techniques for Built-In Test (BIT) verification. The objective of the contract was to develop specifications and technical
NASA Technical Reports Server (NTRS)
1986-01-01
Activities that will be conducted in support of the development and verification of the Block 2 Solid Rocket Motor (SRM) are described. Development includes design, fabrication, processing, and testing activities in which the results are fed back into the project. Verification includes analytical and test activities which demonstrate SRM component/subassembly/assembly capability to perform its intended function. The management organization responsible for formulating and implementing the verification program is introduced. It also identifies the controls which will monitor and track the verification program. Integral with the design and certification of the SRM are other pieces of equipment used in transportation, handling, and testing which influence the reliability and maintainability of the SRM configuration. The certification of this equipment is also discussed.
24 CFR 985.3 - Indicators, HUD verification methods and ratings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Indicators, HUD verification..., HUD verification methods and ratings. This section states the performance indicators that are used to assess PHA Section 8 management. HUD will use the verification method identified for each indicator in...
Verification test report on a solar heating and hot water system
NASA Technical Reports Server (NTRS)
1978-01-01
Information is provided on the development, qualification, and acceptance verification of commercial solar heating and hot water systems and components. The verification covers performance, efficiency, and the various methods used, such as similarity, analysis, inspection, and test, that are applicable to satisfying the verification requirements.
24 CFR 1000.128 - Is income verification required for assistance under NAHASDA?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Is income verification required for assistance under NAHASDA? 1000.128 Section 1000.128 Housing and Urban Development Regulations Relating to... § 1000.128 Is income verification required for assistance under NAHASDA? (a) Yes, the recipient must...
NASA Technical Reports Server (NTRS)
Oesch, Christopher; Dick, Brandon; Rupp, Timothy
2015-01-01
The development of highly complex and advanced actuation systems to meet customer demands has accelerated as the use of real-time testing technology expands into multiple markets at Moog. Systems developed for the autonomous docking of human-rated spacecraft to the International Space Station (ISS) encompass multi-operational characteristics that place unique constraints on an actuation system. Real-time testing hardware has been used as a platform for incremental testing and development of the linear actuation system that controls initial capture and docking for vehicles visiting the ISS. This presentation will outline the role of dSPACE hardware as a platform for rapid control-algorithm prototyping as well as an Electromechanical Actuator (EMA) system dynamic loading simulator, both used at Moog to develop the safety-critical Linear Actuator System (LAS) of the NASA Docking System (NDS).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-18
...: Proposed Rules on Foreign Supplier Verification Programs and the Accreditation of Third-Party Auditors... Accreditation of Third-Party Auditors/Certification Bodies would strengthen the quality, objectivity, and... public can review the proposals on FSVP and the Accreditation of Third-Party Auditors/ Certification...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-16
...: Proposed Rules on Foreign Supplier Verification Programs and the Accreditation of Third-Party Auditors... Accreditation of Third-Party Auditors/Certification Bodies would strengthen the quality, objectivity, and... that the public can review the proposals on FSVP and the Accreditation of Third-Party Auditors...
77 FR 9888 - Shiga Toxin-Producing Escherichia coli
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... Toxin-Producing Escherichia coli in Certain Raw Beef Products AGENCY: Food Safety and Inspection Service... routine verification sampling and testing for raw beef manufacturing trimmings for six non-O157 Shiga... announced in September 2011 plans to test certain raw beef products for these six STEC serogroups in...
Code of Federal Regulations, 2010 CFR
2010-01-01
..., procedures, and other arrangements that control reasonably foreseeable risks to customers or to the safety... other suspicious activity related to, a covered account; and (5) Notice from customers, victims of... policies and procedures regarding identification and verification set forth in the Customer Identification...
Code of Federal Regulations, 2011 CFR
2011-04-01
... processor shall verify that the HACCP plan is adequate to control food safety hazards that are reasonably... minimum: (1) Reassessment of the HACCP plan. A reassessment of the adequacy of the HACCP plan whenever any changes occur that could affect the hazard analysis or alter the HACCP plan in any way or at least...
Code of Federal Regulations, 2010 CFR
2010-04-01
... processor shall verify that the HACCP plan is adequate to control food safety hazards that are reasonably... minimum: (1) Reassessment of the HACCP plan. A reassessment of the adequacy of the HACCP plan whenever any changes occur that could affect the hazard analysis or alter the HACCP plan in any way or at least...
NASA Technical Reports Server (NTRS)
Gavin, Thomas R.
2006-01-01
This viewgraph presentation reviews the many parts of the JPL mission planning process that the project manager has to work with. Some of them are: NASA & JPL's institutional requirements, the mission systems design requirements, the science interactions, the technical interactions, financial requirements, verification and validation, safety and mission assurance, and independent assessment, review and reporting.
A Tool for Intersecting Context-Free Grammars and Its Applications
NASA Technical Reports Server (NTRS)
Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.
2015-01-01
This paper describes a tool for intersecting context-free grammars. Since this problem is undecidable the tool follows a refinement-based approach and implements a novel refinement which is complete for regularly separable grammars. We show its effectiveness for safety verification of recursive multi-threaded programs.
30 CFR 250.1506 - How often must I train my employees?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Section 250.1506 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE OIL...) Establish procedures to verify adequate retention of the knowledge and skills that employees need to perform... programs provide for periodic training and verification of well control or production safety knowledge and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B
Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse’s Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher-performing algorithm was selected as the primary contouring method, and the other was used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2 cm. Using a logit model, the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models were verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5 cm (brain), 0.75 cm (mandible, cord), 1 cm (brainstem, cochlea), or 1.25 cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, detectable shifts of mandible and brainstem were reduced by 0.25 cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
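A minimal sketch of the kind of logit model described above follows: contour-comparison metrics (mean distance to agreement and true positive rate) serve as covariates of a logistic model for "shift detected", and candidate shifts are scanned for their predicted detection probability. All data values and the shift-to-metric mapping are fabricated placeholders, not the study's measurements.

```python
# Minimal sketch: logistic model relating contour-comparison metrics to
# "incorrect contour detected", then a scan over candidate shift sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mean distance to agreement (cm), true positive rate]; label = 1
# if the introduced shift was flagged by the verification algorithm.
X = np.array([[0.10, 0.95], [0.20, 0.90], [0.40, 0.80],
              [0.60, 0.70], [0.90, 0.55], [1.20, 0.40]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

def detection_probability(mda_cm, tpr):
    return model.predict_proba([[mda_cm, tpr]])[0, 1]

# Candidate shifts mapped to (fabricated) metric values measured for them.
for shift_cm, feats in [(0.5, (0.45, 0.78)), (0.75, (0.65, 0.68)), (1.0, (0.85, 0.58))]:
    print(shift_cm, round(detection_probability(*feats), 2))
```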
Limitations in learning: How treatment verifications fail and what to do about it?
Richardson, Susan; Thomadsen, Bruce
The purposes of this study were to provide dialog on why classic incident learning systems have been insufficient for patient safety improvement, to discuss failures in treatment verification, and to provide context for the reasons and lessons that can be learned from these failures. Historically, incident learning in brachytherapy is performed via database mining, which might include reading of event reports and incidents, followed by incorporating verification procedures to prevent similar incidents. A description of both classic event reporting databases and current incident learning and reporting systems is given. Real examples of treatment failures based on firsthand knowledge are presented to evaluate the effectiveness of verification. These failures will be described and analyzed by outlining potential pitfalls and problems based on firsthand knowledge. Databases and incident learning systems can be limited in value and fail to provide enough detail for physicists seeking process improvement. Four examples of treatment verification failures experienced firsthand by experienced brachytherapy physicists are described. These include both underverification and oververification of various treatment processes. Database mining is an insufficient method to effect substantial improvements in the practice of brachytherapy. New incident learning systems are still immature and being tested. Instead, a new method of shared learning and implementation of changes must be created. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Using computer graphics to enhance astronaut and systems safety
NASA Technical Reports Server (NTRS)
Brown, J. W.
1985-01-01
Computer graphics is being employed at the NASA Johnson Space Center as a tool to perform rapid, efficient and economical analyses for man-machine integration, flight operations development and systems engineering. The Operator Station Design System (OSDS), a computer-based facility featuring a highly flexible and versatile interactive software package, PLAID, is described. This unique evaluation tool, with its expanding data base of Space Shuttle elements, various payloads, experiments, crew equipment and man models, supports a multitude of technical evaluations, including spacecraft and workstation layout, definition of astronaut visual access, flight techniques development, cargo integration and crew training. As OSDS is being applied to the Space Shuttle, Orbiter payloads (including the European Space Agency's Spacelab) and future space vehicles and stations, astronaut and systems safety are being enhanced. Typical OSDS examples are presented. By performing physical and operational evaluations during early conceptual phases, supporting systems verification for flight readiness, and applying its capabilities to real-time mission support, the OSDS provides the wherewithal to satisfy a growing need of the current and future space programs for efficient, economical analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Phillip A.; O'Hagan, Ryan; Shumaker, Brent
The Advanced Test Reactor (ATR) has always had a comprehensive procedure to verify the performance of its critical transmitters and sensors, including RTDs, and pressure, level, and flow transmitters. These transmitters and sensors have been periodically tested for response time and calibration verification to ensure accuracy. With the implementation of online monitoring (OLM) techniques at ATR, the calibration verification and response time testing of these transmitters and sensors are performed remotely, automatically, and hands-off, cover more portions of the system, and can be carried out at almost any time during process operations. The work was done under a DOE-funded SBIR project carried out by AMS. As a result, ATR is now able to save the manpower that has been spent over the years on manual calibration verification and response time testing of its temperature and pressure sensors and refocus those resources towards equipment reliability needs. More importantly, implementation of OLM will help enhance overall availability, safety, and efficiency. Together with the equipment reliability programs of ATR, the integration of OLM will also help with the I&C aging management goals of the Department of Energy and the long-term operation of ATR.
Verification and Planning Based on Coinductive Logic Programming
NASA Technical Reports Server (NTRS)
Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal
2008-01-01
Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently, coinduction has been incorporated into logic programming and an elegant operational semantics has been developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and Co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and Co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
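As a concrete version of the reachability argument in the last sentence, the sketch below performs a plain breadth-first search over an explicitly enumerated Kripke structure and reports whether any designated bad state is reachable, i.e., whether a counter-example to the safety property exists. The transition relation and state names are illustrative only; this is not the co-SLD machinery itself.

```python
# Minimal sketch: a safety property holds iff no "bad" state is reachable from
# the initial states of the (explicitly enumerated) Kripke structure.
from collections import deque

def violates_safety(initial, transitions, bad_states):
    seen, queue = set(initial), deque(initial)
    while queue:
        state = queue.popleft()
        if state in bad_states:
            return True          # counter-example state reached
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False                 # no reachable bad state: property holds

transitions = {"idle": ["arm"], "arm": ["fire", "idle"], "fire": ["idle"]}
print(violates_safety(["idle"], transitions, bad_states={"fire_unarmed"}))   # False
```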
Simulation environment based on the Universal Verification Methodology
NASA Astrophysics Data System (ADS)
Fiergolski, A.
2017-01-01
Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer, by setting the verification goals, starts with a structured plan. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to the device under test (DUT). Progress is measured by coverage monitors added to the simulation environment. In this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects that have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.
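The coverage-driven flow described above is normally written in SystemVerilog/UVM; the sketch below transliterates the idea into plain Python so the roles are visible: constrained-random stimulus generation, a scoreboard check against a reference model, and coverage bins that record which functionality was exercised. The DUT here is a trivial stand-in, not any of the designs named in the abstract.

```python
# Minimal sketch of a coverage-driven verification loop (UVM roles in Python).
import random

def dut(cmd, data):                       # stand-in for the device under test
    return (data + 1) % 256 if cmd == "inc" else data

coverage = {("inc", "low"): 0, ("inc", "high"): 0,
            ("nop", "low"): 0, ("nop", "high"): 0}
errors = 0

for _ in range(1000):                     # constrained-random stimulus
    cmd = random.choice(["inc", "nop"])
    data = random.randrange(256)
    expected = (data + 1) % 256 if cmd == "inc" else data   # reference model
    if dut(cmd, data) != expected:        # scoreboard check
        errors += 1
    coverage[(cmd, "low" if data < 128 else "high")] += 1   # coverage bins

hit = sum(1 for v in coverage.values() if v > 0)
print(f"errors={errors}, coverage={hit}/{len(coverage)} bins hit")
```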
NASA Technical Reports Server (NTRS)
2002-01-01
The NASA/Navy Benchmarking Exchange (NNBE) was undertaken to identify practices and procedures and to share lessons learned in the Navy's submarine and NASA's human space flight programs. The NNBE focus is on safety and mission assurance policies, processes, accountability, and control measures. This report is an interim summary of activity conducted through October 2002, and it coincides with completion of the first phase of a two-phase fact-finding effort.In August 2002, a team was formed, co-chaired by senior representatives from the NASA Office of Safety and Mission Assurance and the NAVSEA 92Q Submarine Safety and Quality Assurance Division. The team closely examined the two elements of submarine safety (SUBSAFE) certification: (1) new design/construction (initial certification) and (2) maintenance and modernization (sustaining certification), with a focus on: (1) Management and Organization, (2) Safety Requirements (technical and administrative), (3) Implementation Processes, (4) Compliance Verification Processes, and (5) Certification Processes.
Practice of Regulatory Science (Development of Medical Devices).
Niimi, Shingo
2017-01-01
Prototypes of medical devices are made in accordance with the needs of clinical practice, and for systems required during the initial process of medical device development for new surgical practices. Verification of whether these prototypes produce the intended performance specifications is conducted using basic tests such as mechanical and animal tests. The prototypes are then improved and modified until satisfactory results are obtained. After a prototype passes through a clinical trial process similar to that for new drugs, application for approval is made. In the approval application process, medical devices are divided into new, improved, and generic types. Reviewers judge the validity of intended use, indications, operation procedures, and precautions, and in addition evaluate the balance between risk and benefit in terms of efficacy and safety. Other characteristics of medical devices are the need for the user to attain proficiency in usage techniques to ensure efficacy and safety, and the existence of a variety of medical devices for which assessment strategies differ, including differences in impact on the body in cases in which a physical burden to the body or failure of a medical device develops. Regulatory science of medical devices involves prediction, judgment, and evaluation of efficacy, safety, and quality, from which data result which can become indices in the development stages from design to application for approval. A reduction in the number of animals used for testing, improvement in efficiency, reduction of the necessity for clinical trials, etc. are expected through rational setting of evaluation items.
Ada(R) Test and Verification System (ATVS)
NASA Technical Reports Server (NTRS)
Strelich, Tom
1986-01-01
The Ada Test and Verification System (ATVS) functional description and high level design are completed and summarized. The ATVS will provide a comprehensive set of test and verification capabilities specifically addressing the features of the Ada language, support for embedded system development, distributed environments, and advanced user interface capabilities. Its design emphasis was on effective software development environment integration and flexibility to ensure its long-term use in the Ada software development community.
The Environmental Technology Verification Program, established by the EPA, is designed to accelerate the development and commercialization of new or improved technologies through third-party verification and reporting of performance.
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
This article considers the process of verification (calibration) of the secondary equipment of oil metering units. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex switches the measuring channels of the controller under verification and the reference channels of the calibrator in accordance with the specified algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors, and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in automatic verification mode (with an open communication protocol) or in semi-automatic verification mode (without it). A distinctive feature of the approach is the development of a universal signal switch operating under software control, which can be configured for various verification (calibration) methods and thus covers the entire range of secondary-equipment controllers at metering units. Automatic verification with such a hardware and software system shortens the verification time by a factor of 5-10 and increases the reliability of measurements by excluding the influence of the human factor.
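A minimal sketch of the verification loop described above: for each test point the calibrator is driven to a reference value, the controller's reading is taken back, and the relative error is logged against an allowed limit to build a protocol. The instrument interfaces and the 0.25% limit are assumptions for illustration, not the system's actual API or tolerance.

```python
# Minimal sketch of an automated verification pass over one measuring channel.
def verify_channel(set_reference, read_controller, test_points, max_error_pct=0.25):
    protocol = []
    for ref in test_points:
        set_reference(ref)                      # drive the calibrator output
        measured = read_controller()            # read back the verified channel
        err_pct = 100.0 * (measured - ref) / ref
        protocol.append((ref, measured, round(err_pct, 3), abs(err_pct) <= max_error_pct))
    return protocol

# Toy usage with simulated instruments standing in for the real hardware.
readings = iter([4.003, 12.01, 19.96])
proto = verify_channel(lambda ref: None, lambda: next(readings), [4.0, 12.0, 20.0])
for row in proto:
    print(row)                                  # (reference, measured, error %, pass)
```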
Inoue, Takao; Mukai, Kazuhiko
2017-01-18
Although all-solid-state lithium-ion batteries (ALIBs) have been regarded as the ultimate safe battery, their true character has so far been an enigma. In this paper, we developed an all-inclusive microcell (AIM) for differential scanning calorimetry (DSC) analysis to clarify the degree of safety (DOS) of ALIBs. The AIM possesses all the components needed to work as a battery by itself, and DOS is determined from the total heat generation (ΔH) of the ALIB relative to that of a conventional LIB. When DOS = 100%, the safety of the ALIB is exactly the same as that of the LIB; when DOS = 0%, the ALIB reaches ultimate safety. We investigated two types of LIB-AIM and three types of ALIB-AIM. Surprisingly, all the ALIBs exhibit one or two exothermic peaks above 250 °C, with a DOS of 20-30%. The exothermic peak is attributed to the reaction between oxygen released from the positive electrode and the Li metal in the negative electrode. Hence, ALIBs are found to be flammable, as in the case of LIBs. We also attempted to improve the safety of ALIBs and succeeded in decreasing the DOS to ∼16% by incorporating Ketjenblack into the positive electrode as an oxygen scavenger. Based on ΔH as a function of voltage window, a safety map for LIBs and ALIBs is proposed.
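The DOS bookkeeping reduces to a simple ratio; a sketch follows, assuming ΔH values expressed per gram of cell. The numbers are illustrative, chosen only to land inside the 20-30% range reported above.

```python
# Minimal sketch: degree of safety (DOS) as the ALIB heat generation expressed
# as a percentage of the conventional LIB reference measured under the same DSC protocol.
def degree_of_safety(delta_h_alib_j_per_g, delta_h_lib_j_per_g):
    return 100.0 * delta_h_alib_j_per_g / delta_h_lib_j_per_g

print(degree_of_safety(500.0, 2000.0))   # 25.0 -> inside the 20-30% range cited above
```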
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large enough that they address the same basic problems of design, fabrication, assembly, and test as the full-scale systems, which were sized at 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterizations and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications that include mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
Verification Testing of Air Pollution Control Technology Quality Management Plan Revision 2.3
The Air Pollution Control Technology Verification Center was established in 1995 as part of the EPA’s Environmental Technology Verification Program to accelerate the development and commercialization of improved environmental technologies through third-party verification and reporting of performance.
ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM FOR MONITORING AND CHARACTERIZATION
The Environmental Technology Verification Program is a service of the Environmental Protection Agency designed to accelerate the development and commercialization of improved environmental technology through third party verification and reporting of performance. The goal of ETV i...
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of both audio and video modalities for audio-visual speaker verification is compared with face-only and speaker-only verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the new PDAtabase developed within the scope of the SecurePhone project.
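A minimal sketch of the GMM verification idea follows: a client model and a world (background) model are trained, and a trial is accepted when the average log-likelihood ratio of its features clears a threshold. The random arrays stand in for DCT face features or speech features, and scikit-learn's GaussianMixture is used as a generic GMM; this is not the BECARS implementation.

```python
# Minimal sketch: GMM log-likelihood-ratio verification with placeholder features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_train = rng.normal(0.0, 1.0, size=(500, 12))    # stand-in client features
world_train = rng.normal(0.5, 1.5, size=(2000, 12))     # stand-in background features

client_gmm = GaussianMixture(n_components=8, random_state=0).fit(client_train)
world_gmm = GaussianMixture(n_components=16, random_state=0).fit(world_train)

def verify(features, threshold=0.0):
    # score() returns the mean per-sample log-likelihood under each model
    llr = client_gmm.score(features) - world_gmm.score(features)
    return llr > threshold, llr

print(verify(rng.normal(0.0, 1.0, size=(50, 12))))   # genuine-like trial
print(verify(rng.normal(2.0, 1.5, size=(50, 12))))   # impostor-like trial
```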
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B.
1993-03-01
This report presents the results of an oversight assessment (OA) conducted by the US Department of Energy's (DOE) Office of Environment, Safety and Health (EH) of the operational readiness review (ORR) activities for the Cold Chemical Runs (CCRs) at the Defense Waste Processing Facility (DWPF) located at Savannah River Site (SRS). The EH OA of this facility took place concurrently with an ORR performed by the DOE Office of Environmental Restoration and Waste Management (EM). The EM ORR was conducted from September 28, 1992, through October 9, 1992, although portions of the EM ORR were extended beyond this period. The EH OA evaluated the comprehensiveness and effectiveness of the EM ORR. The EH OA was designed to ascertain whether the EM ORR was thorough and demonstrated sufficient inquisitiveness to verify that the implementation of programs and procedures is adequate to assure the protection of worker safety and health. The EH OA was carried out in accordance with the protocol and procedures of the EH "Program for Oversight Assessment of Operational Readiness Evaluations for Startups and Restarts," dated September 15, 1992. Based on its OA and verification of the resolution of EH OA findings, the EH OA Team believes that the startup of the CCRs may be safely begun, pending satisfactory completion and verification of the prestart findings identified by the EM ORR. The EH OA was based primarily on an evaluation of the comprehensiveness and effectiveness of the EM ORR and addressed the following areas: industrial safety, industrial hygiene, and respiratory protection; fire protection; and chemical safety. The EH OA conducted independent "vertical-slice" reviews to confirm EM ORR results in the areas of confined-space entry, respiratory protection, fire protection, and chemical safety.
Ares I-X Range Safety Simulation Verification and Analysis IV and V
NASA Technical Reports Server (NTRS)
Tarpley, Ashley; Beaty, James; Starr, Brett
2010-01-01
NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. NASA generated a Range Safety (RS) flight data package to meet the RS trajectory data requirements defined in Air Force Space Command Manual 91-710. Some products included in the flight data package were a nominal ascent trajectory, ascent flight envelope trajectories, and malfunction turn trajectories. These data are used by the Air Force's 45th Space Wing (45SW) to ensure Eastern Range public safety and to make flight termination decisions on launch day. Due to the criticality of the RS data with regard to public safety and mission success, an independent validation and verification (IV&V) effort was undertaken to accompany the data generation analyses to ensure utmost data quality and correct adherence to requirements. Multiple NASA centers and contractor organizations were assigned specific products to IV&V. The data generation and IV&V work was coordinated through the Launch Constellation Range Safety Panel's Trajectory Working Group, which included members from the prime and IV&V organizations as well as the 45SW. As a result of the IV&V efforts, the RS product package was delivered with confidence that two independent organizations using separate simulation software generated data that met the range requirements and yielded similar results. This document captures the Ares I-X RS product IV&V analysis, including the methodology used to verify inputs, simulation, and output data for an RS product. Additionally, a discussion of lessons learned is presented to capture advantages and disadvantages of the IV&V processes used.
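The cross-check at the heart of such an IV&V effort can be pictured as a sample-by-sample comparison of independently generated trajectories against an agreed tolerance; a sketch follows. The time grid, altitudes, and tolerance are illustrative placeholders, not Ares I-X data.

```python
# Minimal sketch: flag points where two independently generated trajectories
# (prime vs. IV&V simulation) disagree beyond an agreed tolerance.
import numpy as np

def compare_trajectories(t, prime, ivv, tol):
    """Return the times at which |prime - ivv| exceeds tol (arrays aligned on t)."""
    diff = np.abs(np.asarray(prime) - np.asarray(ivv))
    return [float(ti) for ti, d in zip(t, diff) if d > tol]

t = np.linspace(0.0, 120.0, 7)                                  # s
alt_prime = np.array([0, 2.1, 8.4, 18.9, 33.2, 51.0, 72.5])     # km, placeholder
alt_ivv   = np.array([0, 2.1, 8.5, 18.8, 33.3, 51.2, 72.4])     # km, placeholder
print(compare_trajectories(t, alt_prime, alt_ivv, tol=0.5))     # [] -> agreement
```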
Static and Dynamic Verification of Critical Software for Space Applications
NASA Astrophysics Data System (ADS)
Moreira, F.; Maia, R.; Costa, D.; Duro, N.; Rodríguez-Dapena, P.; Hjortnaes, K.
Space technology is no longer used only for highly specialised research activities or for sophisticated manned space missions. Modern society relies more and more on space technology and applications for everyday activities. Worldwide telecommunications, Earth observation, navigation and remote sensing are only a few examples of space applications on which we rely daily. The European-driven global navigation system Galileo and its associated applications, e.g. air traffic management, vessel and car navigation, will significantly expand the already stringent safety requirements for space-based applications. Apart from their usefulness and practical applications, every single piece of onboard software deployed into space represents an enormous investment. With a long lifetime of operation and being extremely difficult to maintain and upgrade, at least compared with "mainstream" software development, the importance of ensuring their correctness before deployment is immense. Verification & Validation techniques and technologies have a key role in ensuring that the onboard software is correct and error free, or at least free from errors that can potentially lead to catastrophic failures. Many RAMS techniques, including both static criticality analysis and dynamic verification techniques, have been used as a means to verify and validate critical software and to ensure its correctness. But, traditionally, these have been applied in isolation. One of the main reasons is the immaturity of this field as regards its application to the growing number of software products within space systems. This paper presents an innovative way of combining both static and dynamic techniques, exploiting their synergy and complementarity for software fault removal. The methodology proposed is based on the combination of Software FMEA and FTA with fault-injection techniques. The case study herein described is implemented with support from two tools: the SoftCare tool for the SFMEA and SFTA, and the Xception tool for fault injection. Keywords: Verification & Validation, RAMS, Onboard software, SFMEA, SFTA, Fault-injection. This work is being performed under the project STADY (Applied Static and Dynamic Verification of Critical Software), ESA/ESTEC Contract Nr. 15751/02/NL/LvH.
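To make the fault-injection half of the combined approach concrete, the sketch below corrupts a data word the way a single bit flip would and confirms that a simple checksum guard notices the change. The checksum guard is an illustrative stand-in for whatever detection mechanisms the onboard software actually implements; it is not how the Xception tool works.

```python
# Minimal sketch: inject a single-bit fault into a data frame and check that a
# simple detection mechanism (here, a 16-bit additive checksum) catches it.
def checksum(words):
    return sum(words) & 0xFFFF

def inject_bit_flip(words, index, bit):
    corrupted = list(words)
    corrupted[index] ^= (1 << bit)      # emulate a single-event upset
    return corrupted

frame = [0x1A2B, 0x3C4D, 0x5E6F]
stored_sum = checksum(frame)

faulty = inject_bit_flip(frame, index=1, bit=7)
detected = checksum(faulty) != stored_sum
print(f"fault detected: {detected}")    # True for this single flip
```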
Testing of electrical equipment for a commercial grade dedication program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.L.; Srinivas, N.
1995-10-01
The availability of qualified safety related replacement parts for use in nuclear power plants has decreased over time. This has caused many nuclear power plants to purchase commercial grade items (CGI) and utilize the commercial grade dedication process to qualify the items for use in nuclear safety related applications. The laboratories of Technical and Engineering Services (the testing facility of Detroit Edison) have been providing testing services for verification of critical characteristics of these items. This paper presents an overview of the experience in testing electrical equipment with an emphasis on fuses.
Improvements and applications of COBRA-TF for stand-alone and coupled LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, M.; Cuervo, D.; Ivanov, K.
2006-07-01
The advanced thermal-hydraulic subchannel code COBRA-TF has recently been improved and applied to stand-alone and coupled LWR core calculations at the Pennsylvania State Univ. in cooperation with AREVA NP GmbH (Germany) and the Technical Univ. of Madrid. To enable COBRA-TF for academic and industrial applications, including safety margin evaluations and LWR core design analyses, the code programming, numerics, and basic models were revised and substantially improved. The code has undergone an extensive validation, verification, and qualification program. (authors)
Ares I-X Range Safety Trajectory Analyses Overview and Independent Validation and Verification
NASA Technical Reports Server (NTRS)
Tarpley, Ashley F.; Starr, Brett R.; Tartabini, Paul V.; Craig, A. Scott; Merry, Carl M.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle
2011-01-01
All Flight Analysis data products were successfully generated and delivered to the 45SW in time to support the launch. The IV&V effort allowed data generators to work through issues early. Data consistency proved through the IV&V process provided confidence that the delivered data was of high quality. Flight plan approval was granted for the launch. The test flight was successful and had no safety related issues. The flight occurred within the predicted flight envelopes. Post flight reconstruction results verified the simulations accurately predicted the FTV trajectory.
Engineering of the LISA Pathfinder mission—making the experiment a practical reality
NASA Astrophysics Data System (ADS)
Warren, Carl; Dunbar, Neil; Backler, Mike
2009-05-01
LISA Pathfinder represents a unique challenge in the development of scientific spacecraft—not only is the LISA Test Package (LTP) payload a complex integrated development, placing stringent requirements on its developers and the spacecraft, but the payload also acts as the core sensor and actuator for the spacecraft, making the tasks of control design, software development and system verification unusually difficult. The micro-propulsion system which provides the remaining actuation also presents substantial development and verification challenges. As the mission approaches the system critical design review, flight hardware is completing verification and the process of verification using software and hardware simulators and test benches is underway. Preparation for operations has started, but critical milestones for LTP and field effect electric propulsion (FEEP) lie ahead. This paper summarizes the status of the present development and outlines the key challenges that must be overcome on the way to launch.
49 CFR 350.327 - How may States qualify for Incentive Funds?
Code of Federal Regulations, 2014 CFR
2014-10-01
... Incentive Funds? (a) A State may qualify for Incentive Funds if it can demonstrate that its CMV safety... recipients. (3) Upload of CMV accident reports in accordance with current FMCSA policy guidelines. (4) Verification of CDLs during all roadside inspections. (5) Upload of CMV inspection data in accordance with...
49 CFR 350.327 - How may States qualify for Incentive Funds?
Code of Federal Regulations, 2013 CFR
2013-10-01
... Incentive Funds? (a) A State may qualify for Incentive Funds if it can demonstrate that its CMV safety... recipients. (3) Upload of CMV accident reports in accordance with current FMCSA policy guidelines. (4) Verification of CDLs during all roadside inspections. (5) Upload of CMV inspection data in accordance with...
49 CFR 350.327 - How may States qualify for Incentive Funds?
Code of Federal Regulations, 2011 CFR
2011-10-01
... Incentive Funds? (a) A State may qualify for Incentive Funds if it can demonstrate that its CMV safety... recipients. (3) Upload of CMV accident reports in accordance with current FMCSA policy guidelines. (4) Verification of CDLs during all roadside inspections. (5) Upload of CMV inspection data in accordance with...
49 CFR 350.327 - How may States qualify for Incentive Funds?
Code of Federal Regulations, 2012 CFR
2012-10-01
... Incentive Funds? (a) A State may qualify for Incentive Funds if it can demonstrate that its CMV safety... recipients. (3) Upload of CMV accident reports in accordance with current FMCSA policy guidelines. (4) Verification of CDLs during all roadside inspections. (5) Upload of CMV inspection data in accordance with...
Code of Federal Regulations, 2010 CFR
2010-01-01
... risks to customers or to the safety and soundness of the financial institution or creditor from identity... unusual use of, or other suspicious activity related to, a covered account; and (5) Notice from customers... policies and procedures regarding identification and verification set forth in the Customer Identification...
33 CFR 96.340 - Safety Management Certificate: what is it and when is it needed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... if it is a tanker, bulk freight vessel, freight vessel, or a self-propelled mobile offshore drilling... vessel, or a self-propelled mobile offshore drilling unit of 500 gross tons or more, when engaged on... audit; (2) A satisfactory intermediate verification audit requested by the vessel's responsible person...
USDA-ARS?s Scientific Manuscript database
The USDA Food Safety and Inspection Service requires samples of raw broiler parts for performance standard verification for the detection of Campylobacter. Poultry processors must maintain process controls with Campylobacter prevalence levels below 7.7%. Establishments utilize antimicrobial processi...
Code of Federal Regulations, 2010 CFR
2010-10-01
... which DOT agencies regulate your employees. (2) Your proposed written company policy concerning stand... temporary removal from performance of safety-sensitive functions becomes available, directly or indirectly... a covered employee will be subject to stand-down only with respect to the actual performance of...
A Study on Performance and Safety Tests of Electrosurgical Equipment.
Tavakoli Golpaygani, A; Movahedi, M M; Reza, M
2016-09-01
Modern medicine employs a wide variety of instruments with different physiological effects and measurements. Periodic verifications are routinely used in legal metrology for industrial measuring instruments. The correct operation of electrosurgical generators is essential to ensure patient safety and manage the risks associated with the use of high- and low-frequency electrical currents on the human body. The metrological reliability of 20 electrosurgical units in six hospitals (3 private and 3 public) was evaluated in one of the provinces of Iran according to international and national standards. The results show that the HF leakage current of ground-referenced generators is higher than that of isolated generators, that only eight units delivered acceptable output power values, and that the precision of the output power measurements was low. The results indicate a need for new and stricter regulations on periodic performance verification and a medical equipment quality control program, especially for high-risk instruments. It is also necessary to provide training courses for operating staff in the field of metrology in medicine so that they are acquainted with the critical parameters for obtaining accurate results with operating room equipment.
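As a rough illustration of the kind of output-power check described above, the sketch below flags generators whose measured output deviates from the set power by more than an assumed tolerance. The 20% limit and the sample readings are hypothetical, not values from the study.

```python
# Illustrative sketch: flag electrosurgical units whose measured output power
# deviates from the set power by more than an assumed tolerance.
# The 20% tolerance and the sample readings are assumptions, not study data.

def power_deviation(set_watts: float, measured_watts: float) -> float:
    """Return the relative deviation of measured output from the set power."""
    return (measured_watts - set_watts) / set_watts

def check_unit(readings, tolerance=0.20):
    """readings: list of (set_watts, measured_watts) pairs for one generator."""
    failures = [(s, m) for s, m in readings if abs(power_deviation(s, m)) > tolerance]
    return len(failures) == 0, failures

if __name__ == "__main__":
    # Hypothetical verification measurements for one generator
    readings = [(50, 46.0), (100, 93.5), (200, 150.0)]
    ok, failures = check_unit(readings)
    print("PASS" if ok else f"FAIL at setpoints: {failures}")
```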
NASA Technical Reports Server (NTRS)
Clancey, William J.; Linde, Charlotte; Seah, Chin; Shafto, Michael
2013-01-01
The transition from the current air traffic system to the next generation air traffic system will require the introduction of new automated systems, including transferring some functions from air traffic controllers to on-board automation. This report describes a new design verification and validation (V&V) methodology for assessing aviation safety. The approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. The research is part of an effort to develop new modeling and verification methodologies that can assess the safety of flight-critical systems, system configurations, and operational concepts. The 2002 Ueberlingen mid-air collision was chosen for analysis and modeling because one of the main causes of the accident was one crew's response to a conflict between the instructions of the air traffic controller and the instructions of TCAS, an automated Traffic Alert and Collision Avoidance System on-board warning system. It thus furnishes an example of the problem of authority versus autonomy. It provides a starting point for exploring authority/autonomy conflict in the larger system of organization, tools, and practices in which the participants' moment-by-moment actions take place. We have developed a general air traffic system model (not a specific simulation of Überlingen events), called the Brahms Generalized Ueberlingen Model (Brahms-GUeM). Brahms is a multi-agent simulation system that models people, tools, facilities/vehicles, and geography to simulate the current air transportation system as a collection of distributed, interactive subsystems (e.g., airports, air-traffic control towers and personnel, aircraft, automated flight systems and air-traffic tools, instruments, crew). Brahms-GUeM can be configured in different ways, called scenarios, such that anomalous events that contributed to the Überlingen accident can be modeled as functioning according to requirements or in an anomalous condition, as occurred during the accident. Brahms-GUeM thus implicitly defines a class of scenarios, which includes as an instance what occurred at Überlingen. Brahms-GUeM is a modeling framework enabling "what if" analysis of alternative work system configurations and thus facilitating design of alternative operations concepts. It enables subsequent adaptation (reusing simulation components) for modeling and simulating NextGen scenarios. This project demonstrates that Brahms provides the capacity to model the complexity of air transportation systems, going beyond idealized and simple flights to include, for example, the interaction of pilots and ATCOs. The research shows clearly that verification and validation must include the entire work system, on the one hand to check that mechanisms exist to handle failures of communication and alerting subsystems and/or failures of people to notice, comprehend, or communicate problematic (unsafe) situations; but also to understand how people must use their own judgment in relating fallible systems like TCAS to other sources of information and thus to evaluate how the unreliability of automation affects system safety. The simulation shows in particular that distributed agents (people and automated systems) acting without knowledge of each other's actions can create a complex, dynamic system whose interactive behavior is unexpected and is changing too quickly to comprehend and control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matloch, L.; Vaccaro, S.; Couland, M.
The back end of the nuclear fuel cycle continues to develop. The European Commission, particularly the Nuclear Safeguards Directorate of the Directorate General for Energy, implements Euratom safeguards and needs to adapt to this situation. The verification methods for spent nuclear fuel, which EURATOM inspectors can use, require continuous improvement. Whereas the Euratom on-site laboratories provide accurate verification results for fuel undergoing reprocessing, the situation is different for spent fuel which is destined for final storage. In particular, new needs arise from the increasing number of cask loadings for interim dry storage and the advanced plans for the construction of encapsulation plants and geological repositories. Various scenarios present verification challenges. In this context, EURATOM Safeguards, often in cooperation with other stakeholders, is committed to further improvement of NDA methods for spent fuel verification. In this effort EURATOM plays various roles, ranging from definition of inspection needs to direct participation in development of measurement systems, including support of research in the framework of international agreements and via the EC Support Program to the IAEA. This paper presents recent progress in selected NDA methods. These methods have been conceived to satisfy different spent fuel verification needs, ranging from attribute testing to pin-level partial defect verification. (authors)
Fluor Daniel Hanford Inc. integrated safety management system phase 1 verification final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
PARSONS, J.E.
1999-10-28
The purpose of this review is to verify the adequacy of documentation as submitted to the Approval Authority by Fluor Daniel Hanford, Inc. (FDH). This review is not only a review of the Integrated Safety Management System (ISMS) System Description documentation, but is also a review of the procedures, policies, and manuals of practice used to implement safety management in an environment of organizational restructuring. The FDH ISMS should support the Hanford Strategic Plan (DOE-RL 1996) to safely clean up and manage the site's legacy waste; deploy science and technology while incorporating the ISMS theme to ''Do work safely''; and protect human health and the environment.
Software development for airborne radar
NASA Astrophysics Data System (ADS)
Sundstrom, Ingvar G.
Some aspects of the development of software for a modern multimode airborne nose radar are described. First, an overview of where software is used in the radar units is presented. The development phases (system design, functional design, detailed design, function verification, and system verification) are then used as the starting point for the discussion. Methods, tools, and the most important documents are described. The importance of video flight recording in the early stages and the use of digital signal generators for performance verification is emphasized. Some future trends are discussed.
ENVIRONMENTAL TECHNOLOGY VERIFICATION FOR INDOOR AIR PRODUCTS
The paper discusses environmental technology verification (ETV) for indoor air products. RTI is developing the framework for a verification testing program for indoor air products, as part of EPA's ETV program. RTI is establishing test protocols for products that fit into three...
ENVIRONMENTAL TECHNOLOGY VERIFICATION AND INDOOR AIR
The paper discusses environmental technology verification and indoor air. RTI has responsibility for a pilot program for indoor air products as part of the U.S. EPA's Environmental Technology Verification (ETV) program. The program objective is to further the development of sel...
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hawk, J. D.
1975-01-01
A generalized concept for cost-effective structural design is introduced. It is assumed that decisions affecting the cost effectiveness of aerospace structures fall into three basic categories: design, verification, and operation. Within these basic categories, certain decisions concerning items such as design configuration, safety factors, testing methods, and operational constraints are to be made. All or some of the variables affecting these decisions may be treated probabilistically. Bayesian statistical decision theory is used as the tool for determining the cost optimum decisions. A special case of the general problem is derived herein, and some very useful parametric curves are developed and applied to several sample structures.
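To make the expected-cost reasoning concrete, here is a minimal sketch of a Bayesian-style cost-optimum decision over a few hypothetical design/verification options; all probabilities and costs are illustrative and are not drawn from the paper.

```python
# Minimal sketch of cost-optimum decision making in the spirit of Bayesian
# decision theory: choose the design/verification option that minimizes
# expected total cost. All probabilities and costs below are illustrative.

options = {
    # option: (upfront cost, probability of structural failure, failure cost)
    "low safety factor, full test program":  (3.0e6, 1e-4, 5.0e8),
    "high safety factor, reduced testing":   (5.0e6, 1e-5, 5.0e8),
    "high safety factor, full test program": (7.0e6, 1e-6, 5.0e8),
}

def expected_cost(upfront, p_fail, c_fail):
    """Expected total cost = upfront cost + probability-weighted failure cost."""
    return upfront + p_fail * c_fail

best = min(options, key=lambda name: expected_cost(*options[name]))
for name, params in options.items():
    print(f"{name}: expected cost = {expected_cost(*params):,.0f} USD")
print("Cost-optimum decision:", best)
```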
Composite Overwrapped Pressure Vessel (COPV) Stress Rupture Testing
NASA Technical Reports Server (NTRS)
Greene, Nathanael J.; Saulsberry, Regor L.; Leifeste, Mark R.; Yoder, Tommy B.; Keddy, Chris P.; Forth, Scott C.; Russell, Rick W.
2010-01-01
This paper reports stress rupture testing of Kevlar(TradeMark) composite overwrapped pressure vessels (COPVs) at NASA White Sands Test Facility. This 6-year test program was part of the larger effort to predict and extend the lifetime of flight vessels. Tests were performed to characterize control parameters for stress rupture testing, and vessel life was predicted by statistical modeling. One highly instrumented 102-cm (40-in.) diameter Kevlar(TradeMark) COPV was tested to failure (burst) as a single-point model verification. Significant data were generated that will enhance development of improved NDE methods and predictive modeling techniques, and thus better address stress rupture and other composite durability concerns that affect pressure vessel safety, reliability and mission assurance.
Holmes, Robert R.; Singh, Vijay P.
2016-01-01
The importance of streamflow data to the world’s economy, environmental health, and public safety continues to grow as the population increases. The collection of streamflow data is often an involved and complicated process. The quality of streamflow data hinges on such things as site selection, instrumentation selection, streamgage maintenance and quality assurance, proper discharge measurement techniques, and the development and continued verification of the streamflow rating. This chapter serves only as an overview of the streamflow data collection process as proper treatment of considerations, techniques, and quality assurance cannot be addressed adequately in the space limitations of this chapter. Readers with the need for the detailed information on the streamflow data collection process are referred to the many references noted in this chapter.
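As a small illustration of the rating verification mentioned above, the sketch below compares hypothetical discharge measurements against a power-law stage-discharge rating; the rating parameters and the 8% review threshold are assumptions chosen only to show the bookkeeping.

```python
# Illustrative sketch of rating verification: compare discharge measurements
# against a stage-discharge rating of the common power-law form
# Q = C * (h - h0) ** b. The parameters and the 8% threshold are hypothetical.

def rating_discharge(stage_m, C=25.0, h0=0.30, b=1.8):
    """Discharge (m^3/s) predicted by the rating for a given stage (m)."""
    return C * max(stage_m - h0, 0.0) ** b

def percent_difference(measured_q, rated_q):
    return 100.0 * (measured_q - rated_q) / rated_q

measurements = [  # (stage in m, measured discharge in m^3/s) -- hypothetical
    (0.85, 8.9),
    (1.40, 29.5),
    (2.10, 73.0),
]

for stage, q_meas in measurements:
    q_rated = rating_discharge(stage)
    diff = percent_difference(q_meas, q_rated)
    flag = "review rating" if abs(diff) > 8.0 else "ok"
    print(f"stage={stage:.2f} m  measured={q_meas:.1f}  rated={q_rated:.1f}  "
          f"diff={diff:+.1f}%  {flag}")
```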
Materials Safety - Not just Flammability and Toxic Offgassing
NASA Technical Reports Server (NTRS)
Pedley, Michael D.
2007-01-01
For many years, the safety community has focused on a limited subset of materials and processes requirements as key to safety: Materials flammability, Toxic offgassing, Propellant compatibility, Oxygen compatibility, and Stress-corrosion cracking. All these items are important, but the exclusive focus on these items neglects many other items that are equally important to materials safety. Examples include (but are not limited to): 1. Materials process control -- proper qualification and execution of manufacturing processes such as structural adhesive bonding, welding, and forging are crucial to materials safety. Limiting discussions of materials process control to an arbitrary subset of processes, known as "critical processes," is a mistake, because any process where the quality of the product cannot be verified by inspection can potentially result in unsafe hardware. 2. Materials structural design allowables -- development of valid design allowables when none exist in the literature requires extensive testing of multiple lots of materials and is extremely expensive. But, without valid allowables, structural analysis cannot verify structural safety. 3. Corrosion control -- all forms of corrosion, not just stress corrosion, can affect the structural integrity of hardware. 4. Contamination control during ground processing -- contamination control is critical to manufacturing processes such as adhesive bonding and also to the elimination of foreign objects and debris (FOD) that are hazardous to the crew of manned spacecraft in microgravity environments. 5. Fasteners -- fastener design, the use of verifiable secondary locking features, and proper verification of fastener torque are essential for proper structural performance. This presentation discusses some of these key factors and the importance of considering them in ensuring the safety of space hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
The purpose of the verification project is to: establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
Model-Driven Safety Analysis of Closed-Loop Medical Systems
Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup
2013-01-01
In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure. PMID:24177176
Model-Driven Safety Analysis of Closed-Loop Medical Systems.
Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup
2012-10-26
In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure.
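A hedged sketch of the kind of closed-loop safety check described in the two entries above, using a toy one-compartment infusion model, a proportional controller, and an open-loop fallback on loss of feedback. All parameters are hypothetical; this is not the authors' model.

```python
# Hedged sketch (not the authors' model): a one-compartment drug-infusion loop
# with a simple proportional controller and an open-loop safety fallback that
# engages when network feedback is lost. All parameters are hypothetical.

def simulate(minutes=120, network_ok=lambda t: t < 60):
    dt = 1.0                      # time step, min
    k_elim = 0.05                 # elimination rate constant, 1/min
    target = 4.0                  # target concentration (arbitrary units)
    c_max_safe = 6.0              # hard safety limit
    conc, rate = 0.0, 0.0
    for t in range(int(minutes)):
        if network_ok(t):
            # closed loop: proportional control toward the target
            rate = max(0.0, 0.5 * (target - conc))
        else:
            # open-loop safety: fall back to zero infusion on lost feedback
            rate = 0.0
        conc += dt * (rate - k_elim * conc)
        assert conc <= c_max_safe, f"safety property violated at t={t} min"
    return conc

print("final concentration:", round(simulate(), 2))
```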
Design of Low Complexity Model Reference Adaptive Controllers
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan
2012-01-01
Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but they only make these testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
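For readers unfamiliar with the structure of such controllers, the following is a minimal, generic model-reference adaptive sketch with gradient parameter update laws for a first-order plant; it is a textbook-style illustration, not the flight-research controllers described in the abstract, and all gains and plant values are invented.

```python
# Minimal sketch of a model-reference adaptive law for a first-order plant,
# using gradient (MIT-rule-like) parameter updates. Generic textbook structure,
# not the NASA flight-research controllers; all numbers are illustrative.
import math

dt = 0.01
a_m, b_m = 2.0, 2.0              # reference model: xm' = -a_m*xm + b_m*r
a_p, b_p = 0.5, 1.5              # "damaged" plant (unknown to the controller)
gamma = 5.0                      # adaptation gain
x, xm = 0.0, 0.0                 # plant and reference-model states
theta_r, theta_x = 0.0, 0.0      # adaptive feedforward and feedback gains

for k in range(4000):
    t = k * dt
    r = 1.0 if math.sin(0.5 * t) >= 0.0 else -1.0   # square-wave command
    u = theta_r * r + theta_x * x                   # adaptive control law
    e = x - xm                                      # model-following error
    theta_r += dt * (-gamma * e * r)                # gradient update laws
    theta_x += dt * (-gamma * e * x)
    x  += dt * (-a_p * x + b_p * u)                 # plant
    xm += dt * (-a_m * xm + b_m * r)                # reference model

print(f"final error {x - xm:+.4f}; theta_r={theta_r:.3f} (ideal 1.333), "
      f"theta_x={theta_x:.3f} (ideal -1.000)")
```

The "ideal" gains are the matching-condition values theta_r = b_m/b_p and theta_x = (a_p - a_m)/b_p for this toy plant.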
24 CFR 5.659 - Family information and verification.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Family information and verification... Assisted Housing Serving Persons with Disabilities: Family Income and Family Payment; Occupancy... § 5.659 Family information and verification. (a) Applicability. This section states requirements for...
This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...
24 CFR 5.659 - Family information and verification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Family information and verification... Assisted Housing Serving Persons with Disabilities: Family Income and Family Payment; Occupancy... § 5.659 Family information and verification. (a) Applicability. This section states requirements for...
BAGHOUSE FILTRATION PRODUCTS VERIFICATION TESTING, HOW IT BENEFITS THE BOILER BAGHOUSE OPERATOR
The paper describes the Environmental Technology Verification (ETV) Program for baghouse filtration products developed by the Air Pollution Control Technology Verification Center, one of six Centers under the ETV Program, and discusses how it benefits boiler baghouse operators. A...
Upgrades at the NASA Langley Research Center National Transonic Facility
NASA Technical Reports Server (NTRS)
Paryz, Roman W.
2012-01-01
Several projects have been completed or are nearing completion at the NASA Langley Research Center (LaRC) National Transonic Facility (NTF). The addition of a Model Flow-Control/Propulsion Simulation test capability to the NTF provides a unique, transonic, high-Reynolds number test capability that is well suited for research in propulsion airframe integration studies, circulation control high-lift concepts, powered lift, and cruise separation flow control. A 1992-vintage Facility Automation System (FAS) that performs the control functions for tunnel pressure, temperature, Mach number, model position, safety interlock and supervisory controls was replaced using current, commercially available components. This FAS upgrade also involved a design study for the replacement of the facility Mach measurement system and the development of a software-based simulation model of NTF processes and control systems. The FAS upgrades were validated by a post-upgrade verification wind tunnel test. The data acquisition system (DAS) upgrade project involves the design, purchase, build, integration, installation and verification of a new DAS by replacing several early-1990s-vintage computer systems with state-of-the-art hardware/software. This paper provides an update on the progress made in these efforts. See reference 1.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
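The flavor of such a code-verification check can be illustrated with a short sketch that solves 1-D heat conduction, compares the result with the closed-form decaying-sine solution, and reports an observed order of accuracy from two grids. This mirrors the idea only; it is not PFLOTRAN's actual QA harness, and the grid sizes and tolerances are arbitrary.

```python
# Generic sketch of the kind of check a code-verification suite performs:
# solve 1-D heat conduction u_t = D u_xx with u=0 at both ends, compare against
# the exact solution u(x,t) = exp(-D*pi^2*t) * sin(pi*x), and report the
# observed order of accuracy from two grids.
import math

def solve(nx, D=1.0, t_end=0.1):
    dx = 1.0 / nx
    dt = 0.25 * dx * dx / D                      # explicit stability limit
    steps = int(t_end / dt)
    # cell-centered grid, initial condition sin(pi*x)
    u = [math.sin(math.pi * (i + 0.5) * dx) for i in range(nx)]
    for _ in range(steps):
        un = u[:]
        for i in range(nx):
            left  = un[i - 1] if i > 0 else -un[i]       # u=0 at x=0 (ghost)
            right = un[i + 1] if i < nx - 1 else -un[i]  # u=0 at x=1 (ghost)
            u[i] = un[i] + D * dt / dx**2 * (left - 2 * un[i] + right)
    t = steps * dt
    exact = [math.exp(-D * math.pi**2 * t) * math.sin(math.pi * (i + 0.5) * dx)
             for i in range(nx)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, exact)) / nx)

e_coarse, e_fine = solve(32), solve(64)
order = math.log(e_coarse / e_fine) / math.log(2.0)
print(f"L2 errors: {e_coarse:.3e}, {e_fine:.3e}; observed order = {order:.2f}")
```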
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco
2004-01-01
The Runway Safety Monitor (RSM) designed by Lockheed Martin is part of NASA's effort to reduce aviation accidents. We developed a Petri net model of the RSM protocol and used the model checking functions of our tool SMART to investigate a number of safety properties in RSM. To mitigate the impact of state-space explosion, we built a highly discretized model of the system, obtained by partitioning the monitored runway zone into a grid of smaller volumes and by considering scenarios involving only two aircraft. The model also assumes that there are no communication failures, such as bad input from radar or lack of incoming data, thus it relies on a consistent view of reality by all participants. In spite of these simplifications, we were able to expose potential problems in the RSM conceptual design. Our findings were forwarded to the design engineers, who undertook corrective action. Additionally, the results stress the efficiency attained by the new model checking algorithms implemented in SMART, and demonstrate their applicability to real-world systems. Attempts to verify RSM with NuSMV and SPIN have failed due to excessive memory consumption.
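To illustrate the explicit-state checking idea on a similarly discretized toy problem (not the SMART/Petri-net model itself), the sketch below enumerates the joint states of two aircraft on a small grid and searches for a violation of a simple separation property. The grid size, move set, and property are invented for illustration; with unconstrained movement the checker duly reports a counterexample, which is exactly how such analyses surface design problems.

```python
# Tiny explicit-state sketch: breadth-first enumeration of the joint state
# space of two aircraft moving one cell per step on a small grid, checking the
# safety property "the two aircraft are never co-located".
from collections import deque
from itertools import product

GRID = 4                                   # 4x4 discretized zone (illustrative)
MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def step(pos):
    """Yield all positions reachable from pos in one step."""
    x, y = pos
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def check(initial):
    """Return a violating joint state if both aircraft can share a cell."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        a, b = frontier.popleft()
        if a == b:
            return (a, b)                  # safety violation found
        for na, nb in product(step(a), step(b)):
            if (na, nb) not in seen:
                seen.add((na, nb))
                frontier.append((na, nb))
    return None                            # property holds on all reachable states

result = check(((0, 0), (3, 3)))
print("safety property holds" if result is None else f"counterexample state: {result}")
```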
Nordbeck, Peter; Fidler, Florian; Friedrich, Michael T; Weiss, Ingo; Warmuth, Marcus; Gensler, Daniel; Herold, Volker; Geistert, Wolfgang; Jakob, Peter M; Ertl, Georg; Ritter, Oliver; Ladd, Mark E; Bauer, Wolfgang R; Quick, Harald H
2012-12-01
There are serious concerns regarding safety when performing magnetic resonance imaging in patients with implanted conductive medical devices, such as cardiac pacemakers, and associated leads, as severe incidents have occurred in the past. In this study, several approaches for altering an implant's lead design were systematically developed and evaluated to enhance the safety of implanted medical devices in a magnetic resonance imaging environment. The individual impact of each design change on radiofrequency heating was then systematically investigated in functional lead prototypes at 1.5 T. Radiofrequency-induced heating could be successfully reduced by three basic changes in conventional pacemaker lead design: (1) increasing the lead tip area, (2) increasing the lead conductor resistance, and (3) increasing outer lead insulation conductivity. The findings show that radiofrequency energy pickup in magnetic resonance imaging can be reduced and, therefore, patient safety can be improved with dedicated construction changes according to a "safe by design" strategy. Incorporation of the described alterations into implantable medical devices such as pacemaker leads can be used to help achieve favorable risk-benefit-ratios when performing magnetic resonance imaging in the respective patient group. Copyright © 2012 Wiley Periodicals, Inc.
Influence of Steering Control Devices Mounted in Cars for the Disabled on Passive Safety
NASA Astrophysics Data System (ADS)
Masiá, J.; Eixerés, B.; Dols, J. F.; Colomina, F. J.
2009-11-01
The purpose of this research is to analyze the influence of steering control devices for disabled people on passive safety. It is based on the advances made in the modelling and simulation of the driver position and in the suit verification test. The influence of these devices is studied through airbag deployment and/or its influence on driver safety. We characterize the different steering adaptations found mounted in adapted vehicles in order to generate models that are verified by experimental tests. A three-dimensional design software package was used to develop the model. The simulations were generated using a dynamic simulation program employing LS-DYNA finite elements. This program plots the geometry and assigns materials. The airbag is shaped, meshed and folded just as it is mounted in current vehicles. The thermodynamic model of expansion of gases is assigned and the contact interfaces are defined. Static tests were carried out on deployment of the airbag to contrast with and validate the computational models and to measure the behaviour of the airbag when steering adaptations are mounted in the vehicle.
Full power level development of the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Johnson, J. R.; Colbo, H. I.
1982-01-01
Development of the Space Shuttle main engine for nominal operation at full power level (109 percent rated power) is continuing in parallel with the successful flight testing of the Space Transportation System. Verification of changes made to the rated power level configuration currently being flown on the Orbiter Columbia is in progress and the certification testing of the full power level configuration has begun. The certification test plan includes the accumulation of 10,000 seconds on each of two engines by early 1983. Certification testing includes the simulation of nominal mission duty cycles as well as the two abort thrust profiles: abort to orbit and return to launch site. Several of the certification tests are conducted at 111 percent power to demonstrate additional safety margins. In addition to the flight test and development program results, future plans for life demonstration and engine uprating will be discussed.
Demonstration of a Safety Analysis on a Complex System
NASA Technical Reports Server (NTRS)
Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey;
1997-01-01
For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards is done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: There exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.
The Environmental Technology Verification (ETV) Program, established by the U.S. EPA, is designed to accelerate the development and commercialization of new or improved technologies through third-party verification and reporting of performance. The Air Pollution Control Technolog...
The U.S. EPA's Office of Research and Development operates the Environmental Technology Verification (ETV) program to facilitate the deployment of innovative technologies through performance verification and information dissemination. Congress funds ETV in response to the belief ...
24 CFR 4001.112 - Income verification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Income verification. 4001.112... Requirements and Underwriting Procedures § 4001.112 Income verification. The mortgagee shall use FHA's procedures to verify the mortgagor's income and shall comply with the following additional requirements: (a...
Johnsen, Stig O; Kilskar, Stine Skaufel; Fossum, Knut Robert
2017-01-01
More attention has recently been given to Human Factors in petroleum accident investigations. The Human Factors areas examined in this article are organizational, cognitive and physical ergonomics. A key question to be explored is as follows: To what degree are the petroleum industry and safety authorities in Norway focusing on these Human Factors areas from the design phase? To investigate this, we conducted an innovative exploratory study of the development of four control centres in the Norwegian oil and gas industry in collaboration between users, management and Human Factors experts. We also performed a literature survey and discussion with the professional Human Factors network in Norway. We investigated the Human Factors focus, reasons for not considering Human Factors and consequences of missing Human Factors in safety management. The results revealed an immature focus and organization of Human Factors. Expertise in organizational ergonomics and cognitive ergonomics is missing from companies and safety authorities and is poorly prioritized during development. The easily observable part of Human Factors (i.e. physical ergonomics) is often in focus. Poor focus on Human Factors in the design process creates demanding conditions for human operators and impacts safety and resilience. There is a lack of non-technical skills such as communication and decision-making. New technical equipment such as Closed Circuit Television is implemented without appropriate use of Human Factors standards. Human Factors expertise should be involved as early as possible in the responsible organizations. Verification and validation of Human Factors should be improved and performed from the start, by certified Human Factors experts in collaboration with the workforce. The authorities should follow up to ensure that the regulatory framework of Human Factors is communicated, understood and followed. PMID:29278242
NASA Technical Reports Server (NTRS)
Atwell, William; Koontz, Steve; Normand, Eugene
2012-01-01
In this paper we review the discovery of cosmic ray effects on the performance and reliability of microelectronic systems and on human health and safety, as well as the development of the engineering and health science tools used to evaluate and mitigate cosmic ray effects in earth surface, atmospheric flight, and space flight environments. Three twentieth century technological developments, 1) high altitude commercial and military aircraft; 2) manned and unmanned spacecraft; and 3) increasingly complex and sensitive solid state micro-electronics systems, have driven an ongoing evolution of basic cosmic ray science into a set of practical engineering tools (e.g. ground based test methods as well as high energy particle transport and reaction codes) needed to design, test, and verify the safety and reliability of modern complex electronic systems as well as effects on human health and safety. The effects of primary cosmic ray particles, and secondary particle showers produced by nuclear reactions with spacecraft materials, can determine the design and verification processes (as well as the total dollar cost) for manned and unmanned spacecraft avionics systems. Similar considerations apply to commercial and military aircraft operating at high latitudes and altitudes near the atmospheric Pfotzer maximum. Even ground based computational and controls systems can be negatively affected by secondary particle showers at the Earth's surface, especially if the net target area of the sensitive electronic system components is large. Accumulation of both primary cosmic ray and secondary cosmic ray induced particle shower radiation dose is an important health and safety consideration for commercial or military air crews operating at high altitude/latitude and is also one of the most important factors presently limiting manned space flight operations beyond low-Earth orbit (LEO).
NASA Astrophysics Data System (ADS)
Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan
2018-02-01
The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification for non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPS) were commissioned at the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans, based on AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created by the TPSs based on IAEA TRS-430 recommendations and also calculated by the VA; point measurements were collected with a solid water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
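A hedged sketch of what such an independent point-dose MU check can look like for a simple isocentric field, in the spirit of TG-114. The beam-data tables, tolerance, and TPS value below are hypothetical placeholders, not the commissioned data or the algorithm from this study.

```python
# Hedged sketch of an independent MU check in the TG-114 spirit for a simple
# SAD (isocentric) photon field: MU = D / (k * Sc * Sp * TPR * WF), with the
# beam data below purely hypothetical placeholders for a commissioned dataset.

def mu_check(dose_cGy, field_cm, depth_cm, wedge_factor=1.0, k_cGy_per_MU=1.0):
    """Return the independently calculated monitor units."""
    Sc  = {5: 0.96, 10: 1.00, 15: 1.02}[field_cm]        # collimator scatter
    Sp  = {5: 0.97, 10: 1.00, 15: 1.01}[field_cm]        # phantom scatter
    TPR = {(10, 5): 0.72, (10, 10): 0.74, (10, 15): 0.76}[(depth_cm, field_cm)]
    return dose_cGy / (k_cGy_per_MU * Sc * Sp * TPR * wedge_factor)

tps_mu = 273.0                                   # MU reported by the TPS (hypothetical)
independent_mu = mu_check(dose_cGy=200.0, field_cm=10, depth_cm=10)
deviation = 100.0 * (independent_mu - tps_mu) / tps_mu
print(f"independent MU = {independent_mu:.1f}, deviation = {deviation:+.1f}%")
print("within tolerance" if abs(deviation) <= 3.0 else "investigate")
```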
Monitoring the Performance of a Neuro-Adaptive Controller
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gupta, Pramod
2004-01-01
Traditional control has proven to be ineffective in dealing with catastrophic changes or slow degradation of complex, highly nonlinear systems like aircraft or spacecraft, robotics, or flexible manufacturing systems. Control systems which can adapt to changes in the plant have been proposed as they offer many advantages (e.g., better performance, controllability of aircraft despite a damaged wing). In the last few years, the use of neural networks in adaptive controllers (neuro-adaptive control) has been studied actively. Neural networks of various architectures have been used successfully for online learning adaptive controllers. In such a typical control architecture, the neural network receives as an input the current deviation between desired and actual plant behavior and, by on-line training, tries to minimize this discrepancy (e.g., by producing a control augmentation signal). Even though neuro-adaptive controllers offer many advantages, they have not been used in mission- or safety-critical applications, because performance and safety guarantees cannot be provided at development time, a major prerequisite for safety certification (e.g., by the FAA or NASA). Verification and Validation (V&V) of an adaptive controller requires the development of new analysis techniques which can demonstrate that the control system behaves safely under all operating conditions. Because of the requirement to adapt to unforeseen changes during operation, i.e., in real time, design-time V&V is not sufficient.
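To illustrate the monitoring problem in miniature, the sketch below pairs a trivial online-learning control augmentation with a runtime monitor that watches a moving average of the tracking error. The plant, gains, failure scenario, and alarm envelope are invented for illustration and do not represent the controllers or monitors discussed here.

```python
# Hedged sketch of the monitoring idea: an online-learning augmentation element
# (here a trivial linear-in-parameters "network") adapts to a plant change,
# while a runtime monitor watches a moving average of the tracking error and
# flags when it leaves a pre-set envelope. All numbers are illustrative.
from collections import deque

dt, eta = 0.02, 0.8
w = 0.0                            # adaptive weight of the augmentation element
x, x_ref = 0.0, 1.0                # plant state and constant reference
window = deque(maxlen=50)          # monitor: sliding window of |error|
ENVELOPE = 0.30                    # alarm threshold on the mean |error|

for k in range(1500):
    damaged = k > 500                          # simulated failure at t = 10 s
    a = 1.0 if not damaged else 0.3            # plant "effectiveness" drops
    e = x_ref - x
    u = 2.0 * e + w                            # baseline law + adaptive augmentation
    w += dt * eta * e                          # online weight update
    x += dt * (-x + a * u)                     # simple first-order plant
    window.append(abs(e))
    mean_e = sum(window) / len(window)
    # arm the monitor only after the start-up transient has settled
    if k > 250 and len(window) == window.maxlen and mean_e > ENVELOPE:
        print(f"monitor alarm at t = {k * dt:.1f} s, mean |e| = {mean_e:.2f}")
        break
else:
    print(f"no alarm; final error {x_ref - x:+.3f}, augmentation weight {w:.2f}")
```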
Cosmic Ray Muon Imaging of Spent Nuclear Fuel in Dry Storage Casks
Durham, J. Matthew; Guardincerri, Elena; Morris, Christopher L.; ...
2016-04-29
In this paper, cosmic ray muon radiography has been used to identify the absence of spent nuclear fuel bundles inside a sealed dry storage cask. The large amounts of shielding that dry storage casks use to contain radiation from the highly radioactive contents impede typical imaging methods, but the penetrating nature of cosmic ray muons allows them to be used as an effective radiographic probe. This technique was able to successfully identify missing fuel bundles inside a sealed Westinghouse MC-10 cask. This method of fuel cask verification may prove useful for international nuclear safeguards inspectors. Finally, muon radiography may find other safety and security or safeguards applications, such as arms control verification.
On the Formal Verification of Conflict Detection Algorithms
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Butler, Ricky W.; Carreno, Victor A.; Dowek, Gilles
2001-01-01
Safety assessment of new air traffic management systems is a main issue for civil aviation authorities. Standard techniques such as testing and simulation have serious limitations in new systems that are significantly more autonomous than the older ones. In this paper, we present an innovative approach, based on formal verification, for establishing the correctness of conflict detection systems. Fundamental to our approach is the concept of trajectory, which is a continuous path in the x-y plane constrained by physical laws and operational requirements. From the model of trajectories, we extract, and formally prove, high-level properties that can serve as a framework to analyze conflict scenarios. We use the Airborne Information for Lateral Spacing (AILS) alerting algorithm as a case study of our approach.
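As a concrete, hedged example of the kind of algorithm being verified (not the AILS algorithm or its formal model), here is a basic closest-point-of-approach conflict probe for two straight-line trajectories in the x-y plane; the separation standard, lookahead, and encounter geometry are illustrative.

```python
# Hedged sketch of a basic conflict probe for two straight-line trajectories in
# the x-y plane (closest-point-of-approach test). Illustrative only; not the
# AILS algorithm or the formally verified model described above.
import math

def conflict(p1, v1, p2, v2, sep_nmi=5.0, lookahead_min=20.0):
    """Positions in nmi, velocities in nmi/min. Returns (in_conflict, t_cpa)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]          # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]        # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    t_cpa = min(t_cpa, lookahead_min)
    cx, cy = dx + dvx * t_cpa, dy + dvy * t_cpa    # relative position at CPA
    return math.hypot(cx, cy) < sep_nmi, t_cpa

# Hypothetical crossing encounter: both aircraft at 480 kt (8 nmi/min)
in_conflict, t = conflict(p1=(0, 0), v1=(8, 0), p2=(40, 40), v2=(0, -8))
print(f"conflict predicted: {in_conflict} (time of closest approach {t:.1f} min)")
```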
Cutting More than Metal: Breaking the Development Cycle
NASA Technical Reports Server (NTRS)
Singer, Chris
2014-01-01
New technology is changing the way we do business at NASA. The ability to use these new tools is made possible by a learning culture able to embrace innovation, flexibility, and prudent risk tolerance, while retaining the hard-won lessons learned from other successes and failures. Technologies such as 3-D manufacturing and structured light scanning are re-shaping the entire product life cycle, from design and analysis, through production, verification, logistics and operations. New fabrication techniques, verification techniques, integrated analysis, and models that follow the hardware from initial concept through operation are reducing the cost and time of building space hardware. Using these technologies to be more efficient, reliable and affordable requires that we bring them to a level safe for NASA systems, maintain appropriate rigor in testing and acceptance, and transition new technology. Maximizing these technologies also requires cultural acceptance and understanding and balancing rules with creativity. Evolved systems engineering processes at NASA are increasingly more flexible than they have been in the past, enabling the implementation of new techniques and approaches. This paper provides an overview of NASA Marshall Space Flight Center's new approach to development, as well as examples of how that approach has been incorporated into NASA's Space Launch System (SLS) Program, which counts safety, affordability, and sustainability among its key tenets. One of the 3D technologies that will be discussed in this paper is the design and testing of various rocket engine components.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... according to the design. The third-party verification must include subsea function and pressure tests...; Requires new casing and cementing integrity tests; Establishes new requirements for subsea secondary BOP... that, for the final casing string (or liner if it is the final string), an operator must install one...
Code of Federal Regulations, 2010 CFR
2010-01-01
... control reasonably foreseeable risks to customers or to the safety and soundness of the financial...; and (5) Notice from customers, victims of identity theft, law enforcement authorities, or other... verification set forth in the Customer Identification Program rules implementing 31 U.S.C. 5318(l)(31 CFR 103...
33 CFR 96.330 - Document of Compliance certificate: what is it and when is it needed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... freight vessel, freight vessel, or a self-propelled mobile offshore drilling unit of 500 gross tons or... 12 passengers or a tanker, bulk freight vessel, freight vessel, or a self-propelled mobile offshore... by an authorized organization acting on behalf of the U.S. through a safety management verification...
Verification of Cold Working and Interference Levels at Fastener Holes
2009-02-01
...of the Residual Stress Field on the Fatigue Coupons... Fractography of Fatigue Test Coupons... predictions to fatigue experiment results (none of the literature we reviewed described fractography of cracks propagating through residual stress... ensures continued safety, readiness, and controlled maintenance costs. These methods augment and enhance traditional safe-life and damage tolerance
Code of Federal Regulations, 2010 CFR
2010-07-01
... safety deposit box or other safekeeping services, or cash management, custodian, and trust services. (ii... documents, non-documentary methods, or a combination of both methods as described in this paragraph (b)(2... agreement, or trust instrument. (B) Verification through non-documentary methods. For a bank relying on non...
Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)
2003-01-01
Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.
Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
Bass, Ellen J.
2011-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930
Security Verification Techniques Applied to PatchLink COTS Software
NASA Technical Reports Server (NTRS)
Gilliam, David P.; Powell, John D.; Bishop, Matt; Andrew, Chris; Jog, Sameer
2006-01-01
Verification of the security of software artifacts is a challenging task. An integrated approach that combines verification techniques can increase the confidence in the security of software artifacts. Such an approach has been developed by the Jet Propulsion Laboratory (JPL) and the University of California at Davis (UC Davis). Two security verification instruments were developed and then piloted on PatchLink's UNIX Agent, a Commercial-Off-The-Shelf (COTS) software product, to assess the value of the instruments and the approach. The two instruments are the Flexible Modeling Framework (FMF) -- a model-based verification instrument (JPL), and a Property-Based Tester (UC Davis). Security properties were formally specified for the COTS artifact and then verified using these instruments. The results were then reviewed to determine the effectiveness of the approach and the security of the COTS product.
24 CFR 5.216 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Social Security and Employer Identification Numbers. 5.216 Section 5.216 Housing and Urban Development...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers; Procedures for Obtaining Income Information Disclosure and Verification of Social Security Numbers and...
24 CFR 5.216 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Social Security and Employer Identification Numbers. 5.216 Section 5.216 Housing and Urban Development...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers; Procedures for Obtaining Income Information Disclosure and Verification of Social Security Numbers and...
24 CFR 5.216 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Social Security and Employer Identification Numbers. 5.216 Section 5.216 Housing and Urban Development...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers; Procedures for Obtaining Income Information Disclosure and Verification of Social Security Numbers and...
24 CFR 5.216 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Social Security and Employer Identification Numbers. 5.216 Section 5.216 Housing and Urban Development...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers; Procedures for Obtaining Income Information Disclosure and Verification of Social Security Numbers and...
24 CFR 5.216 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Social Security and Employer Identification Numbers. 5.216 Section 5.216 Housing and Urban Development...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers; Procedures for Obtaining Income Information Disclosure and Verification of Social Security Numbers and...
Control of Suspect/Counterfeit and Defective Items
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheriff, Marnelle L.
2013-09-03
This procedure implements portions of the requirements of MSC-MP-599, Quality Assurance Program Description. It establishes the Mission Support Alliance (MSA) practices for minimizing the introduction of and identifying, documenting, dispositioning, reporting, controlling, and disposing of suspect/counterfeit and defective items (S/CIs). It applies to employees whose work scope relates to Safety Systems (i.e., Safety Class [SC] or Safety Significant [SS] items), non-safety systems and other applications (i.e., General Service [GS]) where engineering has determined that their use could result in a potential safety hazard. MSA implements an effective Quality Assurance (QA) Program providing a comprehensive network of controls and verification that provides defense-in-depth by preventing the introduction of S/CIs through the design, procurement, construction, operation, maintenance, and modification of processes. This procedure focuses on those safety systems, and other systems, including critical load paths of lifting equipment, where the introduction of S/CIs would have the greatest potential for creating unsafe conditions.
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Penix, John; Markosian, Lawrence Z.; OMalley, Owen; Brew, William A.
2003-01-01
Attempts to achieve widespread use of software verification tools have been notably unsuccessful. Even 'straightforward', classic, and potentially effective verification tools such as lint-like tools face limits on their acceptance. These limits are imposed by the expertise required to apply the tools and interpret the results, the high false positive rate of many verification tools, and the need to integrate the tools into development environments. The barriers are even greater for more complex advanced technologies such as model checking. Web-hosted services for advanced verification technologies may mitigate these problems by centralizing tool expertise. The possible benefits of this approach include eliminating the need for software developer expertise in tool application and results filtering, and improving integration with other development tools.
TET-1- A German Microsatellite for Technology On -Orbit Verification
NASA Astrophysics Data System (ADS)
Föckersperger, S.; Lattner, K.; Kaiser, C.; Eckert, S.; Bärwald, W.; Ritzmann, S.; Mühlbauer, P.; Turk, M.; Willemsen, P.
2008-08-01
Due to the high safety standards in the space industry, every new product must go through a verification process before qualifying for operation in a space system. Within the verification process the payload undergoes a series of tests which prove that it is in accordance with mission requirements in terms of function, reliability, and safety. Important verification components are the qualification for use on the ground as well as On-Orbit Verification (OOV), i.e., proof that the product is suitable for use under actual space conditions (on-orbit). Here it is demonstrated that the product functions under conditions which cannot, or can only partially, be simulated on the ground. The OOV Program of the DLR serves to bridge the gap between the product tested and qualified on the ground and the utilization of the product in space. Due to regular and short-term availability of flight opportunities, industry and research facilities can verify their latest products under space conditions and demonstrate their reliability and marketability. The Technologie-Erprobungs-Träger TET (Technology Experiments Carrier) comprises the core element of the OOV Program. A programmatic requirement of the OOV Program is that a satellite bus already verified in orbit be used in the first segment of the program. An analysis of suitable satellite buses showed that a realization of the TET satellite bus based on the BIRD satellite bus fulfilled the programmatic requirements best. Kayser-Threde was selected by DLR as Prime Contractor to perform the project together with its major subcontractors Astro- und Feinwerktechnik, Berlin, for the platform development and DLR-GSOC for the ground segment development. TET is designed to be a modular and flexible micro-satellite for any orbit between 450 and 850 km altitude and inclination between 53° and SSO. With an overall mass of 120 kg, TET is able to accommodate experiments of up to 50 kg. A multipurpose payload supply system under Kayser-Threde responsibility provides the necessary interfaces to the experiments. The first TET mission is scheduled for mid-2010. TET will be launched as a piggy-back payload on any available launcher worldwide to reduce launch cost and provide maximum flexibility. Finally, TET will provide all services required by the experimenters for a one-year mission operation to perform a successful OOV mission with its technology experiments, leading to efficient access to space for German industry and institutions.
There is significant confusion in the space industry today over the terms used to describe satellite bus architectures. Terms such as "standard bus" (or "common bus"), "modular bus" and "plug-and-play bus" are often used with little understanding of what the terms actually mean, and even less understanding of what the differences in these space architectures mean. It may seem that these terms are subtle differentiators, but in reality these terms describe radically different ways to design, build, test, and operate satellites. Furthermore, these terms imply very different business models for the acquisition, operation, and sustainment of space systems. This paper will define and describe the difference between "standard buses", "modular buses" and "plug-and-play buses", giving examples of each kind with a cost/benefit discussion of each type.
Turbokon scientific and production implementation company—25 years of activity
NASA Astrophysics Data System (ADS)
Favorskii, O. N.; Leont'ev, A. I.; Milman, O. O.
2016-05-01
The main results of studies performed at ZAO Turbokon NPVP in cooperation with leading Russian scientific organizations during 25 years of its activity in the field of development of unique, ecologically clean electric power and heat production technologies are described. They include the development and experimental verification, using prototypes and full-scale models, of highly efficient air-cooled condensers for steam turbines, a high-temperature gas-steam turbine for stationary and transport power engineering, and a fuel-free technology for electric power production using steam turbine installations with a unit power of 4-20 MW at gas-main pipelines, industrial boiler houses, and heat stations. The results of efforts in the field of reducing the vibroactivity of power equipment for transport installations are given. Basic directions of further research for increasing the efficiency and ecological safety of domestic power engineering are discussed.
24 CFR 203.35 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Social Security and Employer Identification Numbers. 203.35 Section 203.35 Housing and Urban Development... Requirements and Underwriting Procedures Eligible Mortgagors § 203.35 Disclosure and verification of Social... mortgagor must meet the requirements for the disclosure and verification of Social Security and Employer...
24 CFR 206.40 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Social Security and Employer Identification Numbers. 206.40 Section 206.40 Housing and Urban Development... Eligibility; Endorsement Eligible Mortgagors § 206.40 Disclosure and verification of Social Security and... verification of Social Security and Employer Identification Numbers, as provided by part 200, subpart U, of...
24 CFR 201.6 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Social Security and Employer Identification Numbers. 201.6 Section 201.6 Housing and Urban Development... HOME LOANS General § 201.6 Disclosure and verification of Social Security and Employer Identification... the disclosure and verification of Social Security and Employer Identification Numbers, as provided by...
The U.S. EPA's Office of Research and Development operates the Environmental Technology Verification (ETV) program to facilitate the deployment of innovative technologies through performance verification and information dissemination. Congress funds ETV in response to the belief ...
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
... integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling Language ... input-output contract satisfaction and absence of null pointer dereferences. Subject terms: formal methods, software verification, model-based ... Domain-specific languages (DSLs) drive both implementation and formal verification.
Joint ETV/NOWATECH test plan for the Sorbisense GSW40 passive sampler
The joint test plan is the implementation of a test design developed for verification of the performance of an environmental technology following the NOWATECH ETV method. The verification is a joint verification with the US EPA ETV scheme and the Advanced Monitoring Systems Cent...
24 CFR 206.40 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Social Security and Employer Identification Numbers. 206.40 Section 206.40 Housing and Urban Development... Eligibility; Endorsement Eligible Mortgagors § 206.40 Disclosure and verification of Social Security and... verification of Social Security and Employer Identification Numbers, as provided by part 200, subpart U, of...
24 CFR 203.35 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Social Security and Employer Identification Numbers. 203.35 Section 203.35 Housing and Urban Development... Requirements and Underwriting Procedures Eligible Mortgagors § 203.35 Disclosure and verification of Social... mortgagor must meet the requirements for the disclosure and verification of Social Security and Employer...
24 CFR 206.40 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Social Security and Employer Identification Numbers. 206.40 Section 206.40 Housing and Urban Development... Eligibility; Endorsement Eligible Mortgagors § 206.40 Disclosure and verification of Social Security and... verification of Social Security and Employer Identification Numbers, as provided by part 200, subpart U, of...
24 CFR 201.6 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Social Security and Employer Identification Numbers. 201.6 Section 201.6 Housing and Urban Development... HOME LOANS General § 201.6 Disclosure and verification of Social Security and Employer Identification... the disclosure and verification of Social Security and Employer Identification Numbers, as provided by...
24 CFR 206.40 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Social Security and Employer Identification Numbers. 206.40 Section 206.40 Housing and Urban Development... Eligibility; Endorsement Eligible Mortgagors § 206.40 Disclosure and verification of Social Security and... verification of Social Security and Employer Identification Numbers, as provided by part 200, subpart U, of...
24 CFR 203.35 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Social Security and Employer Identification Numbers. 203.35 Section 203.35 Housing and Urban Development... Requirements and Underwriting Procedures Eligible Mortgagors § 203.35 Disclosure and verification of Social... mortgagor must meet the requirements for the disclosure and verification of Social Security and Employer...
24 CFR 203.35 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Social Security and Employer Identification Numbers. 203.35 Section 203.35 Housing and Urban Development... Requirements and Underwriting Procedures Eligible Mortgagors § 203.35 Disclosure and verification of Social... mortgagor must meet the requirements for the disclosure and verification of Social Security and Employer...
24 CFR 201.6 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Social Security and Employer Identification Numbers. 201.6 Section 201.6 Housing and Urban Development... HOME LOANS General § 201.6 Disclosure and verification of Social Security and Employer Identification... the disclosure and verification of Social Security and Employer Identification Numbers, as provided by...
24 CFR 201.6 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Social Security and Employer Identification Numbers. 201.6 Section 201.6 Housing and Urban Development... HOME LOANS General § 201.6 Disclosure and verification of Social Security and Employer Identification... the disclosure and verification of Social Security and Employer Identification Numbers, as provided by...
24 CFR 206.40 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Social Security and Employer Identification Numbers. 206.40 Section 206.40 Housing and Urban Development... Eligibility; Endorsement Eligible Mortgagors § 206.40 Disclosure and verification of Social Security and... verification of Social Security and Employer Identification Numbers, as provided by part 200, subpart U, of...
24 CFR 203.35 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Social Security and Employer Identification Numbers. 203.35 Section 203.35 Housing and Urban Development... Requirements and Underwriting Procedures Eligible Mortgagors § 203.35 Disclosure and verification of Social... mortgagor must meet the requirements for the disclosure and verification of Social Security and Employer...
24 CFR 201.6 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Social Security and Employer Identification Numbers. 201.6 Section 201.6 Housing and Urban Development... HOME LOANS General § 201.6 Disclosure and verification of Social Security and Employer Identification... the disclosure and verification of Social Security and Employer Identification Numbers, as provided by...
Integrated Safety Analysis Tiers
NASA Technical Reports Server (NTRS)
Shackelford, Carla; McNairy, Lisa; Wetherholt, Jon
2009-01-01
Commercial partnerships and organizational constraints, combined with complex systems, may lead to division of hazard analysis across organizations. This division could cause important hazards to be overlooked, causes to be missed, controls for a hazard to be incomplete, or verifications to be inefficient. Each organization's team must understand the system at least one level beyond the interface sufficiently to comprehend integrated hazards. This paper will discuss various ways to properly divide analysis among organizations. The Ares I launch vehicle integrated safety analysis effort is used to illustrate an approach that addresses the key issues and concerns arising from multiple analysis responsibilities.
Petschonek, Sarah; Burlison, Jonathan; Cross, Carl; Martin, Kathy; Laver, Joseph; Landis, Ronald S; Hoffman, James M
2013-12-01
Given the growing support for establishing a just patient safety culture in health-care settings, a valid tool is needed to assess and improve just patient safety culture. The purpose of this study was to develop a measure of individual perceptions of just culture for a hospital setting. The 27-item survey was administered to 998 members of a health-care staff in a pediatric research hospital as part of the hospital's ongoing patient safety culture assessment process. Subscales included balancing a blame-free approach with accountability, feedback and communication, openness of communication, quality of the event reporting process, continuous improvement, and trust. The final sample of 404 participants (40% response rate) included nurses, physicians, pharmacists, and other hospital staff members involved in patient care. Confirmatory factor analysis was used to test the internal structure of the measure and reliability analyses were conducted on the subscales. Moderate support for the factor structure was established with confirmatory factor analysis. After modifications were made to improve statistical fit, the final version of the measure included 6 subscales loading onto one higher-order dimension. Additionally, Cronbach α reliability scores for the subscales were positive, with each dimension being above 0.7 with the exception of one. The instrument designed and tested in this study demonstrated adequate structure and reliability. Given the uniqueness of the current sample, further verification of the JCAT is needed from hospitals that serve broader populations. A validated tool could also be used to evaluate the relation between just culture and patient safety outcomes.
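For readers unfamiliar with the reliability statistic reported for the subscales, the short sketch below computes Cronbach's alpha from a made-up response matrix; the item scores are illustrative and are not data from the study.

```python
# Cronbach's alpha for a single subscale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
# The data below are made up solely to illustrate the calculation reported
# for the survey subscales.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(item_scores):
    """item_scores: one list of responses per item (same respondents in each)."""
    k = len(item_scores)
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Four 5-point Likert items answered by six (hypothetical) respondents.
items = [
    [4, 5, 3, 4, 4, 5],
    [4, 4, 3, 5, 4, 5],
    [3, 5, 2, 4, 4, 4],
    [4, 5, 3, 5, 3, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # values above 0.7 are typically deemed acceptable
```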
Generating Customized Verifiers for Automatically Generated Code
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2008-01-01
Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.
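The paper's annotation schemas and pattern notation are not shown in the abstract; the sketch below only illustrates the general idea of pattern-triggered annotation weaving, using an invented loop idiom and an invented annotation syntax.

```python
import re

# Toy annotation weaver: recognize a simple array-initialization idiom and
# attach the loop invariant and postcondition that a Hoare-style safety
# proof of "every element is initialized" would need.  Both the pattern and
# the annotation syntax are invented for illustration; the actual system
# uses declarative schemas compiled into analysis functions.

INIT_LOOP = re.compile(r"for (\w+) in range\((\w+)\):\s*(\w+)\[\1\] = .+")

def weave_annotations(line):
    m = INIT_LOOP.match(line.strip())
    if not m:
        return [line]
    i, n, arr = m.groups()
    return [
        f"# invariant: forall j. 0 <= j < {i} ==> initialized({arr}[j])",
        line,
        f"# post: forall j. 0 <= j < {n} ==> initialized({arr}[j])",
    ]

generated = "for k in range(n): buf[k] = 0.0"
print("\n".join(weave_annotations(generated)))
```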
A strategic approach for Water Safety Plans implementation in Portugal.
Vieira, Jose M P
2011-03-01
Effective risk assessment and risk management approaches in public drinking water systems can benefit from a systematic process for hazards identification and effective management control based on the Water Safety Plan (WSP) concept. Good results from WSP development and implementation in a small number of Portuguese water utilities have shown that a more ambitious nationwide strategic approach to disseminate this methodology is needed. However, the establishment of strategic frameworks for systematic and organic scaling-up of WSP implementation at a national level requires major constraints to be overcome: lack of legislation and policies and the need for appropriate monitoring tools. This study presents a framework to inform future policy making by understanding the key constraints and needs related to institutional, organizational and research issues for WSP development and implementation in Portugal. This methodological contribution for WSP implementation can be replicated at a global scale. National health authorities and the Regulator may promote changes in legislation and policies. Independent global monitoring and benchmarking are adequate tools for measuring the progress over time and for comparing the performance of water utilities. Water utilities self-assessment must include performance improvement, operational monitoring and verification. Research and education and resources dissemination ensure knowledge acquisition and transfer.
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, S R; Bihari, B L; Salari, K
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
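Code verification against problems with known solutions usually comes down to checking an observed order of accuracy under grid refinement; the sketch below is a generic illustration of that calculation (the error values are placeholders, not ARES results).

```python
import math

# Observed order of accuracy from a grid-refinement study:
#   p = log(E_coarse / E_fine) / log(h_coarse / h_fine)
# where E is an error norm against the exact solution (e.g., Sod or Sedov).
# The error values below are made up to illustrate the calculation.

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

refinement = [  # (cell size h, L1 error against exact solution)
    (0.04, 3.2e-3),
    (0.02, 1.7e-3),
    (0.01, 8.8e-4),
]
for (h1, e1), (h2, e2) in zip(refinement, refinement[1:]):
    print(f"h {h1}->{h2}: observed order = {observed_order(h1, e1, h2, e2):.2f}")
```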
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
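The C++ library is not reproduced in the abstract; the sketch below illustrates, with a minimal interval type written for this example, the kind of consistency check such a library enables: the interval evaluation of an expression over a box must enclose every value sampled inside the box.

```python
import random

# Minimal interval arithmetic, written only to illustrate the consistency
# check used to verify benchmark descriptions: the interval evaluation of an
# expression over a box must enclose every value sampled inside that box.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi

# A toy benchmark expression and its interval counterpart (same formula).
def f(x):
    return x * x - 2.0 * x + 1.0

def f_interval(box):
    one = Interval(1.0, 1.0)
    two = Interval(2.0, 2.0)
    return box * box - two * box + one

box = Interval(-1.0, 2.0)
bound = f_interval(box)
samples = [random.uniform(box.lo, box.hi) for _ in range(1000)]
assert all(bound.contains(f(x)) for x in samples)
print(f"interval bound [{bound.lo}, {bound.hi}] encloses all sampled values")
```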
Wrong-site nerve blocks: A systematic literature review to guide principles for prevention.
Deutsch, Ellen S; Yonash, Robert A; Martin, Donald E; Atkins, Joshua H; Arnold, Theresa V; Hunt, Christina M
2018-05-01
Wrong-site nerve blocks (WSBs) are a significant, though rare, source of perioperative morbidity. WSBs constitute the most common type of perioperative wrong-site procedure reported to the Pennsylvania Patient Safety Authority. This systematic literature review aggregates information about the incidence, patient consequences, and conditions that contribute to WSBs, as well as evidence-based methods to prevent them. A systematic search of English-language publications was performed, using the PRISMA process. Seventy English-language publications were identified. Analysis of four publications reporting on at least 10,000 blocks provides a rate of 0.52 to 5.07 WSB per 10,000 blocks, unilateral blocks, or "at risk" procedures. The most commonly mentioned potential consequence was local anesthetic toxicity. The most commonly mentioned contributory factors were time pressure, personnel factors, and lack of site-mark visibility (including no site mark placed). Components of the block process that were addressed include preoperative nerve-block verification, nerve-block site marking, time-outs, and the healthcare facility's structure and culture of safety. A lack of uniform reporting criteria and divergence in the data and theories presented may reflect the variety of circumstances affecting when and how nerve blocks are performed, as well as the infrequency of a WSB. However, multiple authors suggest three procedural steps that may help to prevent WSBs: (1) verify the nerve-block procedure using multiple sources of information, including the patient; (2) identify the nerve-block site with a visible mark; and (3) perform time-outs immediately prior to injection or instillation of the anesthetic. Hospitals, ambulatory surgical centers, and anesthesiology practices should consider creating site-verification processes with clinician input and support to develop sustainable WSB-prevention practices. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lobanov, P. D.; Usov, E. V.; Butov, A. A.; Pribaturin, N. A.; Mosunova, N. A.; Strizhov, V. F.; Chukhno, V. I.; Kutlimetov, A. E.
2017-10-01
Experiments with impulse gas injection into model coolants, such as water or the Rose alloy, performed at the Novosibirsk Branch of the Nuclear Safety Institute, Russian Academy of Sciences, are described. The test facility and the experimental conditions are presented in detail. The dependence of coolant pressure on the injected gas flow and the time of injection was determined. The purpose of these experiments was to verify the physical models of thermohydraulic codes for calculation of the processes that could occur during the rupture of tubes of a steam generator with heavy liquid metal coolant or during fuel rod failure in water-cooled reactors. The experimental results were used for verification of the HYDRA-IBRAE/LM system thermohydraulic code developed at the Nuclear Safety Institute, Russian Academy of Sciences. The models of gas bubble transportation in a vertical channel that are used in the code are described in detail. A two-phase flow pattern diagram and correlations for prediction of friction of bubbles and slugs as they float up in a vertical channel and of two-phase flow friction factor are presented. Based on the results of simulation of these experiments using the HYDRA-IBRAE/LM code, the arithmetic mean error in predicted pressures was calculated, and the predictions were analyzed considering the uncertainty in the input data, the geometry of the test facility, and the error of the empirical correlation. The analysis revealed major factors having a considerable effect on the predictions. Recommendations are given on updating the experimental results and improving the models used in the thermohydraulic code.
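The figure of merit quoted for the code-to-experiment comparison, an arithmetic mean error in predicted pressures, can be made concrete with a short sketch; the pressure values below are placeholders rather than the experimental data.

```python
# Arithmetic mean error between code-predicted and measured pressures, the
# figure of merit quoted for the HYDRA-IBRAE/LM comparison.  The values are
# placeholders; the experimental data are not reproduced in the abstract.

measured  = [0.110, 0.135, 0.158, 0.171, 0.166]   # MPa
predicted = [0.112, 0.131, 0.162, 0.168, 0.170]   # MPa

errors = [p - m for p, m in zip(predicted, measured)]
mean_error = sum(errors) / len(errors)
mean_abs_error = sum(abs(e) for e in errors) / len(errors)
print(f"mean error = {mean_error:+.4f} MPa, mean |error| = {mean_abs_error:.4f} MPa")
```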
Aqueous cleaning and verification processes for precision cleaning of small parts
NASA Technical Reports Server (NTRS)
Allen, Gale J.; Fishell, Kenneth A.
1995-01-01
The NASA Kennedy Space Center (KSC) Materials Science Laboratory (MSL) has developed a totally aqueous process for precision cleaning and verification of small components. In 1990 the Precision Cleaning Facility at KSC used approximately 228,000 kg (500,000 lbs) of chlorofluorocarbon (CFC) 113 in the cleaning operations. It is estimated that current CFC 113 usage has been reduced by 75 percent and it is projected that a 90 percent reduction will be achieved by the end of calendar year 1994. The cleaning process developed utilizes aqueous degreasers, aqueous surfactants, and ultrasonics in the cleaning operation and an aqueous surfactant, ultrasonics, and Total Organic Carbon Analyzer (TOCA) in the nonvolatile residue (NVR) and particulate analysis for verification of cleanliness. The cleaning and verification process is presented in its entirety, with comparison to the CFC 113 cleaning and verification process, including economic and labor costs/savings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Yin, Y; Lin, X
Purpose: To assess the preliminary feasibility of an automated treatment planning verification system for pre-treatment dose verification of cervical cancer IMRT. Methods: Clinical IMRT treatment planning data were randomly selected for twenty patients with cervical cancer; all IMRT plans used 7 fields to meet the dosimetric goals and were created in commercial treatment planning systems (Pinnacle Version 9.2 and Eclipse Version 13.5). The plans were exported to the Mobius 3D (M3D) server, and percentage differences in the volume of each region of interest (ROI) and in the calculated dose to the target and organs at risk were evaluated in order to validate the accuracy of the automated treatment planning verification system. Results: The volume differences for Pinnacle versus M3D were smaller than those for Eclipse versus M3D; the largest differences were 0.22 ± 0.69% and 3.5 ± 1.89% for Pinnacle and Eclipse, respectively. M3D showed slightly better agreement in the dose to the target and organs at risk compared with the TPS. After recalculating the plans with M3D, the dose differences for Pinnacle were on average smaller than those for Eclipse, and results were within 3%. Conclusion: Using the automated treatment planning verification system to validate the accuracy of plans is convenient, but the acceptable range of differences still requires more clinical patient cases to determine. At present, it should be used as a secondary check tool to improve safety in clinical treatment planning.
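The agreement metric used here is a straightforward percentage difference between the treatment planning system and the independent M3D calculation; the sketch below shows that bookkeeping with invented ROI names and numbers.

```python
# Percentage difference between TPS and independent-check (M3D) values for a
# few ROIs, the metric used to judge agreement of volume and dose.  All
# numbers and ROI names are illustrative placeholders.

def pct_diff(tps_value, check_value):
    return 100.0 * (check_value - tps_value) / tps_value

rois = {   # ROI: (TPS volume cm^3, M3D volume cm^3, TPS mean dose Gy, M3D mean dose Gy)
    "PTV":     (512.0, 515.1, 50.4, 50.1),
    "bladder": (210.0, 211.6, 38.2, 37.6),
    "rectum":  ( 65.0,  64.4, 35.7, 36.3),
}
for name, (v_tps, v_m3d, d_tps, d_m3d) in rois.items():
    dose_diff = pct_diff(d_tps, d_m3d)
    print(f"{name:8s} volume diff {pct_diff(v_tps, v_m3d):+5.2f}%  "
          f"mean-dose diff {dose_diff:+5.2f}%  "
          f"{'OK' if abs(dose_diff) <= 3.0 else 'investigate'}")
```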
The Role and Quality of Software Safety in the NASA Constellation Program
NASA Technical Reports Server (NTRS)
Layman, Lucas; Basili, Victor R.; Zelkowitz, Marvin V.
2010-01-01
In this study, we examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Obtaining an accurate, program-wide picture of software safety risk is difficult across multiple, independently-developing systems. We leverage one source of safety information, hazard analysis, to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. The goal of this research is two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to quantify the level of risk presented by software in the hazard analysis. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. To quantify the importance of software, we collected metrics based on the number of software-related causes and controls of hazardous conditions. To quantify the level of risk presented by software, we created a metric scheme to measure the specificity of these software causes. We found that from 49-70% of hazardous conditions in the three systems could be caused by software or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. Furthermore, 10-12% of all controls were software-based. There is potential for inaccuracy in these counts, however, as software causes are not consistently scoped, and the presence of software in a cause or control is not always clear. The application of our software specificity metrics also identified risks in the hazard reporting process. In particular, we found that a number of traceability risks in the hazard reports may impede verification of software and system safety.
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
24 CFR 242.68 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Social Security and Employer Identification Numbers. 242.68 Section 242.68 Housing and Urban Development... Requirements § 242.68 Disclosure and verification of Social Security and Employer Identification Numbers. The requirements set forth in 24 CFR part 5, regarding the disclosure and verification of Social Security Numbers...
Guidelines for qualifying cleaning and verification materials
NASA Technical Reports Server (NTRS)
Webb, D.
1995-01-01
This document is intended to provide guidance in identifying technical issues which must be addressed in a comprehensive qualification plan for materials used in cleaning and cleanliness verification processes. Information presented herein is intended to facilitate development of a definitive checklist that should address all pertinent materials issues when down selecting a cleaning/verification media.
This protocol was developed under the Environmental Protection Agency's Environmental Technology Verification (ETV) Program, and is intended to be used as a guide in preparing laboratory test plans for the purpose of verifying the performance of grouting materials used for infra...
24 CFR 242.68 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Social Security and Employer Identification Numbers. 242.68 Section 242.68 Housing and Urban Development... Requirements § 242.68 Disclosure and verification of Social Security and Employer Identification Numbers. The requirements set forth in 24 CFR part 5, regarding the disclosure and verification of Social Security Numbers...
24 CFR 242.68 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Social Security and Employer Identification Numbers. 242.68 Section 242.68 Housing and Urban Development... Requirements § 242.68 Disclosure and verification of Social Security and Employer Identification Numbers. The requirements set forth in 24 CFR part 5, regarding the disclosure and verification of Social Security Numbers...
24 CFR 242.68 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Social Security and Employer Identification Numbers. 242.68 Section 242.68 Housing and Urban Development... Requirements § 242.68 Disclosure and verification of Social Security and Employer Identification Numbers. The requirements set forth in 24 CFR part 5, regarding the disclosure and verification of Social Security Numbers...
24 CFR 242.68 - Disclosure and verification of Social Security and Employer Identification Numbers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Social Security and Employer Identification Numbers. 242.68 Section 242.68 Housing and Urban Development... Requirements § 242.68 Disclosure and verification of Social Security and Employer Identification Numbers. The requirements set forth in 24 CFR part 5, regarding the disclosure and verification of Social Security Numbers...
Proceedings of the Twenty-Third Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1999-01-01
The Twenty-third Annual Software Engineering Workshop (SEW) provided 20 presentations designed to further the goals of the Software Engineering Laboratory (SEL) of NASA-GSFC. The presentations were selected based on their creativity. The sessions, held on 2-3 December 1998, centered on the SEL, Experimentation, Inspections, Fault Prediction, Verification and Validation, and Embedded Systems and Safety-Critical Systems.
The Search for Nonflammable Solvent Alternatives for Cleaning Aerospace Oxygen Systems
NASA Technical Reports Server (NTRS)
Mitchell, Mark; Lowrey, Nikki
2012-01-01
Oxygen systems are susceptible to fires caused by particle and nonvolatile residue (NVR) contaminants; therefore, cleaning and cleanliness verification are essential for system safety. Cleaning solvents used on oxygen system components must either be nonflammable in pure oxygen or their complete removal must be assured for system safety. CFC-113 was the solvent of choice before 1996 because it was effective, least toxic, compatible with most materials of construction, and non-reactive with oxygen. When CFC-113 was phased out in 1996, HCFC-225 was selected as an interim replacement for cleaning propulsion oxygen systems at NASA. HCFC-225 production is scheduled for phase-out on January 1, 2015. HCFC-225 (AK-225G) is used extensively at Marshall Space Flight Center and Stennis Space Center for cleaning and NVR verification on large propulsion oxygen systems and on propulsion test stands and ground support equipment. Many components are too large for ultrasonic agitation, which is necessary for effective aqueous cleaning and NVR sampling. Test stand equipment must be cleaned prior to installation of test hardware, and many items must be cleaned by wipe or flush in situ, where complete removal of a flammable solvent cannot be assured. The search for a replacement solvent for these applications is ongoing.
General-Purpose Heat Source Safety Verification Test Program: Edge-on flyer plate tests
NASA Astrophysics Data System (ADS)
George, T. G.
1987-03-01
The radioisotope thermoelectric generator (RTG) that will supply power for the Galileo and Ulysses space missions contains 18 General-Purpose Heat Source (GPHS) modules. The GPHS modules provide power by transmitting the heat of Pu-238 alpha-decay to an array of thermoelectric elements. Each module contains four 238PuO2-fueled clads and generates 250 W(t). Because the possibility of a launch vehicle explosion always exists, and because such an explosion could generate a field of high-energy fragments, the fueled clads within each GPHS module must survive fragment impact. The edge-on flyer plate tests were included in the Safety Verification Test series to provide information on the module/clad response to the impact of high-energy plate fragments. The test results indicate that the edge-on impact of a 3.2-mm-thick, aluminum-alloy (2219-T87) plate traveling at 915 m/s causes the complete release of fuel from capsules contained within a bare GPHS module, and that the threshold velocity sufficient to cause the breach of a bare, simulant-fueled clad impacted by a 3.5-mm-thick, aluminum-alloy (5052-TO) plate is approximately 140 m/s.
A Study on Performance and Safety Tests of Electrosurgical Equipment
Tavakoli Golpaygani, A.; Movahedi, M.M.; Reza, M.
2016-01-01
Introduction: Modern medicine employs a wide variety of instruments with different physiological effects and measurement functions. Periodic verifications are routinely used in legal metrology for industrial measuring instruments. The correct operation of electrosurgical generators is essential to ensure patient safety and to manage the risks associated with the use of high- and low-frequency electrical currents on the human body. Material and Methods: The metrological reliability of 20 electrosurgical units in six hospitals (3 private and 3 public) was evaluated in one of the provinces of Iran according to international and national standards. Results: The results show that the HF leakage current of ground-referenced generators is higher than that of isolated generators, that the power analysis of only eight units delivered acceptable output values, and that the precision of the output power measurements was low. Conclusion: The results indicate a need for new and stricter regulations on periodic performance verification and for medical equipment quality control programs, especially for high-risk instruments. It is also necessary to provide training courses for operating staff in the field of metrology in medicine so that they are acquainted with the critical parameters needed to obtain accurate results with operating room equipment. PMID:27853725
From Bridges and Rockets, Lessons for Software Systems
NASA Technical Reports Server (NTRS)
Holloway, C. Michael
2004-01-01
Although differences exist between building software systems and building physical structures such as bridges and rockets, enough similarities exist that software engineers can learn lessons from failures in traditional engineering disciplines. This paper draws lessons from two well-known failures, the collapse of the Tacoma Narrows Bridge in 1940 and the destruction of the space shuttle Challenger in 1986, and applies these lessons to software system development. The following specific applications are made: (1) the verification and validation of a software system should not be based on a single method, or a single style of methods; (2) the tendency to embrace the latest fad should be overcome; and (3) the introduction of software control into safety-critical systems should be done cautiously.
Proximal sensing for soil carbon accounting
NASA Astrophysics Data System (ADS)
England, Jacqueline R.; Viscarra Rossel, Raphael A.
2018-05-01
Maintaining or increasing soil organic carbon (C) is vital for securing food production and for mitigating greenhouse gas (GHG) emissions, climate change, and land degradation. Some land management practices in cropping, grazing, horticultural, and mixed farming systems can be used to increase organic C in soil, but to assess their effectiveness, we need accurate and cost-efficient methods for measuring and monitoring the change. To determine the stock of organic C in soil, one requires measurements of soil organic C concentration, bulk density, and gravel content, but using conventional laboratory-based analytical methods is expensive. Our aim here is to review the current state of proximal sensing for the development of new soil C accounting methods for emissions reporting and in emissions reduction schemes. We evaluated sensing techniques in terms of their rapidity, cost, accuracy, safety, readiness, and their state of development. The most suitable method for measuring soil organic C concentrations appears to be visible-near-infrared (vis-NIR) spectroscopy and, for bulk density, active gamma-ray attenuation. Sensors for measuring gravel have not been developed, but an interim solution with rapid wet sieving and automated measurement appears useful. Field-deployable, multi-sensor systems are needed for cost-efficient soil C accounting. Proximal sensing can be used for soil organic C accounting, but the methods need to be standardized and procedural guidelines need to be developed to ensure proficient measurement and accurate reporting and verification. These are particularly important if the schemes use financial incentives for landholders to adopt management practices to sequester soil organic C. We list and discuss requirements for developing new soil C accounting methods based on proximal sensing, including requirements for recording, verification, and auditing.
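Although the review does not write it out, the three measured quantities enter the stock estimate in a standard way; a commonly used form for a single soil layer (symbols and units chosen here for illustration, not prescribed by any particular accounting scheme) is:

```latex
% Soil organic carbon stock of one layer (a commonly used form; symbols are
% generic):
%   C    - organic C concentration of the fine-earth fraction (g C kg^{-1})
%   \rho - bulk density (g cm^{-3}),  d - layer thickness (cm)
%   g    - gravel (>2 mm) content expressed as a fraction
\begin{equation*}
  \mathrm{SOC\ stock}\;(\mathrm{t\,C\,ha^{-1}})
    = C \times \rho \times d \times (1 - g) \times 0.1
\end{equation*}
% The factor 0.1 converts g C kg^{-1} \cdot g cm^{-3} \cdot cm to t C ha^{-1}.
```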
Closing the Certification Gaps in Adaptive Flight Control Software
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2008-01-01
Over the last five decades, extensive research has been performed to design and develop adaptive control systems for aerospace systems and other applications where the capability to change controller behavior at different operating conditions is highly desirable. Although adaptive flight control has been partially implemented through the use of gain-scheduled control, truly adaptive control systems using learning algorithms and on-line system identification methods have not seen commercial deployment. The reason is that the certification process for adaptive flight control software for use in national air space has not yet been decided. The purpose of this paper is to examine the gaps between the state-of-the-art methodologies used to certify conventional (i.e., non-adaptive) flight control system software and what will likely be needed to satisfy FAA airworthiness requirements. These gaps include the lack of a certification plan or process guide, the need to develop verification and validation tools and methodologies to analyze adaptive controller stability and convergence, as well as the development of metrics to evaluate adaptive controller performance at off-nominal flight conditions. This paper presents the major certification gap areas, a description of the current state of the verification methodologies, and what further research efforts will likely be needed to close the gaps remaining in current certification practices. It is envisioned that closing the gap will require certain advances in simulation methods, comprehensive methods to determine learning algorithm stability and convergence rates, the development of performance metrics for adaptive controllers, the application of formal software assurance methods, the application of on-line software monitoring tools for adaptive controller health assessment, and the development of a certification case for adaptive system safety of flight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samuel, D; Testa, M; Park, Y
Purpose: In-vivo dose and beam range verification in proton therapy could play significant roles in proton treatment validation and improvement. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which, for example, could be the use of anterior fields for prostate treatment instead of opposed lateral fields as in current practice. We have developed and commissioned an integrated system with hardware, software, and workflow protocols to provide a complete solution simultaneously for both in-vivo dosimetry and range verification in proton therapy. Methods: The system uses a matrix of diodes, up to 12 in total, but separable into three groups for flexibility in application. A special amplifier was developed to capture the extremely small signals from very low proton beam current. The software was developed within iMagX, a general platform for image processing in radiation therapy applications. The range determination exploits the inherent relationship between the internal range modulation clock of the proton therapy system and the radiological depth at the point of measurement. The commissioning of the system for in-vivo dosimetry and for range verification was conducted separately using an anthropomorphic phantom. EBT films and TLDs were used for dose comparisons, and a range scan of the beam distal fall-off was used as ground truth for range verification. Results: For in-vivo dose measurement, the results were in agreement with TLD and EBT films and were within 3% of treatment planning calculations. For range verification, a precision of 0.5 mm was achieved in homogeneous phantoms, and a precision of 2 mm in an anthropomorphic pelvic phantom, except at points with significant range mixing. Conclusion: We completed the commissioning of our system for in-vivo dosimetry and range verification in proton therapy. The results suggest that the system is ready for clinical trials on patients.
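Range agreement of the kind reported above is often quantified at a fixed point on the distal fall-off, such as the depth of 80% dose; the sketch below interpolates that depth from a sampled depth-dose curve whose values are synthetic, not measured data.

```python
# Depth of the distal 80%-dose point (R80) from a sampled depth-dose curve,
# a common way to quantify range agreement of the kind reported above.  The
# profile below is synthetic and only illustrates the interpolation.

depth_cm = [10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2]
dose_rel = [1.00, 0.98, 0.90, 0.72, 0.45, 0.20, 0.05]   # normalized dose

def distal_r80(depths, doses, level=0.80):
    """Linear interpolation of the distal crossing of the given dose level."""
    for (d1, f1), (d2, f2) in zip(zip(depths, doses), zip(depths[1:], doses[1:])):
        if f1 >= level > f2:                      # distal edge: dose falling
            return d1 + (f1 - level) * (d2 - d1) / (f1 - f2)
    raise ValueError("dose level not crossed on the distal edge")

print(f"R80 = {distal_r80(depth_cm, dose_rel):.2f} cm")
```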
What is the Final Verification of Engineering Requirements?
NASA Technical Reports Server (NTRS)
Poole, Eric
2010-01-01
This slide presentation reviews the process of development through the final verification of engineering requirements. The definition of the requirements is driven by basic needs and should be reviewed by both the supplier and the customer. All involved need to agree upon the formal requirements, including any changes to the original requirements document. After the requirements have been developed, the engineering team begins to design the system. The final design is reviewed by other organizations. The final operational system must satisfy the original requirements, though many verifications should be performed during the process. The verification methods used are test, inspection, analysis, and demonstration. The plan for verification should be created once the system requirements are documented. The plan should include assurances that every requirement is formally verified, that the methods and the responsible organizations are specified, and that the plan is reviewed by all parties. The options of having the engineering team involved in all phases of development, as opposed to having some other organization continue the process once the design is complete, are discussed.
Formal Safety Certification of Aerospace Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
In principle, formal methods offer many advantages for aerospace software development: they can help to achieve ultra-high reliability, and they can be used to provide evidence of the reliability claims which can then be subjected to external scrutiny. However, despite years of research and many advances in the underlying formalisms of specification, semantics, and logic, formal methods are not much used in practice. In our opinion this is related to three major shortcomings. First, the application of formal methods is still expensive because they are labor- and knowledge-intensive. Second, they are difficult to scale up to complex systems because they are based on deep mathematical insights about the behavior of the systems (i.e., they rely on the "heroic proof"). Third, the proofs can be difficult to interpret, and typically stand in isolation from the original code. In this paper, we describe a tool for formally demonstrating safety-relevant aspects of aerospace software, which largely circumvents these problems. We focus on safety properties because it has been observed that safety violations such as out-of-bounds memory accesses or use of uninitialized variables constitute the majority of the errors found in the aerospace domain. In our approach, safety means that the program will not violate a set of rules that can range from simple memory access rules to high-level flight rules. These different safety properties are formalized as different safety policies in Hoare logic, which are then used by a verification condition generator along with the code and logical annotations in order to derive formal safety conditions; these are then proven using an automated theorem prover. Our certification system is currently integrated into a model-based code generation toolset that generates the annotations together with the code. However, this automated formal certification technology is not exclusively constrained to our code generator and could, in principle, also be integrated with other code generators such as Real-Time Workshop or even applied to legacy code. Our approach circumvents the historical problems with formal methods by increasing the degree of automation on all levels. The restriction to safety policies (as opposed to arbitrary functional behavior) results in simpler proof problems that can generally be solved by fully automatic theorem provers. An automated linking mechanism between the safety conditions and the code provides some of the traceability mandated by process standards such as DO-178B. An automated explanation mechanism uses semantic markup added by the verification condition generator to produce natural-language explanations of the safety conditions and thus supports their interpretation in relation to the code. An automatically generated certification browser lets users inspect the (generated) code along with the safety conditions (including textual explanations), and uses hyperlinks to automate tracing between the two levels. Here, the explanations reflect the logical structure of the safety obligation, but the mechanism can in principle be customized using different sets of domain concepts. The interface also provides some limited control over the certification process itself.
Our long-term goal is a seamless integration of certification, code generation, and manual coding that results in a "certified pipeline" in which specifications are automatically transformed into executable code, together with the supporting artifacts necessary for achieving and demonstrating the high level of assurance needed in the aerospace domain.
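The certification tool itself is not shown here; to make the shape of a generated safety condition concrete, the sketch below states the memory-safety obligation for a single in-loop array access and discharges it by brute force over a small bounded domain, standing in for the automated theorem prover (names and bounds are illustrative).

```python
# A Hoare-style safety obligation for the array access a[i] inside
#   for i in range(0, n): a[i] = ...
# is   0 <= i < len(a)   under the precondition n <= len(a).
# Real systems discharge such verification conditions with an automated
# theorem prover; here we check the implication by brute force over a small
# bounded domain, purely to illustrate the shape of the obligation.

def safety_condition(n, length, i):
    precondition = (n <= length) and (0 <= i < n)   # loop context
    obligation = (0 <= i < length)                  # in-bounds access
    return (not precondition) or obligation         # precondition ==> obligation

BOUND = 8
assert all(safety_condition(n, length, i)
           for n in range(BOUND)
           for length in range(BOUND)
           for i in range(-1, BOUND))
print("memory-safety condition holds on the bounded domain")
```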
NASA Technical Reports Server (NTRS)
1976-01-01
System specifications to be used by the mission control center (MCC) for the shuttle orbital flight test (OFT) time frame were described. The three support systems discussed are the communication interface system (CIS), the data computation complex (DCC), and the display and control system (DCS), all of which may interface with, and share processing facilities with, other applications processing supporting current MCC programs. The MCC shall provide centralized control of the space shuttle OFT from launch through orbital flight, entry, and landing until the Orbiter comes to a stop on the runway. This control shall include the functions of vehicle management in the areas of hardware configuration (verification), flight planning, communication and instrumentation configuration management, trajectory, software and consumables, payloads management, flight safety, and verification of test conditions/environment.
24 CFR 886.305 - Disclosure and verification of Social Security and Employer Identification Numbers by owners.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Disclosure and verification of Social Security and Employer Identification Numbers by owners. 886.305 Section 886.305 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF THE ASSISTANT...
24 CFR 5.233 - Mandated use of HUD's Enterprise Income Verification (EIV) System.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Mandated use of HUD's Enterprise Income Verification (EIV) System. 5.233 Section 5.233 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development GENERAL HUD PROGRAM REQUIREMENTS; WAIVERS Disclosure...
24 CFR 5.240 - Family disclosure of income information to the responsible entity and verification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Family disclosure of income information to the responsible entity and verification. 5.240 Section 5.240 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development GENERAL HUD PROGRAM REQUIREMENTS...
Software verification plan for GCS. [guidance and control software
NASA Technical Reports Server (NTRS)
Dent, Leslie A.; Shagnea, Anita M.; Hayhurst, Kelly J.
1990-01-01
This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, Software Consideration in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of software. The plan details the verification activities to be performed at each phase in the development process, contains a step by step description of the testing procedures, and discusses all of the tools used throughout the verification process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongbin; Zhao, Haihua; Gleicher, Frederick Nathan
RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory, and is the next generation tool in the RELAP reactor safety/systems analysis application series. RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory's modern scientific software development framework – MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose various boiling water reactor (BWR) and pressurized water reactor nuclear power plants (NPPs). During Fiscal Year (FY) 2015, the RELAP-7 code has been further improved with expanded capability to support BWR and pressurized water reactor NPP analysis. The accumulator model has been developed. The code has also been coupled with other MOOSE-based applications, such as the neutronics code RattleSnake and the fuel performance code BISON, to perform multiphysics analysis. A major design requirement for the implicit algorithm in RELAP-7 is that it is capable of second-order discretization accuracy in both space and time, which eliminates the traditional first-order approximation errors. Second-order temporal accuracy is achieved by a second-order backward temporal difference, and the one-dimensional second-order accurate spatial discretization is achieved with the Galerkin approximation of Lagrange finite elements. During FY-2015, we have performed numerical verification work to confirm that the RELAP-7 code indeed achieves second-order accuracy in both time and space for single-phase models at the system level.
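For reference, the constant-step form of the second-order backward difference (BDF2) commonly used for such temporal discretizations, written here generically rather than copied from the RELAP-7 documentation, is:

```latex
% Second-order backward difference (BDF2) with constant step \Delta t for
% du/dt = f(u): the new solution u^{n+1} is found implicitly from
\begin{equation*}
  \frac{3\,u^{n+1} - 4\,u^{n} + u^{n-1}}{2\,\Delta t} = f\!\left(u^{n+1}\right),
\end{equation*}
% which is consistent to O(\Delta t^{2}), matching the stated design goal of
% second-order accuracy in time.
```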
Design and Realization of Controllable Ultrasonic Fault Detector Automatic Verification System
NASA Astrophysics Data System (ADS)
Sun, Jing-Feng; Liu, Hui-Ying; Guo, Hui-Juan; Shu, Rong; Wei, Kai-Li
Ultrasonic flaw detection equipment with a remote-control interface is investigated and an automatic verification system is developed. By using Extensible Markup Language to build the protocol instruction set and the data analysis method database in the system software, the design is made controllable and the diversity of undisclosed device interfaces and protocols is accommodated. By cascading a signal generator with a fixed attenuator, a dynamic error compensation method is proposed that performs the function served by the fixed attenuator in traditional verification and improves the accuracy of the verification results. Operating results from the automatic verification system confirm the feasibility of the hardware and software architecture design and the correctness of the analysis method, while eliminating the cumbersome operations of the traditional verification process and reducing the labor intensity of test personnel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luke, S J
2011-12-20
This report describes a path forward for implementing information barriers in a future generic biological arms-control verification regime. Information barriers have become a staple of discussion in the area of arms control verification approaches for nuclear weapons and components. Information barriers, when used with a measurement system, allow for the determination that an item has sensitive characteristics without releasing any of the sensitive information. Over the last 15 years the United States (with the Russian Federation) has led the development of information barriers in the area of the verification of nuclear weapons and nuclear components. The work of the US and the Russian Federation has prompted other states (e.g., UK and Norway) to consider the merits of information barriers for possible verification regimes. In the context of a biological weapons control verification regime, the dual-use nature of the biotechnology will require protection of sensitive information while allowing for the verification of treaty commitments. A major question that has arisen is whether, in a biological weapons verification regime, the presence or absence of a weapon pathogen can be determined without revealing any sensitive or proprietary information contained in the genetic materials being declared under a verification regime. This study indicates that a verification regime could be constructed using a small number of pathogens that spans the range of known biological weapons agents. Since the number of possible pathogens is small, it is possible and prudent to treat these pathogens as analogies to attributes in a nuclear verification regime. This study has determined that there may be some information that needs to be protected in a biological weapons control verification regime. To protect this information, the study concludes that the Lawrence Livermore Microbial Detection Array may be a suitable technology for the detection of the genetic information associated with the various pathogens. In addition, it has been determined that a suitable information barrier could be applied to this technology when the verification regime has been defined. Finally, the report posits a path forward for additional development of information barriers in a biological weapons verification regime. This path forward has shown that a new analysis approach, coined Information Loss Analysis, might need to be pursued so that a numerical understanding of how information can be lost in specific measurement systems can be achieved.
Design Authority in the Test Programme Definition: The Alenia Spazio Experience
NASA Astrophysics Data System (ADS)
Messidoro, P.; Sacchi, E.; Beruto, E.; Fleming, P.; Marucchi Chierro, P.-P.
2004-08-01
In addition, since the Verification and Test Programme represents a significant part of the spacecraft development life cycle in terms of cost and time, the discussion mentioned above very often has the objective of optimizing the verification campaign by deleting or limiting some testing activities. The increased market pressure to reduce project schedule and cost is originating a dialectic process inside the project teams, involving programme management and design authorities, in order to optimize the verification and testing programme. The paper introduces the Alenia Spazio experience in this context, coming from real project life on different products and missions (science, TLC, EO, manned, transportation, military, commercial, recurrent and one-of-a-kind). Usually the applicable verification and testing standards (e.g. ECSS-E-10 part 2 "Verification" and ECSS-E-10 part 3 "Testing" [1]) are tailored to the specific project on the basis of its peculiar mission constraints. The Model Philosophy and the associated verification and test programme are defined following an iterative process which suitably combines several aspects (including, for example, test requirements and facilities), as shown in Fig. 1 (from ECSS-E-10). The cases considered are mainly oriented to thermal and mechanical verification, where the benefits of possible test programme optimizations are more significant. Considering thermal qualification and acceptance testing (i.e. Thermal Balance and Thermal Vacuum), the lessons learned from the development of several satellites are presented together with the corresponding recommended approaches. In particular, the cases are indicated in which a proper Thermal Balance Test is mandatory, and others, in the presence of a more recurrent design, where qualification by analysis could be envisaged. The importance of a proper Thermal Vacuum exposure for workmanship verification is also highlighted. Similar considerations are summarized for mechanical testing, with particular emphasis on the importance of Modal Survey, Static and Sine Vibration Tests in the qualification stage, in combination with the effectiveness of the Vibro-Acoustic Test in acceptance. The apparent relative importance of the Sine Vibration Test for workmanship verification in specific circumstances is also highlighted. (Fig. 1: Model philosophy, Verification and Test Programme definition.) The verification of the project requirements is planned through a combination of suitable verification methods (in particular Analysis and Test) at the different verification levels (from System down to Equipment), in the proper verification stages (e.g. Qualification and Acceptance).
Verification testing of the Aquionics, Inc. bersonInLine® 4250 UV System to develop the UV delivered dose flow relationship was conducted at the Parsippany-Troy Hills Wastewater Treatment Plant test site in Parsippany, New Jersey. Two full-scale reactors were mounted in series. T...
A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems
2016-03-01
insights gleaned to DoD. The autonomy community has identified significant challenges associated with test, evaluation, verification and validation of...licensure as a test, evaluation, verification, and validation (TEVV) framework that can address these challenges. IDA found that traditional...language requirements to testable (preferably machine-testable) specifications • Design of architectures that treat development and verification of
Verification testing of the Ondeo Degremont, Inc. Aquaray® 40 HO VLS Disinfection System to develop the UV delivered dose flow relationship was conducted at the Parsippany-Troy Hills wastewater treatment plant test site in Parsippany, New Jersey. Three reactor modules were m...
Signature Verification Using N-tuple Learning Machine.
Maneechot, Thanin; Kitjaidure, Yuttana
2005-01-01
This research presents a new algorithm for signature verification using an N-tuple learning machine. The features are taken from handwritten signatures captured on a digital tablet (on-line). The recognition algorithm uses four extracted features, namely horizontal and vertical pen-tip position (x-y position), pen-tip pressure, and pen altitude angles. Verification uses the N-tuple technique with Gaussian thresholding.
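For readers unfamiliar with the technique, the following is a toy sketch of an N-tuple (RAM-based) classifier applied to verification: a binarized feature vector is sampled by fixed random tuples of bit positions, each tuple addresses a small memory that records patterns seen during training, and the verification score is the fraction of tuples whose address was seen before. The bit width, tuple size, and acceptance threshold are illustrative assumptions, not values from the paper, and the Gaussian thresholding step is omitted.

```python
# Toy N-tuple (RAM-based) verifier; feature binarization is assumed to be done upstream.
import random

class NTupleVerifier:
    def __init__(self, n_bits=64, tuple_size=4, n_tuples=16, seed=0):
        rng = random.Random(seed)
        self.tuples = [rng.sample(range(n_bits), tuple_size) for _ in range(n_tuples)]
        self.memory = [set() for _ in range(n_tuples)]

    def _addresses(self, bits):
        # each tuple of bit positions forms one memory address
        return [tuple(bits[i] for i in idx) for idx in self.tuples]

    def train(self, bits):
        for mem, addr in zip(self.memory, self._addresses(bits)):
            mem.add(addr)

    def score(self, bits):
        # fraction of tuples whose address was seen during training
        hits = sum(addr in mem for mem, addr in zip(self.memory, self._addresses(bits)))
        return hits / len(self.tuples)

# usage: train on binarized genuine-signature features, accept if the score clears a threshold
v = NTupleVerifier()
genuine = [1, 0] * 32            # stand-in for a 64-bit feature code
v.train(genuine)
print(v.score(genuine) >= 0.9)   # True for the training sample
```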
Review and verification of CARE 3 mathematical model and code
NASA Technical Reports Server (NTRS)
Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.
1983-01-01
The CARE III mathematical model and code verification performed by Boeing Computer Services is documented. The mathematical model was verified for permanent and intermittent faults; the transient fault model was not addressed. The code verification was performed on CARE III, Version 3. CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.
Certification Strategies using Run-Time Safety Assurance for Part 23 Autopilot Systems
NASA Technical Reports Server (NTRS)
Hook, Loyd R.; Clark, Matthew; Sizoo, David; Skoog, Mark A.; Brady, James
2016-01-01
Part 23 aircraft operation, and in particular general aviation, is relatively unsafe when compared to other common forms of vehicle travel. Technologies currently exist that could improve safety statistics for these aircraft; however, the high burden and cost of performing the requisite safety-critical certification processes limits their proliferation. For this reason, many entities, including the Federal Aviation Administration, NASA, and the US Air Force, are considering new certification options for technologies that will improve aircraft safety. Of particular interest are low-cost autopilot systems for general aviation aircraft, as these systems have the potential to significantly improve safety statistics. This paper proposes new systems and techniques, leveraging run-time verification, for the assurance of general aviation autopilot systems; these would supplement the current certification process and provide a viable path for near-term, low-cost implementation. In addition, discussions of preliminary experimentation and of building the assurance case for a system based on these principles are provided.
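As a rough illustration of the run-time assurance idea described above (and not the systems proposed in the paper), a monitor can gate the commands of an advanced autopilot against a simple safety envelope and hand control to a trusted recovery function when the envelope is violated. The envelope limits and state variables below are illustrative assumptions.

```python
# Minimal run-time assurance sketch: pass through the advanced autopilot while the
# aircraft state stays inside a safety envelope; otherwise switch to a recovery command.
from dataclasses import dataclass

@dataclass
class State:
    bank_deg: float
    pitch_deg: float
    airspeed_kt: float

BANK_LIMIT = 30.0
PITCH_LIMIT = 15.0
MIN_SPEED, MAX_SPEED = 70.0, 160.0   # knots; illustrative only

def envelope_ok(s: State) -> bool:
    return (abs(s.bank_deg) <= BANK_LIMIT and
            abs(s.pitch_deg) <= PITCH_LIMIT and
            MIN_SPEED <= s.airspeed_kt <= MAX_SPEED)

def run_time_assurance(state: State, autopilot_cmd, recovery_cmd):
    return autopilot_cmd if envelope_ok(state) else recovery_cmd

# usage: an excessive bank angle triggers the recovery command
print(run_time_assurance(State(45.0, 5.0, 120.0), "hold heading", "wings level"))
```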
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B.
1993-03-01
This report presents the results of an oversight assessment (OA) conducted by the US Department of Energy's (DOE) Office of Environment, Safety and Health (EH) of the operational readiness review (ORR) activities for the Cold Chemical Runs (CCRs) at the Defense Waste Processing Facility (DWPF) located at Savannah River Site (SRS). The EH OA of this facility took place concurrently with an ORR performed by the DOE Office of Environmental Restoration and Waste Management (EM). The EM ORR was conducted from September 28, 1992, through October 9, 1992, although portions of the EM ORR were extended beyond this period. The EH OA evaluated the comprehensiveness and effectiveness of the EM ORR. The EH OA was designed to ascertain whether the EM ORR was thorough and demonstrated sufficient inquisitiveness to verify that the implementation of programs and procedures is adequate to assure the protection of worker safety and health. The EH OA was carried out in accordance with the protocol and procedures of the "EH Program for Oversight Assessment of Operational Readiness Evaluations for Startups and Restarts," dated September 15, 1992. Based on its OA and verification of the resolution of EH OA findings, the EH OA Team believes that the startup of the CCRs may be safely begun, pending satisfactory completion and verification of the prestart findings identified by the EM ORR. The EH OA was based primarily on an evaluation of the comprehensiveness and effectiveness of the EM ORR and addressed the following areas: industrial safety, industrial hygiene, and respiratory protection; fire protection; and chemical safety. The EH OA conducted independent "vertical-slice" reviews to confirm EM ORR results in the areas of confined-space entry, respiratory protection, fire protection, and chemical safety.
Guidance and Control Software Project Data - Volume 1: Planning Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the planning documents from the GCS project. Volume 1 contains five appendices: A. Plan for Software Aspects of Certification for the Guidance and Control Software Project; B. Software Development Standards for the Guidance and Control Software Project; C. Software Verification Plan for the Guidance and Control Software Project; D. Software Configuration Management Plan for the Guidance and Control Software Project; and E. Software Quality Assurance Activities.
Challenges and Demands on Automated Software Revision
NASA Technical Reports Server (NTRS)
Bonakdarpour, Borzoo; Kulkarni, Sandeep S.
2008-01-01
In the past three decades, automated program verification has undoubtedly been one of the most successful contributions of formal methods to software development. However, when verification of a program against a logical specification discovers bugs in the program, manual manipulation of the program is needed in order to repair it. Thus, in the face of the numerous unverified and uncertified legacy software systems in virtually any organization, tools that enable engineers to automatically verify and subsequently fix existing programs are highly desirable. In addition, since requirements of software systems often evolve during the software life cycle, the issue of incomplete specification has become a customary fact in many design and development teams. Thus, automated techniques that revise existing programs according to new specifications are of great assistance to designers, developers, and maintenance engineers. As a result, incorporating program synthesis techniques, where an algorithm generates a program that is correct by construction, seems to be a necessity. The notion of manual program repair described above turns out to be even more complex when programs are integrated with large collections of sensors and actuators in hostile physical environments in the so-called cyber-physical systems. When such systems are safety/mission-critical (e.g., in avionics systems), it is essential that the system reacts to physical events such as faults, delays, signals, attacks, etc., so that the system specification is not violated. In fact, since it is impossible to anticipate all possible such physical events at design time, it is highly desirable to have automated techniques that revise programs with respect to newly identified physical events according to the system specification.
NASA Astrophysics Data System (ADS)
Tajedi, Noor Aqilah A.; Sukor, Nur Sabahiah A.; Ismail, Mohd Ashraf M.; Shamsudin, Shahrul A.
2017-10-01
An Emergency Response Plan (ERP) is an essential safety procedure that needs to be taken into account for railway operations, especially for underground railway networks. Several parameters need to be taken into consideration in planning an ERP, such as the design of tunnels and intervention shafts and the operation procedures for underground transportation systems. Therefore, the purpose of this paper is to observe and analyse an ERP exercise for the underground train network at the LRT Kelana Jaya Line. The exercise was conducted at one of the underground intervention shaft exits, where the height of the staircase from the bottom floor to the upper floor was 24.59 metres. Four cameras were located at selected levels of the shaft, and 71 participants were assigned to the evacuation exercise. The participants were tagged with a number at the front and back of their safety vests. Ten respondents were randomly selected to give details of their height and weight and, at the same time, to self-record the time taken to evacuate from the bottom to the top of the shaft. The video footage taken during the exercise was analysed, and the data were used for the verification process in the buildingEXODUS simulation software. It was found that the results of the ERP experiment were in close agreement with the simulation results, thereby successfully verifying the simulation. This verification process was important to ensure that the results of the simulation were in accordance with the real situation. A further evacuation analysis therefore made use of the results from this verification.
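A minimal sketch of the comparison step described above, assuming only that observed drill times are checked against simulated times within a relative-error tolerance; the times and the 10% tolerance are invented for illustration and are not data from the study.

```python
# Compare observed evacuation times from a drill with simulated times and judge
# agreement against a relative-error tolerance.
def within_tolerance(observed_s, simulated_s, tol=0.10):
    return abs(simulated_s - observed_s) / observed_s <= tol

observed = [312.0, 298.0, 305.0]    # hypothetical drill times (s) for sampled participants
simulated = [301.0, 310.0, 297.0]   # hypothetical simulation output (s)

agree = all(within_tolerance(o, s) for o, s in zip(observed, simulated))
print("simulation consistent with drill data:", agree)
```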
Concurrent engineering research center
NASA Technical Reports Server (NTRS)
Callahan, John R.
1995-01-01
The projects undertaken by the Concurrent Engineering Research Center (CERC) at West Virginia University are reported and summarized. CERC's participation in the Department of Defense's Defense Advanced Research Project relating to technology needed to improve the product development process is described, particularly in the area of advanced weapon systems. The efforts committed to improving collaboration among the diverse and distributed health care providers are reported, along with the research activities for NASA in Independent Software Verification and Validation. CERC also takes part in the electronic respirator certification initiated by the National Institute for Occupational Safety and Health, as well as in the efforts to find a solution to the problem of producing environment-friendly end-products for product developers worldwide. The 3M Fiber Metal Matrix Composite Model Factory Program is discussed. CERC technologies, facilities, and personnel-related issues are described, along with its library and technical services and recent publications.
A New Approach to Defining Human Touch Temperature Standards
NASA Technical Reports Server (NTRS)
Ungar, Eugene; Stroud, Kenneth
2010-01-01
Defining touch temperature limits for skin contact with both hot and cold objects is important to prevent pain and skin damage, which may affect task performance or become a safety concern. Pain and skin damage depend on the skin temperature during contact, which depends on the contact thermal conductance, the object's initial temperature, and its material properties. However, previous spacecraft standards have incorrectly defined touch temperature limits in terms of a single object temperature value for all materials, or have provided limited material-specific values which do not cover the gamut of likely designs. A new approach has been developed for updated NASA standards, which defines touch temperature limits in terms of skin temperature at pain onset for bare skin contact with hot and cold objects. The authors have developed an analytical verification method for safe hot and cold object temperatures for contact times from 1 second to infinity.
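For context, a minimal illustration of the kind of contact-temperature estimate that underlies such an analysis: for sudden contact between two semi-infinite bodies, the interface temperature is the effusivity-weighted mean of the two initial temperatures. The material properties and the 44 C pain-onset threshold used below are illustrative assumptions, not values from the NASA standard.

```python
# Interface temperature for sudden contact of two semi-infinite bodies:
# T_c = (e1*T1 + e2*T2) / (e1 + e2), where e = sqrt(k * rho * cp) is thermal effusivity.
import math

def effusivity(k, rho, cp):
    return math.sqrt(k * rho * cp)           # W*s^0.5 / (m^2*K)

def contact_temperature(T_skin, e_skin, T_obj, e_obj):
    return (e_skin * T_skin + e_obj * T_obj) / (e_skin + e_obj)

e_skin = effusivity(0.37, 1100.0, 3400.0)    # rough soft-tissue properties
e_alum = effusivity(167.0, 2700.0, 900.0)    # aluminum
T_c = contact_temperature(34.0, e_skin, 55.0, e_alum)
print(f"interface ~ {T_c:.1f} C, pain-onset threshold exceeded: {T_c > 44.0}")
```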
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lala, J.H.; Nagle, G.A.; Harper, R.E.
1993-05-01
The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev control computer system has been designed using a design-for-validation methodology developed earlier under NASA and SDIO sponsorship for real-time aerospace applications. The present study starts by defining the maglev mission scenario and ends with the definition of a maglev control computer architecture. Key intermediate steps included definitions of functional and dependability requirements, synthesis of two candidate architectures, development of qualitative and quantitative evaluation criteria, and analytical modeling of the dependability characteristics of the two architectures. Finally, the applicability of the design-for-validation methodology was also illustrated by applying it to the German Transrapid TR07 maglev control system.
NASA's Commercial Crew Program, The Next Step in U.S. Space Transportation
NASA Technical Reports Server (NTRS)
Mango, Edward J.; Thomas, Rayelle E.
2013-01-01
The Commercial Crew Program (CCP) is leading NASA's efforts to develop the next U.S. capability for crew transportation and rescue services to and from the International Space Station (ISS) by the mid-decade timeframe. The outcome of this capability is expected to stimulate and expand the U.S. space transportation industry. NASA is relying on its decades of human space flight experience to certify U.S. crewed vehicles to the ISS and is doing so in a two phase certification approach. NASA Certification will cover all aspects of a crew transportation system, including development, test, evaluation, and verification; program management and control; flight readiness certification; launch, landing, recovery, and mission operations; sustaining engineering and maintenance/upgrades. To ensure NASA crew safety, NASA Certification will validate technical and performance requirements, verify compliance with NASA requirements, validate the crew transportation system operates in appropriate environments, and quantify residual risks.
Development and Testing of a High Stability Engine Control (HISTEC) System
NASA Technical Reports Server (NTRS)
Orme, John S.; DeLaat, John C.; Southwick, Robert D.; Gallops, George W.; Doane, Paul M.
1998-01-01
Flight tests were recently completed to demonstrate an inlet-distortion-tolerant engine control system. These flight tests were part of NASA's High Stability Engine Control (HISTEC) program. The objective of the HISTEC program was to design, develop, and flight demonstrate an advanced integrated engine control system that uses measurement-based, real-time estimates of inlet airflow distortion to enhance engine stability. With improved stability and tolerance of inlet airflow distortion, future engine designs may benefit from a reduction in design stall-margin requirements and enhanced reliability, with a corresponding increase in performance and decrease in fuel consumption. This paper describes the HISTEC methodology, presents an aircraft test bed description (including HISTEC-specific modifications) and verification and validation ground tests. Additionally, flight test safety considerations, test plan and technique design and approach, and flight operations are addressed. Some illustrative results are presented to demonstrate the type of analysis and results produced from the flight test program.
Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers
NASA Technical Reports Server (NTRS)
Bjorner, Nikolaj
2010-01-01
The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied in various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), and program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues, and the talk will touch on some of these and the challenges related to SMT solvers.
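A minimal z3py example (the Python API of the Z3 solver, installable as the z3-solver package) of the kind of satisfiability query described in the talk, mixing linear integer arithmetic with the theory of arrays; the specific constraints are invented for illustration.

```python
from z3 import Ints, Array, IntSort, Select, Solver, sat

x, y = Ints("x y")
a = Array("a", IntSort(), IntSort())

s = Solver()
s.add(x > 0, y == x + 3)     # linear integer arithmetic
s.add(Select(a, x) == y)     # array theory: the element of a at index x equals y
s.add(y < 10)

if s.check() == sat:
    print(s.model())         # a satisfying assignment for x, y, and a
```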
Progress of Ongoing NASA Lithium-Ion Cell Verification Testing for Aerospace Applications
NASA Technical Reports Server (NTRS)
McKissock, Barbara I.; Manzo, Michelle A.; Miller, Thomas B.; Reid, Concha M.; Bennett, William R.; Gemeiner, Russel
2008-01-01
A Lithium-ion Verification and Validation Program, with the purpose of assessing the capabilities of current aerospace lithium-ion (Li-ion) battery cells to perform in a low-earth-orbit (LEO) regime, was initiated in 2002. This program involves extensive characterization and LEO life testing at ten different combinations of depth-of-discharge, temperature, and end-of-charge voltage. The test conditions selected for the life tests are defined as part of a statistically designed test matrix developed to determine the effects of operating conditions on the performance and life of Li-ion cells. Results will be used to model and predict cell performance and degradation as a function of test operating conditions. Testing is being performed at the Naval Surface Warfare Center/Crane Division in Crane, Indiana. Testing was initiated in September 2004 with 40 Ah cells from Saft and 30 Ah cells from Lithion. The test program has been expanded with the addition of modules composed of 18650 cells from ABSL Power Solutions in April 2006 and the addition of 50 Ah cells from Mine Safety Appliances Co. (MSA) in June 2006. Preliminary results showing the average voltage and average available discharge capacity for the Saft and Lithion packs at the test conditions versus cycles are presented.
Development and Verification of Sputtered Thin-Film Nickel-Titanium (NiTi) Shape Memory Alloy (SMA)
2015-08-01
Knick, Cory R.; Morris, Christopher J. Approved for public release; distribution unlimited.
The development, verification, and comparison study between LC-MS libraries for two manufacturers’ instruments and a verified protocol are discussed. The LC-MS library protocol was verified through an inter-laboratory study that involved Federal, State, and private laboratories. ...
Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos
2016-01-01
Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework where EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model at the core of this set acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely the Case Series Characterization Tool, which has been deployed on top of the regional EHR data warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which suffer from missing background information. PMID:27123451
Vallejo-Cordoba, Belinda; González-Córdova, Aarón F
2010-07-01
This review presents an overview of the applicability of CE in the analysis of chemical and biological contaminants involved in emerging food safety issues. Additionally, the usefulness of CE-based genetic analyzers as a unique tool in food traceability verification systems is presented. First, analytical approaches for the determination of melamine and specific food allergens in different foods are discussed. Second, natural toxin analysis by CE is updated from the last review reported in 2008. Finally, the analysis of prion proteins associated with the "mad cow" crisis and the application of CE-based genetic analyzers for meat traceability are summarized.
Formal Verification of Safety Buffers for State-Based Conflict Detection and Resolution
NASA Technical Reports Server (NTRS)
Herencia-Zapana, Heber; Jeannin, Jean-Baptiste; Munoz, Cesar A.
2010-01-01
The information provided by global positioning systems is never totally exact, and there are always errors when measuring the position and velocity of moving objects such as aircraft. This paper studies the effects of these errors on the actual separation of aircraft in the context of state-based conflict detection and resolution. Assuming that the state information is uncertain but that bounds on the errors are known, this paper provides an analytical definition of a safety buffer and sufficient conditions under which this buffer guarantees that actual conflicts are detected and solved. The results are presented as theorems, which were formally proven using a mechanical theorem prover.
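A minimal sketch of the underlying idea, under the assumption that each aircraft's horizontal position error is bounded by eps: declaring a conflict whenever the measured separation falls below the required separation D inflated by the worst-case combined error guarantees that any true loss of separation is flagged. The numbers below are illustrative, not the buffer defined in the paper.

```python
# Conservative conflict detection with a buffer for bounded position uncertainty.
import math

D_NM = 5.0      # required horizontal separation (illustrative)
EPS_NM = 0.25   # bound on each aircraft's horizontal position error (illustrative)

def measured_separation(p1, p2):
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def conflict_with_buffer(p1_meas, p2_meas, d=D_NM, eps=EPS_NM):
    # inflate the threshold by the worst-case combined measurement error
    return measured_separation(p1_meas, p2_meas) < d + 2.0 * eps

print(conflict_with_buffer((0.0, 0.0), (5.3, 0.0)))  # True: true separation may be below 5 NM
```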
ESSAA: Embedded system safety analysis assistant
NASA Technical Reports Server (NTRS)
Wallace, Peter; Holzer, Joseph; Guarro, Sergio; Hyatt, Larry
1987-01-01
The Embedded System Safety Analysis Assistant (ESSAA) is a knowledge-based tool that can assist in identifying disaster scenarios. Embedded software can issue hazardous control commands to the surrounding hardware. ESSAA is intended to work from outputs to inputs, as a complement to simulation and verification methods. Rather than treating the software in isolation, it examines the context in which the software is to be deployed. Given a specified disastrous outcome, ESSAA works from a qualitative, abstract model of the complete system to infer sets of environmental conditions and/or failures that could cause that outcome. The scenarios can then be examined in depth for plausibility using existing techniques.
Unmanned Systems Safety Guide for DoD Acquisition
2007-06-27
Weapons release authorization validation. • Weapons release verification. • Weapons release abort/back-out, including clean-up or reset of weapons...conditions, clean room, stress) and other environments (e.g., software engineering environment, electromagnetic) related to system utilization. Error 22 (1...A solid or liquid energetic substance (or a mixture of substances) which is in itself capable, OUSD (AT&L) Systems and Software Engineering
2010-11-27
analysis and verification. While at Wisconsin, Dr. Gopan was awarded the CISCO fellowship for two consecutive years. Mr. John Phillips has many years...using short (56-bit) keys for encryption (e.g., with DES or RC5) [45]. Today, it is used to understand protein folding [10]. IBM's World Community...Bicocca. Dipartimento di Informatica, Sistemistica e Comunicazione. Laboratorio di Test e Analisi del Software, Milano. Technical Report LTA:2004:05
2004-12-01
statutory authority for all domestic and imported food except meat, poultry, and egg products, which are under the authority of the USDA/Food Safety...Federal agencies (e.g., USDA). (Note: HHS, through the FDA, has statutory authority for all domestic and imported food except meat, poultry, and egg...wildlife issues in disease and natural disaster issues Inspection and verification of meat, poultry, and egg products in affected areas Food
Options and Risk for Qualification of Electric Propulsion System
NASA Technical Reports Server (NTRS)
Bailey, Michelle; Daniel, Charles; Cook, Steve (Technical Monitor)
2002-01-01
Electric propulsion vehicle systems encompass a wide range of propulsion alternatives, including solar and nuclear, which present unique circumstances for qualification. This paper will address the alternatives for qualification of electric propulsion spacecraft systems. The approach taken will be to address the considerations for qualification at the various levels of systems definition. Additionally, for each level of qualification the system-level risk implications will be developed. Also, the paper will explore the implications of analysis versus test for various levels of systems definition, while retaining the objectives of a verification program. The limitations of terrestrial testing will be explored along with the risk and implications of orbital demonstration testing. The paper will seek to develop a template for structuring a verification program based on cost, risk, and value return. A successful verification program should establish controls and define the objectives of the verification compliance program. Finally, the paper will seek to address the political and programmatic factors that may impact options for system verification.
High-speed autoverifying technology for printed wiring boards
NASA Astrophysics Data System (ADS)
Ando, Moritoshi; Oka, Hiroshi; Okada, Hideo; Sakashita, Yorihiro; Shibutani, Nobumi
1996-10-01
We have developed an automated pattern verification technique. The output of an automated optical inspection system contains many false alarms, so verification is needed to distinguish between minor irregularities and serious defects. In the past, this verification was usually done manually, which led to unsatisfactory product quality. The goal of our new automated verification system is to detect pattern features on surface mount technology boards. In our system, we employ a new illumination method, which uses multiple colors and multiple directions of illumination. Images are captured with a CCD camera. We have developed a new algorithm that uses CAD data for both pattern matching and pattern structure determination. This helps to search for patterns around a defect and to examine defect definition rules. These are processed with a high-speed workstation and hard-wired circuits. The system can verify a defect within 1.5 seconds. The verification system was tested in a factory. It verified 1,500 defective samples and detected all significant defects with a false-alarm rate of only 0.1 percent.
NASA Technical Reports Server (NTRS)
Powell, John D.
2003-01-01
This document discusses the verification of the Secure Socket Layer (SSL) communication protocol as a demonstration of the Model Based Verification (MBV) portion of the verification instrument set being developed under the Reducing Software Security Risk (RSSR) Through an Integrated Approach research initiative. Code Q of the National Aeronautics and Space Administration (NASA) funds this project. The NASA Goddard Independent Verification and Validation (IV&V) facility manages this research program at the NASA agency level, and the Assurance Technology Program Office (ATPO) manages the research locally at the Jet Propulsion Laboratory (California Institute of Technology), where the research is being carried out.
Iritani, T; Koide, I; Sugimoto, Y
1997-04-01
This paper reports on a strategy to improve and renovate assembly lines, including countermeasures to prevent low back pain, during the past two decades at Toyota Motor Co. Since 1975, there have been problems with low back pain at Toyota's vehicle assembly lines. To deal with these low back pain problems, it was necessary to determine their causes and to quantitatively evaluate the burden on workers. For this purpose, functional burden indexes were developed; that is, a posture burden point and a weight burden point were determined to assess the load on the low back, and a lower extremity point and a squatting posture point were determined to assess the burden on the legs. The functional burden index, however, could be applied only to specific human functions, not to human functions in general. Since there are about 400 kinds of working patterns in vehicle assembly lines, a comprehensive burden index was required to estimate the overall burden of such work. Thus, we developed Toyota's Verification of Assembly Line (TVAL), an index for assessing the physiological stress of assembly line work, in which an equivalent bicycle ergometer workload is calculated from electromyograms taken from 20 different muscles under actual working conditions. At present, TVAL is used to measure the physiological burden of assembly work in order to give priority to improvements and to objectively demonstrate the effects of such improvements at Toyota.
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Butler, Ricky; Narkawicz, Anthony; Maddalon, Jeffrey; Hagen, George
2010-01-01
Distributed approaches for conflict resolution rely on analyzing the behavior of each aircraft to ensure that system-wide safety properties are maintained. This paper presents the criteria method, which increases the quality and efficiency of a safety assurance analysis for distributed air traffic concepts. The criteria standard is shown to provide two key safety properties: safe separation when only one aircraft maneuvers and safe separation when both aircraft maneuver at the same time. This approach is complemented with strong guarantees of correct operation through formal verification. To show that an algorithm is correct, i.e., that it always meets its specified safety property, one need only show that the algorithm satisfies the criteria. Once this is done, the algorithm inherits the safety properties of the criteria. An important consequence of this approach is that there is no requirement that both aircraft execute the same conflict resolution algorithm. Therefore, the criteria approach allows different avionics manufacturers or even different airlines to use different algorithms, each optimized according to their own proprietary concerns.
The U.S. Environmental Protection Agency established the Environmental Technology Verification Program to accelerate the development and commercialization of improved environmental technology through third party verification and reporting of product performance. Research Triangl...
Handbook: Design of automated redundancy verification
NASA Technical Reports Server (NTRS)
Ford, F. A.; Hasslinger, T. W.; Moreno, F. J.
1971-01-01
The use of the handbook is discussed and the design progress is reviewed. A description of the problem is presented, and examples are given to illustrate the necessity for redundancy verification, along with the types of situations to which it is typically applied. Reusable space vehicles, such as the space shuttle, are recognized as being significant in the development of the automated redundancy verification problem.
The purpose of this SOP is to define the steps involved in data entry and data verification of physical forms. It applies to the data entry and data verification of all physical forms. The procedure defined herein was developed for use in the Arizona NHEXAS project and the "Bor...
Formal Verification for a Next-Generation Space Shuttle
NASA Technical Reports Server (NTRS)
Nelson, Stacy D.; Pecheur, Charles; Koga, Dennis (Technical Monitor)
2002-01-01
This paper discusses the verification and validation (V&V) of advanced software used for integrated vehicle health monitoring (IVHM), in the context of NASA's next-generation space shuttle. We survey the current V&V practice and standards used in selected NASA projects, review applicable formal verification techniques, and discuss their integration into existing development practice and standards. We also describe two verification tools, JMPL2SMV and Livingstone PathFinder, that can be used to thoroughly verify diagnosis applications that use model-based reasoning, such as the Livingstone system.
Fire safety experiments on MIR Orbital Station
NASA Technical Reports Server (NTRS)
Egorov, S. D.; Belayev, A. YU.; Klimin, L. P.; Voiteshonok, V. S.; Ivanov, A. V.; Semenov, A. V.; Zaitsev, E. N.; Balashov, E. V.; Andreeva, T. V.
1995-01-01
The process of heterogeneous combustion of most materials under zero-g without forced motion of air is practically impossible. However, ventilation is required to support astronauts' life and cool equipment. The presence of ventilation flows in station compartments can, in the event of accidental ignition, cause a fire. An additional, but exceedingly important, parameter of the fire risk of solid materials under zero-g is the minimum air flow velocity at which extinction of the material occurs. Therefore, the conception of fire safety can be based on temporarily lowering the intensity of ventilation and even turning it off. The information on the limiting conditions of combustion under natural conditions is needed from both scientific and practical points of view. It will enable us to judge the reliability of results of ground-based investigations and to develop a conception of fire safety for inhabited sealed compartments of space stations, to be provided by means of nontraditional and highly effective methods, without either employing large quantities of fire-extinguishing compounds or placing hard restrictions on the use of polymers. In this connection, an experimental installation was created to study the process of heterogeneous combustion of solid non-metals and to determine the conditions of its extinction under microgravity. This installation was delivered to the orbital station 'Mir', and the cosmonauts Viktorenko and Kondakova performed initial experiments on it in late 1994. The experimental installation consists of a combustion chamber with an electrical system for ignition of samples, a device for cleaning air of combustion products, an air suction unit, air pipes, and a control panel. The whole experiment is controlled by telemetry and recorded with two video cameras located at two different places. Besides the picture, parameters are recorded to determine the velocity of the air flow incoming to the samples, the time points of switching the devices on and off, etc. The combustion chamber temperature is also monitored. The main objectives of the experiments of this series were as follows: (1) verification of the reliability of the installation in orbital flight; (2) verification of the experimental procedure; and (3) investigation of the combustion of two types of materials under microgravity at various velocities of the incoming air flow.
TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) development
NASA Technical Reports Server (NTRS)
Shimamoto, Mike S.
1993-01-01
The development of an anthropomorphic, undersea manipulator system, the TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) is described. The TOPS system's design philosophy, which results from NRaD's experience in undersea vehicles and manipulator systems development and operations, is presented. The TOPS design approach, task teams, manipulator, and vision system development and results, conclusions, and recommendations are presented.
Development and Verification of the Charring Ablating Thermal Protection Implicit System Solver
NASA Technical Reports Server (NTRS)
Amar, Adam J.; Calvert, Nathan D.; Kirk, Benjamin S.
2010-01-01
The development and verification of the Charring Ablating Thermal Protection Implicit System Solver is presented. This work concentrates on the derivation and verification of the stationary grid terms in the equations that govern three-dimensional heat and mass transfer for charring thermal protection systems including pyrolysis gas flow through the porous char layer. The governing equations are discretized according to the Galerkin finite element method with first and second order implicit time integrators. The governing equations are fully coupled and are solved in parallel via Newton's method, while the fully implicit linear system is solved with the Generalized Minimal Residual method. Verification results from exact solutions and the Method of Manufactured Solutions are presented to show spatial and temporal orders of accuracy as well as nonlinear convergence rates.
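As a generic illustration of the solver pattern named in the abstract (and not the solver's own code), fully coupled nonlinear equations can be driven to a root with Newton's method wrapped around a GMRES linear solve, for example through SciPy's Jacobian-free newton_krylov. The toy boundary-value problem below is an assumption chosen only to exercise that pattern.

```python
# Newton-GMRES on a toy nonlinear two-point boundary-value problem:
# u'' = u + u**3 on (0, 1), u(0) = 1, u(1) = 0, central differences.
import numpy as np
from scipy.optimize import newton_krylov

N = 21
h = 1.0 / (N - 1)

def residual(u):
    r = np.empty_like(u)
    r[0] = u[0] - 1.0                                  # left boundary condition
    r[-1] = u[-1]                                      # right boundary condition
    r[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - (u[1:-1] + u[1:-1] ** 3)
    return r

u0 = np.linspace(1.0, 0.0, N)                          # initial guess matching the boundaries
u = newton_krylov(residual, u0, method="gmres", f_tol=1e-8)
print(u[:5])
```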
Development and Verification of the Charring, Ablating Thermal Protection Implicit System Simulator
NASA Technical Reports Server (NTRS)
Amar, Adam J.; Calvert, Nathan; Kirk, Benjamin S.
2011-01-01
The development and verification of the Charring Ablating Thermal Protection Implicit System Solver (CATPISS) is presented. This work concentrates on the derivation and verification of the stationary grid terms in the equations that govern three-dimensional heat and mass transfer for charring thermal protection systems including pyrolysis gas flow through the porous char layer. The governing equations are discretized according to the Galerkin finite element method (FEM) with first and second order fully implicit time integrators. The governing equations are fully coupled and are solved in parallel via Newton's method, while the linear system is solved via the Generalized Minimum Residual method (GMRES). Verification results from exact solutions and the Method of Manufactured Solutions (MMS) are presented to show spatial and temporal orders of accuracy as well as nonlinear convergence rates.
Application Agreement and Integration Services
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Hall, Brendan; Schweiker, Kevin
2013-01-01
Application agreement and integration services are required by distributed, fault-tolerant, safety-critical systems to assure required performance. An analysis of distributed and hierarchical agreement strategies is developed against the backdrop of observed agreement failures in fielded systems. The documented work was performed under NASA Task Order NNL10AB32T, Validation And Verification of Safety-Critical Integrated Distributed Systems Area 2. This document is intended to satisfy the requirements for deliverable 5.2.11 under Task 4.2.2.3. This report discusses the challenges of maintaining application agreement and integration services. A literature search is presented that documents previous work in the area of replica determinism. Sources of non-deterministic behavior are identified, and examples are presented where system-level agreement failed to be achieved. We then explore how TTEthernet services can be extended to supply some interesting application agreement frameworks. This document assumes that the reader is familiar with the TTEthernet protocol. The reader is advised to read the TTEthernet protocol standard [1] before reading this document. This document does not reiterate the content of the standard.
Flight Guidance System Requirements Specification
NASA Technical Reports Server (NTRS)
Miller, Steven P.; Tribble, Alan C.; Carlson, Timothy M.; Danielson, Eric J.
2003-01-01
This report describes a requirements specification written in the RSML-e language for the mode logic of a Flight Guidance System of a typical regional jet aircraft. This model was created as one of the first steps in a five-year project sponsored by the NASA Langley Research Center, Rockwell Collins Inc., and the Critical Systems Research Group of the University of Minnesota to develop new methods and tools to improve the safety of avionics designs. This model will be used to demonstrate the application of a variety of methods and techniques, including safety analysis of system and subsystem requirements, verification of key properties using theorem provers and model checkers, identification of potential sources of mode confusion in system designs, partitioning of applications based on the criticality of system hazards, and autogeneration of avionics-quality code. While this model is representative of the mode logic of a typical regional jet aircraft, it does not describe an actual or planned product. Several aspects of a full Flight Guidance System, such as recovery from failed sensors, have been omitted, and no claims are made regarding the accuracy or completeness of this specification.
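To make the notion of mode logic and property checking concrete, here is a toy lateral-mode machine with an invariant checked by brute-force enumeration of short event sequences. The modes, events, and transition rules are illustrative assumptions and are not taken from the RSML-e specification described in the report.

```python
# Toy flight-guidance lateral mode logic with an exhaustively checked invariant.
from itertools import product

MODES = {"ROLL", "HDG", "NAV"}
EVENTS = ["hdg_pressed", "nav_pressed", "nav_lost"]

def step(mode, event):
    if event == "hdg_pressed":
        return "HDG"
    if event == "nav_pressed":
        return "NAV"
    if event == "nav_lost" and mode == "NAV":
        return "ROLL"            # revert to the basic mode when NAV guidance is lost
    return mode

def check(depth=4):
    # invariant: after any event sequence, exactly one recognized lateral mode is active
    for seq in product(EVENTS, repeat=depth):
        mode = "ROLL"
        for ev in seq:
            mode = step(mode, ev)
            assert mode in MODES, f"invariant violated after {seq}"
    return True

print(check())
```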
NASA Astrophysics Data System (ADS)
Roed-Larsen, Trygve; Flach, Todd
The purpose of this chapter is to provide a review of existing national and international requirements for verification of greenhouse gas reductions and associated accreditation of independent verifiers. The credibility of results claimed to reduce or remove anthropogenic emissions of greenhouse gases (GHG) is of utmost importance for the success of emerging schemes to reduce such emissions. Requirements include transparency, accuracy, consistency, and completeness of the GHG data. The many independent verification processes that have developed recently now make up a quite elaborate tool kit of best practices. The UN Framework Convention on Climate Change and the Kyoto Protocol specifications for project mechanisms initiated this work, but other national and international actors also work intensely on these issues. One initiative gaining wide application is that taken by the World Business Council for Sustainable Development with the World Resources Institute to develop a "GHG Protocol" to assist companies in arranging for auditable monitoring and reporting processes for their GHG activities. A set of new international standards developed by the International Organization for Standardization (ISO) provides specifications for the quantification, monitoring, and reporting of company entity and project-based activities. The ISO is also developing specifications for recognizing independent GHG verifiers. This chapter covers this background with the intent of providing a common understanding of the efforts undertaken in different parts of the world to secure the reliability of GHG emission reduction and removal activities. These verification schemes may provide valuable input to current efforts to secure a comprehensive, trustworthy, and robust framework for verification activities of CO2 capture, transport, and storage.
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Requirement Specifications for a Design and Verification Unit.
ERIC Educational Resources Information Center
Pelton, Warren G.; And Others
A research and development activity to introduce new and improved education and training technology into Bureau of Medicine and Surgery training is recommended. The activity, called a design and verification unit, would be administered by the Education and Training Sciences Department. Initial research and development are centered on the…
Real-Time System Verification by Kappa-Induction
NASA Technical Reports Server (NTRS)
Pike, Lee S.
2005-01-01
We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
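A minimal sketch of k-induction itself, using the z3-solver package: the base case checks that no bad state is reachable within k steps from an initial state, and the inductive step checks that k consecutive good states cannot be followed by a bad one. The transition system (a wrapping counter) and the property are illustrative assumptions, not the reintegration protocol model.

```python
# k-induction over a toy transition system with z3 (pip install z3-solver).
from z3 import Int, Solver, And, Or, Not, unsat

def var(i):
    return Int(f"x_{i}")

def init(x):
    return x == 0

def trans(x, xn):
    # counter increments until it wraps from 10 back to 0
    return Or(And(x < 10, xn == x + 1), And(x >= 10, xn == 0))

def prop(x):
    return And(x >= 0, x <= 10)

def k_induction(k_max=5):
    for k in range(1, k_max + 1):
        xs = [var(i) for i in range(k + 1)]
        path = [trans(xs[i], xs[i + 1]) for i in range(k)]

        # base case: no property violation within the first k states from an initial state
        base = Solver()
        base.add(init(xs[0]), *path[:k - 1])
        base.add(Or([Not(prop(xs[i])) for i in range(k)]))
        if base.check() != unsat:
            return f"property falsified within {k} steps"

        # inductive step: k consecutive good states cannot reach a bad state
        step = Solver()
        step.add(*[prop(xs[i]) for i in range(k)], *path)
        step.add(Not(prop(xs[k])))
        if step.check() == unsat:
            return f"property proved by {k}-induction"
    return "inconclusive up to the chosen k"

print(k_induction())
```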
Spot: A Programming Language for Verified Flight Software
NASA Technical Reports Server (NTRS)
Bocchino, Robert L., Jr.; Gamble, Edward; Gostelow, Kim P.; Some, Raphael R.
2014-01-01
The C programming language is widely used for programming space flight software and other safety-critical real time systems. C, however, is far from ideal for this purpose: as is well known, it is both low-level and unsafe. This paper describes Spot, a language derived from C for programming space flight systems. Spot aims to maintain compatibility with existing C code while improving the language and supporting verification with the SPIN model checker. The major features of Spot include actor-based concurrency, distributed state with message passing and transactional updates, and annotations for testing and verification. Spot also supports domain-specific annotations for managing spacecraft state, e.g., communicating telemetry information to the ground. We describe the motivation and design rationale for Spot, give an overview of the design, provide examples of Spot's capabilities, and discuss the current status of the implementation.
Model Checking for Verification of Interactive Health IT Systems
Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui
2015-01-01
Rigorous methods for design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes their resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of it can be automated. PMID:26958166
Ocean Thermal Energy Conversion power system development. Phase I. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-12-04
This report covers the conceptual and preliminary design of closed-cycle, ammonia, ocean thermal energy conversion power plants by Westinghouse Electric Corporation. Preliminary designs for evaporator and condenser test articles (0.13 MWe size) and a 10 MWe modular experiment power system are described. Conceptual designs for 50 MWe power systems and 100 MWe power plants are also described. Design and cost algorithms were developed, and an optimized power system design at the 50 MWe size was completed. This design was modeled very closely in the test articles and in the 10 MWe Modular Application. Major component and auxiliary system design, materials, biofouling, control response, availability, safety, and cost aspects are developed with the greatest emphasis on the 10 MWe Modular Application Power System. It is concluded that all power plant subsystems are state-of-practice and require design verification only, rather than continued research. A complete test program, which verifies the mechanical reliability as well as thermal performance, is recommended and described.
Sezdi, Mana
2016-01-01
A maintenance program generated through the consideration of characteristics and failures of medical equipment is an important component of technology management. However, older technology devices and newer high-tech devices cannot be efficiently managed using the same strategies because of their different characteristics. This study aimed to generate a maintenance program comprising two different strategies to increase the efficiency of device management: preventive maintenance for older technology devices and predictive maintenance for newer high-tech devices. For preventive maintenance development, 589 older technology devices were subjected to performance verification and safety testing (PVST). For predictive maintenance development, the manufacturers' recommendations were used for 134 high-tech devices. These strategies were evaluated in terms of device reliability. This study recommends the use of two different maintenance strategies for old and new devices at hospitals in developing countries. In this way, older technology devices that previously received only corrective maintenance would be brought into a scheduled maintenance program, as high-tech devices already are. PMID:27195666
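As a rough illustration of how such a two-track program might be encoded, the short Python sketch below routes devices to a preventive or predictive strategy by device class; the class names, intervals, and field names are assumptions for the example, not values from the study.

    # Routing devices into the two maintenance tracks described above (illustrative only).
    from datetime import date, timedelta

    # Assumed policy: older-technology devices get scheduled preventive
    # maintenance with PVST; newer high-tech devices follow the
    # manufacturer-recommended (predictive) interval.
    PREVENTIVE_INTERVAL_DAYS = 180

    def plan_maintenance(device, today=None):
        today = today or date.today()
        if device["class"] == "older_technology":
            return {
                "strategy": "preventive",
                "tasks": ["performance verification", "safety testing (PVST)"],
                "next_due": today + timedelta(days=PREVENTIVE_INTERVAL_DAYS),
            }
        return {
            "strategy": "predictive",
            "tasks": ["manufacturer-recommended checks"],
            "next_due": today + timedelta(days=device["manufacturer_interval_days"]),
        }

    print(plan_maintenance({"class": "older_technology", "name": "infusion pump"}))
    print(plan_maintenance({"class": "high_tech", "name": "MRI scanner",
                            "manufacturer_interval_days": 90}))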