Software Safety Progress in NASA
NASA Technical Reports Server (NTRS)
Radley, Charles F.
1995-01-01
NASA has developed guidelines for development and analysis of safety-critical software. These guidelines have been documented in a Guidebook for Safety Critical Software Development and Analysis. The guidelines represent a practical 'how to' approach, to assist software developers and safety analysts in cost effective methods for software safety. They provide guidance in the implementation of the recent NASA Software Safety Standard NSS-1740.13 which was released as 'Interim' version in June 1994, scheduled for formal adoption late 1995. This paper is a survey of the methods in general use, resulting in the NASA guidelines for safety critical software development and analysis.
Certification Processes for Safety-Critical and Mission-Critical Aerospace Software
NASA Technical Reports Server (NTRS)
Nelson, Stacy
2003-01-01
This document is a quick reference guide with an overview of the processes required to certify safety-critical and mission-critical flight software at selected NASA centers and the FAA. Researchers and software developers can use this guide to jumpstart their understanding of how to get new or enhanced software onboard an aircraft or spacecraft. The introduction contains aerospace industry definitions of safety and safety-critical software, as well as, the current rationale for certification of safety-critical software. The Standards for Safety-Critical Aerospace Software section lists and describes current standards including NASA standards and RTCA DO-178B. The Mission-Critical versus Safety-Critical software section explains the difference between two important classes of software: safety-critical software involving the potential for loss of life due to software failure and mission-critical software involving the potential for aborting a mission due to software failure. The DO-178B Safety-critical Certification Requirements section describes special processes and methods required to obtain a safety-critical certification for aerospace software flying on vehicles under auspices of the FAA. The final two sections give an overview of the certification process used at Dryden Flight Research Center and the approval process at the Jet Propulsion Lab (JPL).
Software Safety Risk in Legacy Safety-Critical Computer Systems
NASA Technical Reports Server (NTRS)
Hill, Janice; Baggs, Rhoda
2007-01-01
Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts don't exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for ascertaining formally, a software safety risk assessment, that provides measurements for software safety for legacy systems which may or may not have a suite of software engineering documentation that is now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.
Product Engineering Class in the Software Safety Risk Taxonomy for Building Safety-Critical Systems
NASA Technical Reports Server (NTRS)
Hill, Janice; Victor, Daniel
2008-01-01
When software safety requirements are imposed on legacy safety-critical systems, retrospective safety cases need to be formulated as part of recertifying the systems for further use and risks must be documented and managed to give confidence for reusing the systems. The SEJ Software Development Risk Taxonomy [4] focuses on general software development issues. It does not, however, cover all the safety risks. The Software Safety Risk Taxonomy [8] was developed which provides a construct for eliciting and categorizing software safety risks in a straightforward manner. In this paper, we present extended work on the taxonomy for safety that incorporates the additional issues inherent in the development and maintenance of safety-critical systems with software. An instrument called a Software Safety Risk Taxonomy Based Questionnaire (TBQ) is generated containing questions addressing each safety attribute in the Software Safety Risk Taxonomy. Software safety risks are surfaced using the new TBQ and then analyzed. In this paper we give the definitions for the specialized Product Engineering Class within the Software Safety Risk Taxonomy. At the end of the paper, we present the tool known as the 'Legacy Systems Risk Database Tool' that is used to collect and analyze the data required to show traceability to a particular safety standard
NASA Technical Reports Server (NTRS)
Rosenberg, Linda
1997-01-01
If software is a critical element in a safety critical system, it is imperative to implement a systematic approach to software safety as an integral part of the overall system safety programs. The NASA-STD-8719.13A, "NASA Software Safety Standard", describes the activities necessary to ensure that safety is designed into software that is acquired or developed by NASA, and that safety is maintained throughout the software life cycle. A PDF version, is available on the WWW from Lewis. A Guidebook that will assist in the implementation of the requirements in the Safety Standard is under development at the Lewis Research Center (LeRC). After completion, it will also be available on the WWW from Lewis.
Traceability of Software Safety Requirements in Legacy Safety Critical Systems
NASA Technical Reports Server (NTRS)
Hill, Janice L.
2007-01-01
How can traceability of software safety requirements be created for legacy safety critical systems? Requirements in safety standards are imposed most times during contract negotiations. On the other hand, there are instances where safety standards are levied on legacy safety critical systems, some of which may be considered for reuse for new applications. Safety standards often specify that software development documentation include process-oriented and technical safety requirements, and also require that system and software safety analyses are performed supporting technical safety requirements implementation. So what can be done if the requisite documents for establishing and maintaining safety requirements traceability are not available?
NASA's Software Safety Standard
NASA Technical Reports Server (NTRS)
Ramsay, Christopher M.
2005-01-01
NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960's there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board that provided vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (STD-18l9.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.
A Software Safety Risk Taxonomy for Use in Retrospective Safety Cases
NASA Technical Reports Server (NTRS)
Hill, Janice L.
2007-01-01
Safety standards contain technical and process-oriented safely requirements. The best time to include these requirements is early in the development lifecycle of the system. When software safety requirements are levied on a legacy system after the fact, a retrospective safety case will need to be constructed for the software in the system. This can be a difficult task because there may be few to no art facts available to show compliance to the software safely requirements. The risks associated with not meeting safely requirements in a legacy safely-critical computer system must be addressed to give confidence for reuse. This paper introduces a proposal for a software safely risk taxonomy for legacy safely-critical computer systems, by specializing the Software Engineering Institute's 'Software Development Risk Taxonomy' with safely elements and attributes.
Software Design Improvements. Part 2; Software Quality and the Design and Inspection Process
NASA Technical Reports Server (NTRS)
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
1997-01-01
The application of assurance engineering techniques improves the duration of failure-free performance of software. The totality of features and characteristics of a software product are what determine its ability to satisfy customer needs. Software in safety-critical systems is very important to NASA. We follow the System Safety Working Groups definition for system safety software as: 'The optimization of system safety in the design, development, use and maintenance of software and its integration with safety-critical systems in an operational environment. 'If it is not safe, say so' has become our motto. This paper goes over methods that have been used by NASA to make software design improvements by focusing on software quality and the design and inspection process.
Software Dependability and Safety Evaluations ESA's Initiative
NASA Astrophysics Data System (ADS)
Hernek, M.
ESA has allocated funds for an initiative to evaluate Dependability and Safety methods of Software. The objectives of this initiative are; · More extensive validation of Safety and Dependability techniques for Software · Provide valuable results to improve the quality of the Software thus promoting the application of Dependability and Safety methods and techniques. ESA space systems are being developed according to defined PA requirement specifications. These requirements may be implemented through various design concepts, e.g. redundancy, diversity etc. varying from project to project. Analysis methods (FMECA. FTA, HA, etc) are frequently used during requirements analysis and design activities to assure the correct implementation of system PA requirements. The criticality level of failures, functions and systems is determined and by doing that the critical sub-systems are identified, on which dependability and safety techniques are to be applied during development. Proper performance of the software development requires the development of a technical specification for the products at the beginning of the life cycle. Such technical specification comprises both functional and non-functional requirements. These non-functional requirements address characteristics of the product such as quality, dependability, safety and maintainability. Software in space systems is more and more used in critical functions. Also the trend towards more frequent use of COTS and reusable components pose new difficulties in terms of assuring reliable and safe systems. Because of this, its dependability and safety must be carefully analysed. ESA identified and documented techniques, methods and procedures to ensure that software dependability and safety requirements are specified and taken into account during the design and development of a software system and to verify/validate that the implemented software systems comply with these requirements [R1].
Generalized implementation of software safety policies
NASA Technical Reports Server (NTRS)
Knight, John C.; Wika, Kevin G.
1994-01-01
As part of a research program in the engineering of software for safety-critical systems, we are performing two case studies. The first case study, which is well underway, is a safety-critical medical application. The second, which is just starting, is a digital control system for a nuclear research reactor. Our goal is to use these case studies to permit us to obtain a better understanding of the issues facing developers of safety-critical systems, and to provide a vehicle for the assessment of research ideas. The case studies are not based on the analysis of existing software development by others. Instead, we are attempting to create software for new and novel systems in a process that ultimately will involve all phases of the software lifecycle. In this abstract, we summarize our results to date in a small part of this project, namely the determination and classification of policies related to software safety that must be enforced to ensure safe operation. We hypothesize that this classification will permit a general approach to the implementation of a policy enforcement mechanism.
NASA's Software Safety Standard
NASA Technical Reports Server (NTRS)
Ramsay, Christopher M.
2007-01-01
NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960's there has hardly been a spacecraft launched that does not have a computer on board that will provide command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Orbiter, Mars Polar Lander, the DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependant on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those requirements. This allows the projects leeway to meet these requirements in many forms that best suit a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. This update also incorporated advances in the state of the practice of software safety from academia and private industry. It addresses some of the more common issues now facing software developers in the NASA environment such as the use of Commercial-Off-the-Shelf Software (COTS), Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update and it has had both NASA-wide internal reviews by software engineering, quality, safety, and project management. It has also had expert external review. This presentation and paper will discuss the new NASA Software Safety Standard, its organization, and key features. It will start with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes. It will then give a brief overview of the NASA Software Safety Process. This will include an overview of the key personnel responsibilities and functions that must be performed for safety-critical software.
Software-Based Safety Systems in Space - Learning from other Domains
NASA Astrophysics Data System (ADS)
Klicker, M.; Putzer, H.
2012-01-01
Increasing complexity and new emerging capabilities for manned and unmanned missions have been the hallmark of the past decades of space exploration. One of the drivers in this process was the ever increasing use of software and software-intensive systems to implement system functions necessary to the capabilities needed. The course of technological evolution suggests that this development will continue well into the future with a number of challenges for the safety community some of which shall be discussed in this paper. The current state of the art reveals a number of problems with developing and assessing safety critical software which explains the reluctance of the space community to rely on software-based safety measures to mitigate hazards. Among others, usually lack of trustworthy evidence of software integrity in all foreseeable situations and the difficulties to integrate software in the traditional safety analysis framework are cited. Experience from other domains and recent developments in modern software development methodologies and verification techniques are analysed for the suitability for space systems and an avionics architectural framework (see STANAG 4626) for the implementation of safety critical software is proposed. This is shown to create among other features the possibility of numerous degradation modes enhancing overall system safety and interoperability of computerized space systems. It also potentially simplifies international cooperation on a technical level by introducing a higher degree of compatibility. As software safety cannot be tested or argued into a system in hindsight, the development process and especially the architecture chosen are essential to establish safety properties for the software used to implement safety functions. The core of the safety argument revolves around the separation of different functions and software modules from each other by minimal coupling of functions and credible separation mechanisms in the architecture combined with rigorous development methodologies for the software itself.
Testing of Safety-Critical Software Embedded in an Artificial Heart
NASA Astrophysics Data System (ADS)
Cha, Sungdeok; Jeong, Sehun; Yoo, Junbeom; Kim, Young-Gab
Software is being used more frequently to control medical devices such as artificial heart or robotic surgery system. While much of software safety issues in such systems are similar to other safety-critical systems (e.g., nuclear power plants), domain-specific properties may warrant development of customized techniques to demonstrate fitness of the system on patients. In this paper, we report results of a preliminary analysis done on software controlling a Hybrid Ventricular Assist Device (H-VAD) developed by Korea Artificial Organ Centre (KAOC). It is a state-of-the-art artificial heart which completed animal testing phase. We performed software testing in in-vitro experiments and animal experiments. An abnormal behaviour, never detected during extensive in-vitro analysis and animal testing, was found.
Four Pillars for Improving the Quality of Safety-Critical Software-Reliant Systems
2013-04-01
Studies of safety-critical software-reliant systems developed using the current practices of build-then-test show that requirements and architecture ... design defects make up approximately 70% of all defects, many system level related to operational quality attributes, and 80% of these defects are
Applying formal methods and object-oriented analysis to existing flight software
NASA Technical Reports Server (NTRS)
Cheng, Betty H. C.; Auernheimer, Brent
1993-01-01
Correctness is paramount for safety-critical software control systems. Critical software failures in medical radiation treatment, communications, and defense are familiar to the public. The significant quantity of software malfunctions regularly reported to the software engineering community, the laws concerning liability, and a recent NRC Aeronautics and Space Engineering Board report additionally motivate the use of error-reducing and defect detection software development techniques. The benefits of formal methods in requirements driven software development ('forward engineering') is well documented. One advantage of rigorously engineering software is that formal notations are precise, verifiable, and facilitate automated processing. This paper describes the application of formal methods to reverse engineering, where formal specifications are developed for a portion of the shuttle on-orbit digital autopilot (DAP). Three objectives of the project were to: demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.
NASA Technical Reports Server (NTRS)
Graydon, Patrick J.; Holloway, C. M.
2015-01-01
Safe use of software in safety-critical applications requires well-founded means of determining whether software is fit for such use. While software in industries such as aviation has a good safety record, little is known about whether standards for software in safety-critical applications 'work' (or even what that means). It is often (implicitly) argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard works, such reliance is an experiment; without carefully collecting assessment data, that experiment is unplanned. To help plan the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on the workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to 'work,' and key assessment strategies and study techniques and the pros and cons of each. Finally, we conclude with thoughts about the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome the noted barriers.
NASA Astrophysics Data System (ADS)
Dulo, D. A.
Safety critical software systems permeate spacecraft, and in a long term venture like a starship would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure in long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe reliable software systems through focusing not on the traditional safety/reliability engineering paradigms but rather by focusing on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time as a safety valve should failure occur to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software system have adopted many of these new methods while striving to achieve high levels Of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software
NASA Technical Reports Server (NTRS)
Graydon, Patrick J.; Holloway, C. Michael
2015-01-01
We need well-founded means of determining whether software is t for use in safety-critical applications. While software in industries such as aviation has an excellent safety record, the fact that software aws have contributed to deaths illustrates the need for justi ably high con dence in software. It is often argued that software is t for safety-critical use because it conforms to a standard for software in safety-critical systems. But little is known about whether such standards `work.' Reliance upon a standard without knowing whether it works is an experiment; without collecting data to assess the standard, this experiment is unplanned. This paper reports on a workshop intended to explore how standards could practicably be assessed. Planning the Unplanned Experiment: Assessing the Ecacy of Standards for Safety Critical Software (AESSCS) was held on 13 May 2014 in conjunction with the European Dependable Computing Conference (EDCC). We summarize and elaborate on the workshop's discussion of the topic, including both the presented positions and the dialogue that ensued.
GPM Timeline Inhibits For IT Processing
NASA Technical Reports Server (NTRS)
Dion, Shirley K.
2014-01-01
The Safety Inhibit Timeline Tool was created as one approach to capturing and understanding inhibits and controls from IT through launch. Global Precipitation Measurement (GPM) Mission, which launched from Japan in March 2014, was a joint mission under a partnership between the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM was one of the first NASA Goddard in-house programs that extensively used software controls. Using this tool during the GPM buildup allowed a thorough review of inhibit and safety critical software design for hazardous subsystems such as the high gain antenna boom, solar array, and instrument deployments, transmitter turn-on, propulsion system release, and instrument radar turn-on. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As a result of this process, a new tool safety inhibit timeline was created for management of inhibits and their controls during spacecraft buildup and testing during IT at GSFC and at the launch range in Japan. The Safety Inhibit Timeline Tool was a pathfinder approach for reviewing software that controls the electrical inhibits. The Safety Inhibit Timeline Tool strengthens the Safety Analysts understanding of the removal of inhibits during the IT process with safety critical software. With this tool, the Safety Analyst can confirm proper safe configuration of a spacecraft during each IT test, track inhibit and software configuration changes, and assess software criticality. In addition to understanding inhibits and controls during IT, the tool allows the Safety Analyst to better communicate to engineers and management the changes in inhibit states with each phase of hardware and software testing and the impact of safety risks. Lessons learned from participating in the GPM campaign at NASA and JAXA will be discussed during this session.
V&V Within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and Validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission-critical software. V&V is a systems engineering discipline that evaluates the software in a systems context, and is currently applied during the development of a specific application system. In order to bring the effectiveness of V&V to bear within reuse-based software engineering, V&V must be incorporated within the domain engineering process.
Software development for safety-critical medical applications
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
There are many computer-based medical applications in which safety and not reliability is the overriding concern. Reduced, altered, or no functionality of such systems is acceptable as long as no harm is done. A precise, formal definition of what software safety means is essential, however, before any attempt can be made to achieve it. Without this definition, it is not possible to determine whether a specific software entity is safe. A set of definitions pertaining to software safety will be presented and a case study involving an experimental medical device will be described. Some new techniques aimed at improving software safety will also be discussed.
Agile Methods for Open Source Safety-Critical Software
Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John
2011-01-01
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion. PMID:21799545
Agile Methods for Open Source Safety-Critical Software.
Gary, Kevin; Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John
2011-08-01
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the rightamount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion.
Concept Development for Software Health Management
NASA Technical Reports Server (NTRS)
Riecks, Jung; Storm, Walter; Hollingsworth, Mark
2011-01-01
This report documents the work performed by Lockheed Martin Aeronautics (LM Aero) under NASA contract NNL06AA08B, delivery order NNL07AB06T. The Concept Development for Software Health Management (CDSHM) program was a NASA funded effort sponsored by the Integrated Vehicle Health Management Project, one of the four pillars of the NASA Aviation Safety Program. The CD-SHM program focused on defining a structured approach to software health management (SHM) through the development of a comprehensive failure taxonomy that is used to characterize the fundamental failure modes of safety-critical software.
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2016-01-01
To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASAs critical software (SW) systems with the software industry rapidly transitioning from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA) established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes, 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices, 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems, 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.
Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1993-01-01
This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non- safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.
A Validation Metrics Framework for Safety-Critical Software-Intensive Systems
2009-03-01
so does its definition, tools, and techniques, including means for measuring the validation activity, its outputs, and impact on development...independent of the SDLP. When considering the above SDLPs from the safety engineering team’s perspective, there are also large impacts on the way... impact . Interpretation of any actionable metric data will need to be undertaken in the context of the SDLP. 2. Safety Input The software safety
Certification of COTS Software in NASA Human Rated Flight Systems
NASA Technical Reports Server (NTRS)
Goforth, Andre
2012-01-01
Adoption of commercial off-the-shelf (COTS) products in safety critical systems has been seen as a promising acquisition strategy to improve mission affordability and, yet, has come with significant barriers and challenges. Attempts to integrate COTS software components into NASA human rated flight systems have been, for the most part, complicated by verification and validation (V&V) requirements necessary for flight certification per NASA s own standards. For software that is from COTS sources, and, in general from 3rd party sources, either commercial, government, modified or open source, the expectation is that it meets the same certification criteria as those used for in-house and that it does so as if it were built in-house. The latter is a critical and hidden issue. This paper examines the longstanding barriers and challenges in the use of 3rd party software in safety critical systems and cover recent efforts to use COTS software in NASA s Multi-Purpose Crew Vehicle (MPCV) project. It identifies some core artifacts that without them, the use of COTS and 3rd party software is, for all practical purposes, a nonstarter for affordable and timely insertion into flight critical systems. The paper covers the first use in a flight critical system by NASA of COTS software that has prior FAA certification heritage, which was shown to meet the RTCA-DO-178B standard, and how this certification may, in some cases, be leveraged to allow the use of analysis in lieu of testing. Finally, the paper proposes the establishment of an open source forum for development of safety critical 3rd party software.
Development of a software safety process and a case study of its use
NASA Technical Reports Server (NTRS)
Knight, John C.
1993-01-01
The goal of this research is to continue the development of a comprehensive approach to software safety and to evaluate the approach with a case study. The case study is a major part of the project, and it involves the analysis of a specific safety-critical system from the medical equipment domain. The particular application being used was selected because of the availability of a suitable candidate system. We consider the results to be generally applicable and in no way particularly limited by the domain. The research is concentrating on issues raised by the specification and verification phases of the software lifecycle since they are central to our previously-developed rigorous definitions of software safety. The theoretical research is based on our framework of definitions for software safety. In the area of specification, the main topics being investigated are the development of techniques for building system fault trees that correctly incorporate software issues and the development of rigorous techniques for the preparation of software safety specifications. The research results are documented. Another area of theoretical investigation is the development of verification methods tailored to the characteristics of safety requirements. Verification of the correct implementation of the safety specification is central to the goal of establishing safe software. The empirical component of this research is focusing on a case study in order to provide detailed characterizations of the issues as they appear in practice, and to provide a testbed for the evaluation of various existing and new theoretical results, tools, and techniques. The Magnetic Stereotaxis System is summarized.
2014-12-01
appears that UML is becoming the de facto MBD language. OMG® states the following on the MDA® FAQ page: “Although not formally required [for MBD], UML...a known limitation [42], so UML users should plan accordingly, especially for safety-critical programs. For example, “models are not used to...description of the MBD tool chain can be produced. That description could be resident in a Plan for Software Aspects of Certification (PSAC) or Software
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies a root as use in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integrator and system testing.
Autonomy Software: V&V Challenges and Characteristics
NASA Technical Reports Server (NTRS)
Schumann, Johann; Visser, Willem
2006-01-01
The successful operation of unmanned air vehicles requires software with a high degree of autonomy. Only if high level functions can be carried out without human control and intervention, complex missions in a changing and potentially unknown environment can be carried out successfully. Autonomy software is highly mission and safety critical: failures, caused by flaws in the software cannot only jeopardize the mission, but could also endanger human life (e.g., a crash of an UAV in a densely populated area). Due to its large size, high complexity, and use of specialized algorithms (planner, constraint-solver, etc.), autonomy software poses specific challenges for its verification, validation, and certification. -- - we have carried out a survey among researchers aid scientists at NASA to study these issues. In this paper, we will present major results of this study, discussing the broad spectrum. of notions and characteristics of autonomy software and its challenges for design and development. A main focus of this survey was to evaluate verification and validation (V&V) issues and challenges, compared to the development of "traditional" safety-critical software. We will discuss important issues in V&V of autonomous software and advanced V&V tools which can help to mitigate software risks. Results of this survey will help to identify and understand safety concerns in autonomy software and will lead to improved strategies for mitigation of these risks.
Mitigating Motion Base Safety Issues: The NASA LaRC CMF Implementation
NASA Technical Reports Server (NTRS)
Bryant, Richard B., Jr.; Grupton, Lawrence E.; Martinez, Debbie; Carrelli, David J.
2005-01-01
The NASA Langley Research Center (LaRC), Cockpit Motion Facility (CMF) motion base design has taken advantage of inherent hydraulic characteristics to implement safety features using hardware solutions only. Motion system safety has always been a concern and its implementation is addressed differently by each organization. Some approaches rely heavily on software safety features. Software which performs safety functions is subject to more scrutiny making its approval, modification, and development time consuming and expensive. The NASA LaRC's CMF motion system is used for research and, as such, requires that the software be updated or modified frequently. The CMF's customers need the ability to update the simulation software frequently without the associated cost incurred with safety critical software. This paper describes the CMF engineering team's approach to achieving motion base safety by designing and implementing all safety features in hardware, resulting in applications software (including motion cueing and actuator dynamic control) being completely independent of the safety devices. This allows the CMF safety systems to remain intact and unaffected by frequent research system modifications.
Building Safer Systems With SpecTRM
NASA Technical Reports Server (NTRS)
2003-01-01
System safety, an integral component in software development, often poses a challenge to engineers designing computer-based systems. While the relaxed constraints on software design allow for increased power and flexibility, this flexibility introduces more possibilities for error. As a result, system engineers must identify the design constraints necessary to maintain safety and ensure that the system and software design enforces them. Safeware Engineering Corporation, of Seattle, Washington, provides the information, tools, and techniques to accomplish this task with its Specification Tools and Requirements Methodology (SpecTRM). NASA assisted in developing this engineering toolset by awarding the company several Small Business Innovation Research (SBIR) contracts with Ames Research Center and Langley Research Center. The technology benefits NASA through its applications for Space Station rendezvous and docking. SpecTRM aids system and software engineers in developing specifications for large, complex safety critical systems. The product enables engineers to find errors early in development so that they can be fixed with the lowest cost and impact on the system design. SpecTRM traces both the requirements and design rationale (including safety constraints) throughout the system design and documentation, allowing engineers to build required system properties into the design from the beginning, rather than emphasizing assessment at the end of the development process when changes are limited and costly.System safety, an integral component in software development, often poses a challenge to engineers designing computer-based systems. While the relaxed constraints on software design allow for increased power and flexibility, this flexibility introduces more possibilities for error. As a result, system engineers must identify the design constraints necessary to maintain safety and ensure that the system and software design enforces them. Safeware Engineering Corporation, of Seattle, Washington, provides the information, tools, and techniques to accomplish this task with its Specification Tools and Requirements Methodology (SpecTRM). NASA assisted in developing this engineering toolset by awarding the company several Small Business Innovation Research (SBIR) contracts with Ames Research Center and Langley Research Center. The technology benefits NASA through its applications for Space Station rendezvous and docking. SpecTRM aids system and software engineers in developing specifications for large, complex safety critical systems. The product enables engineers to find errors early in development so that they can be fixed with the lowest cost and impact on the system design. SpecTRM traces both the requirements and design rationale (including safety constraints) throughout the system design and documentation, allowing engineers to build required system properties into the design from the beginning, rather than emphasizing assessment at the end of the development process when changes are limited and costly.
Software Safety Risk in Legacy Safety-Critical Computer Systems
NASA Technical Reports Server (NTRS)
Hill, Janice L.; Baggs, Rhoda
2007-01-01
Safety Standards contain technical and process-oriented safety requirements. Technical requirements are those such as "must work" and "must not work" functions in the system. Process-Oriented requirements are software engineering and safety management process requirements. Address the system perspective and some cover just software in the system > NASA-STD-8719.13B Software Safety Standard is the current standard of interest. NASA programs/projects will have their own set of safety requirements derived from the standard. Safety Cases: a) Documented demonstration that a system complies with the specified safety requirements. b) Evidence is gathered on the integrity of the system and put forward as an argued case. [Gardener (ed.)] c) Problems occur when trying to meet safety standards, and thus make retrospective safety cases, in legacy safety-critical computer systems.
Development of a methodology for assessing the safety of embedded software systems
NASA Technical Reports Server (NTRS)
Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.
1993-01-01
A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different point in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety critical software functions.
Automated Transfer Vehicle (ATV) Critical Safety Software Overview
NASA Astrophysics Data System (ADS)
Berthelier, D.
2002-01-01
The European Automated Transfer Vehicle is an unmanned transportation system designed to dock to International Space Station (ISS) and to contribute to the logistic servicing of the ISS. Concisely, ATV control is realized by a nominal flight control function (using computers, softwares, sensors, actuators). In order to cover the extreme situations where this nominal chain can not ensure safe trajectory with respect to ISS, a segregated proximity flight safety function is activated, where unsafe free drift trajectories can be encountered. This function relies notably on a segregated computer, the Monitoring and Safing Unit (MSU) ; in case of major ATV malfunction detection, ATV is then controlled by MSU software. Therefore, this software is critical because a MSU software failure could result in catastrophic consequences. This paper provides an overview both of this software functions and of the software development and validation method which is specific considering its criticality. First part of the paper describes briefly the proximity flight safety chain. Second part deals with the software functions. Indeed, MSU software is in charge of monitoring nominal computers and ATV corridors, using its own navigation algorithms, and, if an abnormal situation is detected, it is in charge of the ATV control during the Collision Avoidance Manoeuvre (CAM) consisting in an attitude controlled braking boost, followed by a Post-CAM manoeuvre : a Sun-pointed ATV attitude control during up to 24 hours on a safe trajectory. Monitoring, navigation and control algorithms principles are presented. Third part of this paper describes the development and validation process : algorithms functional studies , ADA coding and unit validations ; algorithms ADA code integration and validation on a specific non real-time MATLAB/SIMULINK simulator ; global software functional engineering phase, architectural design, unit testing, integration and validation on target computer.
MISSION: Mission and Safety Critical Support Environment. Executive overview
NASA Technical Reports Server (NTRS)
Mckay, Charles; Atkinson, Colin
1992-01-01
For mission and safety critical systems it is necessary to: improve definition, evolution and sustenance techniques; lower development and maintenance costs; support safe, timely and affordable system modifications; and support fault tolerance and survivability. The goal of the MISSION project is to lay the foundation for a new generation of integrated systems software providing a unified infrastructure for mission and safety critical applications and systems. This will involve the definition of a common, modular target architecture and a supporting infrastructure.
ERIC Educational Resources Information Center
Drachova-Strang, Svetlana V.
2013-01-01
As computing becomes ubiquitous, software correctness has a fundamental role in ensuring the safety and security of the systems we build. To design and develop software correctly according to their formal contracts, CS students, the future software practitioners, need to learn a critical set of skills that are necessary and sufficient for…
The Application of V&V within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward
1996-01-01
Verification and Validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In reuse-based software engineering, decisions on the requirements, design and even implementation of domain assets can can be made prior to beginning development of a specific system. in order to bring the effectiveness of V&V to bear within reuse-based software engineering. V&V must be incorporated within the domain engineering process.
Data systems and computer science: Software Engineering Program
NASA Technical Reports Server (NTRS)
Zygielbaum, Arthur I.
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.
Evolution of safety-critical requirements post-launch
NASA Technical Reports Server (NTRS)
Lutz, R. R.; Mikulski, I. C.
2001-01-01
This paper reports the results of a small study of requirements changes to the onboard software of three spacecraft subsequent to launch. Only those requirement changes that resulted from post-launch anoma-lies (i.e., during operations) were of interest here, since the goal was to better understand the relation-ship between critical anomalies during operations and how safety-critical requirements evolve. The results of the study were surprising in that anomaly-driven, post-launch requirements changes were rarely due to previous requirements having been incorrect. Instead, changes involved new requirements (1) for the software to handle rare events or (2) for the software to compensate for hardware failures or limitations. The prevalence of new requirements as a result of post-launch anomalies suggests a need for increased requirements-engineering support of maintenance activities in these systems. The results also confirm both the difficulty and the benefits of pursuing requirements completeness, especially in terms of fault tolerance, during development of critical systems.
Implementing Software Safety in the NASA Environment
NASA Technical Reports Server (NTRS)
Wetherholt, Martha S.; Radley, Charles F.
1994-01-01
Until recently, NASA did not consider allowing computers total control of flight systems. Human operators, via hardware, have constituted the ultimate safety control. In an attempt to reduce costs, NASA has come to rely more and more heavily on computers and software to control space missions. (For example. software is now planned to control most of the operational functions of the International Space Station.) Thus the need for systematic software safety programs has become crucial for mission success. Concurrent engineering principles dictate that safety should be designed into software up front, not tested into the software after the fact. 'Cost of Quality' studies have statistics and metrics to prove the value of building quality and safety into the development cycle. Unfortunately, most software engineers are not familiar with designing for safety, and most safety engineers are not software experts. Software written to specifications which have not been safety analyzed is a major source of computer related accidents. Safer software is achieved step by step throughout the system and software life cycle. It is a process that includes requirements definition, hazard analyses, formal software inspections, safety analyses, testing, and maintenance. The greatest emphasis is placed on clearly and completely defining system and software requirements, including safety and reliability requirements. Unfortunately, development and review of requirements are the weakest link in the process. While some of the more academic methods, e.g. mathematical models, may help bring about safer software, this paper proposes the use of currently approved software methodologies, and sound software and assurance practices to show how, to a large degree, safety can be designed into software from the start. NASA's approach today is to first conduct a preliminary system hazard analysis (PHA) during the concept and planning phase of a project. This determines the overall hazard potential of the system to be built. Shortly thereafter, as the system requirements are being defined, the second iteration of hazard analyses takes place, the systems hazard analysis (SHA). During the systems requirements phase, decisions are made as to what functions of the system will be the responsibility of software. This is the most critical time to affect the safety of the software. From this point, software safety analyses as well as software engineering practices are the main focus for assuring safe software. While many of the steps proposed in this paper seem like just sound engineering practices, they are the best technical and most cost effective means to assure safe software within a safe system.
The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.
Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin
2007-11-01
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasizes safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
Analyzing Software Errors in Safety-Critical Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1994-01-01
This paper analyzes the root causes of safty-related software faults identified as potentially hazardous to the system are distributed somewhat differently over the set of possible error causes than non-safety-related software faults.
The Need for V&V in Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
V&V is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to entire' domain or product line rather than a critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. engineering. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for activities.
2014-01-01
In the current practice, to determine the safety factor of a slope with two-dimensional circular potential failure surface, one of the searching methods for the critical slip surface is Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However GA needs to be validated with more numeric tests, while Fellenius' slices method is just an approximate method like finite element method. This paper proposed a new method to determine the minimum slope safety factor which is the determination of slope safety factor with analytical solution and searching critical slip surface with Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random pick to utilize mutation. A computer automatic search program is developed for the Genetic-Traversal Random Method. After comparison with other methods like slope/w software, results indicate that the Genetic-Traversal Random Search Method can give very low safety factor which is about half of the other methods. However the obtained minimum safety factor with Genetic-Traversal Random Search Method is very close to the lower bound solutions of slope safety factor given by the Ansys software. PMID:24782679
14 CFR 417.123 - Computing systems and software.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...
14 CFR 417.123 - Computing systems and software.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...
14 CFR 417.123 - Computing systems and software.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...
14 CFR 417.123 - Computing systems and software.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...
14 CFR 417.123 - Computing systems and software.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Stephen B.
2010-01-01
Software plays an increasingly larger role in all aspects of NASA's science missions. This has been extended to the identification, management and control of faults which affect safety-critical functions and by default, the overall success of the mission. Traditionally, the analysis of fault identification, management and control are hardware based. Due to the increasing complexity of system, there has been a corresponding increase in the complexity in fault management software. The NASA Independent Validation & Verification (IV&V) program is creating processes and procedures to identify, and incorporate safety-critical software requirements along with corresponding software faults so that potential hazards may be mitigated. This Specific to Generic ... A Case for Reuse paper describes the phases of a dependability and safety study which identifies a new, process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and, their transformation from specific to generic software faults. This approach also has applications to other systems outside of the NASA environment. This paper addresses how a mission specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission with an emphasis on software fault conditions.
A Framework for Performing Verification and Validation in Reuse Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smidts, Carol; Huang, Funqun; Li, Boyuan
With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software raises trust in these systems as a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges since software systems’ characteristics differ from those of hardware systems. The lack of systematic science-based methods for quantifying the dependability attributes in software-based instrumentation as well as control systems in safety critical applications has proved itself to be amore » significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered as a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between various dependability attributes as a whole and of how such dependencies are formed. To address the need for quantification and give a more objective basis to the review process -- therefore reducing regulatory uncertainty -- measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus, obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below. Each stage is further examined in its corresponding chapter. Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method and the causal maps obtained. Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute. The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g. security failure. The extraction was based on the analysis of expert elicitation results obtained in Stage 1. Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter. Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and what measures are further required. Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews. Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. Ability to derive such a quantification model shows that the causal models and measurements derived from the previous stages (Stage 1 to Stage 5) can form the technical basis for developing dependability quantification models. Scope restrictions have led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e. the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost effective selection of the duration of the requirements phase.« less
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
Formal Modeling and Analysis of a Preliminary Small Aircraft Transportation System (SATS)Concept
NASA Technical Reports Server (NTRS)
Carrreno, Victor A.; Gottliebsen, Hanne; Butler, Ricky; Kalvala, Sara
2004-01-01
New concepts for automating air traffic management functions at small non-towered airports raise serious safety issues associated with the software implementations and their underlying key algorithms. The criticality of such software systems necessitates that strong guarantees of the safety be developed for them. In this paper we present a formal method for modeling and verifying such systems using the PVS theorem proving system. The method is demonstrated on a preliminary concept of operation for the Small Aircraft Transportation System (SATS) project at NASA Langley.
Automated Transfer Vehicle Proximity Flight Safety Overview
NASA Astrophysics Data System (ADS)
Cornier, Dominique; Berthelier, David; Requiston, Helene; Zekri, Eric; Chase, Richard
2005-12-01
The European Automated Transfer Vehicle (ATV) is an unmanned transportation spacecraft designed to contribute to the logistic servicing of the ISS. The ATV will be launched by ARIANE 5 and, after phasing and rendezvous maneuvers, it autonomously docks to the International Space Station (ISS).The ATV control is nominally handled by the Guidance, Navigation and Control (GNC) function using computers, software, sensors and actuators. During rendezvous operations, in order to cover the extreme situations where the GNC function fails to ensure a safe trajectory with respect to the ISS, a segregated Proximity Flight Safety (PFS) function is activated : this function will initiate a collision avoidance maneuver which will place the ATV on a trajectory ensuring safety with respect to the ISS. The PFS function relies on segregated computers, the Monitoring and Safing Units (MSUs) running specific software, on four dedicated thrusters, on dedicated batteries and on specific interfaces with ATV gyrometers.The PFS function being the ultimate protection to ensure ISS safety in case of ATV malfunction, specific rules have been applied to its implementation, in particular for the development of the MSU software, which is critical since any failure of this software may result in catastrophic consequences.This paper provides an overview of the ATV Proximity Flight Safety function. After a short description of the overall ATV avionics architecture and its rationale, the second part of the paper presents more details on the PFS function both in terms of hardware and software implementation. The third part of the paper is dedicated to the MSU software validation method that is specific considering its criticality. The last part of the paper provides information on the different operations related to the use of the PFS function during an ATV flight.
Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control
NASA Technical Reports Server (NTRS)
Schumann, Johann; Mbaya, Timmy; Menghoel, Ole
2011-01-01
Modern aircraft, both piloted fly-by-wire commercial aircraft as well as UAVs, more and more depend on highly complex safety critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We will focus on the approach to develop reliable and robust health models for the combined software and sensor systems.
Issues in Software System Safety: Polly Ann Smith Co. versus Ned I. Ludd
NASA Technical Reports Server (NTRS)
Holloway, C. Michael
2002-01-01
This paper is a work of fiction, but it is fiction with a very real purpose: to stimulate careful thought and friendly discussion about some questions for which thought is often careless and discussion is often unfriendly. To accomplish this purpose, the paper creates a fictional legal case. The most important issue in this fictional case is whether certain proffered expert testimony about software engineering for safety critical systems should be admitted. Resolving this issue requires deciding the extent to which current practices and research in software engineering, especially for safety-critical systems, can rightly be considered based on knowledge, rather than opinion.
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
Software System Safety and the NASA Aeronautics Blueprint
NASA Technical Reports Server (NTRS)
Holloway, C. Michael; Hayhurst, Kelly J.
2002-01-01
NASA's Aeronautics Blueprint lays out a research agenda for the Agency s aeronautics program. The word software appears only four times in this Blueprint, but the critical importance of safe and correct software to the fulfillment of the proposed research is evident on almost every page. Most of the technology solutions proposed to address challenges in aviation are software dependent technologies. Of the fifty-two specific technology solutions described in the Blueprint, forty-one depend, at least in part, on software for success. For thirty-five of these forty-one, software is not only critical to success, but also to human safety. That is, implementing the technology solutions will require using software in such a way that it may, if not specified, designed, and implemented properly, lead to fatal accidents. These results have at least two implications for the research based on the Blueprint: (1) knowledge about the current state-of-the-art and state-of-the-practice in software engineering and software system safety is essential, and (2) research into current unsolved problems in these software disciplines is also essential.
Tools Ensure Reliability of Critical Software
NASA Technical Reports Server (NTRS)
2012-01-01
In November 2006, after attempting to make a routine maneuver, NASA's Mars Global Surveyor (MGS) reported unexpected errors. The onboard software switched to backup resources, and a 2-day lapse in communication took place between the spacecraft and Earth. When a signal was finally received, it indicated that MGS had entered safe mode, a state of restricted activity in which the computer awaits instructions from Earth. After more than 9 years of successful operation gathering data and snapping pictures of Mars to characterize the planet's land and weather communication between MGS and Earth suddenly stopped. Months later, a report from NASA's internal review board found the spacecraft's battery failed due to an unfortunate sequence of events. Updates to the spacecraft's software, which had taken place months earlier, were written to the wrong memory address in the spacecraft's computer. In short, the mission ended because of a software defect. Over the last decade, spacecraft have become increasingly reliant on software to carry out mission operations. In fact, the next mission to Mars, the Mars Science Laboratory, will rely on more software than all earlier missions to Mars combined. According to Gerard Holzmann, manager at the Laboratory for Reliable Software (LaRS) at NASA's Jet Propulsion Laboratory (JPL), even the fault protection systems on a spacecraft are mostly software-based. For reasons like these, well-functioning software is critical for NASA. In the same year as the failure of MGS, Holzmann presented a new approach to critical software development to help reduce risk and provide consistency. He proposed The Power of 10: Rules for Developing Safety-Critical Code, which is a small set of rules that can easily be remembered, clearly relate to risk, and allow compliance to be verified. The reaction at JPL was positive, and developers in the private sector embraced Holzmann's ideas.
Application of SAE ARP4754A to Flight Critical Systems
NASA Technical Reports Server (NTRS)
Peterson, Eric M.
2015-01-01
This report documents applications of ARP4754A to the development of modern computer-based (i.e., digital electronics, software and network-based) aircraft systems. This study is to offer insight and provide educational value relative to the guidelines in ARP4754A and provide an assessment of the current state-of-the- practice within industry and regulatory bodies relative to development assurance for complex and safety-critical computer-based aircraft systems.
A Practical Approach to Modified Condition/Decision Coverage
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Veerhusem, Dan S.
2001-01-01
Testing of software intended for safety-critical applications in commercial transport aircraft must achieve modified condition/decision coverage (MC/DC) of the software structure. This requirement causes anxiety for many within the aviation software community. Results of a survey of the aviation software industry indicate that many developers believe that meeting the MC/DC requirement is difficult, and the cost is exorbitant. Some of the difficulties stem, no doubt, from the scant information available on the subject. This paper provides a practical 5-step approach for assessing MC/DC for aviation software products, and an analysis of some types of errors expected to be caught when MC/DC is achieved1.
Deriving Safety Cases from Machine-Generated Proofs
NASA Technical Reports Server (NTRS)
Basir, Nurlida; Fischer, Bernd; Denney, Ewen
2009-01-01
Proofs provide detailed justification for the validity of claims and are widely used in formal software development methods. However, they are often complex and difficult to understand, because they use machine-oriented formalisms; they may also be based on assumptions that are not justified. This causes concerns about the trustworthiness of using formal proofs as arguments in safety-critical applications. Here, we present an approach to develop safety cases that correspond to formal proofs found by automated theorem provers and reveal the underlying argumentation structure and top-level assumptions. We concentrate on natural deduction proofs and show how to construct the safety cases by covering the proof tree with corresponding safety case fragments.
A Framework for Performing V&V within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
Software Graphics Processing Unit (sGPU) for Deep Space Applications
NASA Technical Reports Server (NTRS)
McCabe, Mary; Salazar, George; Steele, Glen
2015-01-01
A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggest they cannot operate in the deep space radiation environment. Investigation into an Software Graphics Processing Unit (sGPU)comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.
Statechart Analysis with Symbolic PathFinder
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.
2012-01-01
We report here on our on-going work that addresses the automated analysis and test case generation for software systems modeled using multiple Statechart formalisms. The work is motivated by large programs such as NASA Exploration, that involve multiple systems that interact via safety-critical protocols and are designed with different Statechart variants. To verify these safety-critical systems, we have developed Polyglot, a framework for modeling and analysis of model-based software written using different Statechart formalisms. Polyglot uses a common intermediate representation with customizable Statechart semantics and leverages the analysis and test generation capabilities of the Symbolic PathFinder tool. Polyglot is used as follows: First, the structure of the Statechart model (expressed in Matlab Stateflow or Rational Rhapsody) is translated into a common intermediate representation (IR). The IR is then translated into Java code that represents the structure of the model. The semantics are provided as "pluggable" modules.
ESAS Deliverable PS 1.1.2.3: Customer Survey on Code Generations in Safety-Critical Applications
NASA Technical Reports Server (NTRS)
Schumann, Johann; Denney, Ewen
2006-01-01
Automated code generators (ACG) are tools that convert a (higher-level) model of a software (sub-)system into executable code without the necessity for a developer to actually implement the code. Although both commercially supported and in-house tools have been used in many industrial applications, little data exists on how these tools are used in safety-critical domains (e.g., spacecraft, aircraft, automotive, nuclear). The aims of the survey, therefore, were threefold: 1) to determine if code generation is primarily used as a tool for prototyping, including design exploration and simulation, or for fiight/production code; 2) to determine the verification issues with code generators relating, in particular, to qualification and certification in safety-critical domains; and 3) to determine perceived gaps in functionality of existing tools.
WTEC monograph on instrumentation, control and safety systems of Canadian nuclear facilities
NASA Technical Reports Server (NTRS)
Uhrig, Robert E.; Carter, Richard J.
1993-01-01
This report updates a 1989-90 survey of advanced instrumentation and controls (I&C) technologies and associated human factors issues in the U.S. and Canadian nuclear industries carried out by a team from Oak Ridge National Laboratory (Carter and Uhrig 1990). The authors found that the most advanced I&C systems are in the Canadian CANDU plants, where the newest plant (Darlington) has digital systems in almost 100 percent of its control systems and in over 70 percent of its plant protection system. Increased emphasis on human factors and cognitive science in modern control rooms has resulted in a reduced workload for the operators and the elimination of many human errors. Automation implemented through digital instrumentation and control is effectively changing the role of the operator to that of a systems manager. The hypothesis that properly introducing digital systems increases safety is supported by the Canadian experience. The performance of these digital systems has been achieved using appropriate quality assurance programs for both hardware and software development. Recent regulatory authority review of the development of safety-critical software has resulted in the creation of isolated software modules with well defined interfaces and more formal structure in the software generation. The ability of digital systems to detect impending failures and initiate a fail-safe action is a significant safety issue that should be of special interest to nuclear utilities and regulatory authorities around the world.
Automated Reuse of Scientific Subroutine Libraries through Deductive Synthesis
NASA Technical Reports Server (NTRS)
Lowry, Michael R.; Pressburger, Thomas; VanBaalen, Jeffrey; Roach, Steven
1997-01-01
Systematic software construction offers the potential of elevating software engineering from an art-form to an engineering discipline. The desired result is more predictable software development leading to better quality and more maintainable software. However, the overhead costs associated with the formalisms, mathematics, and methods of systematic software construction have largely precluded their adoption in real-world software development. In fact, many mainstream software development organizations, such as Microsoft, still maintain a predominantly oral culture for software development projects; which is far removed from a formalism-based culture for software development. An exception is the limited domain of safety-critical software, where the high-assuiance inherent in systematic software construction justifies the additional cost. We believe that systematic software construction will only be adopted by mainstream software development organization when the overhead costs have been greatly reduced. Two approaches to cost mitigation are reuse (amortizing costs over many applications) and automation. For the last four years, NASA Ames has funded the Amphion project, whose objective is to automate software reuse through techniques from systematic software construction. In particular, deductive program synthesis (i.e., program extraction from proofs) is used to derive a composition of software components (e.g., subroutines) that correctly implements a specification. The construction of reuse libraries of software components is the standard software engineering solution for improving software development productivity and quality.
A study of software standards used in the avionics industry
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1994-01-01
Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.
From Bridges and Rockets, Lessons for Software Systems
NASA Technical Reports Server (NTRS)
Holloway, C. Michael
2004-01-01
Although differences exist between building software systems and building physical structures such as bridges and rockets, enough similarities exist that software engineers can learn lessons from failures in traditional engineering disciplines. This paper draws lessons from two well-known failures the collapse of the Tacoma Narrows Bridge in 1940 and the destruction of the space shuttle Challenger in 1986 and applies these lessons to software system development. The following specific applications are made: (1) the verification and validation of a software system should not be based on a single method, or a single style of methods; (2) the tendency to embrace the latest fad should be overcome; and (3) the introduction of software control into safety-critical systems should be done cautiously.
A USNRC perspective on the use of commercial-off-shelf software (COTS) in advanced reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, J.C.
1997-12-01
The use of commercially available digital computer systems and components in safety critical systems (nuclear power plant, military, and commercial applications) is increasing rapidly. While this paper focuses on the software aspects of the application most of these continents are applicable to the hardware aspects as well. Commercial dedication (the process of assuring that a commercial grade item will perform its intended safety function) has demonstrated benefits in cost savings and a wide base of user experience, however, care must be taken to avoid difficulties with some aspects of the dedication process such as access to vendor development information, configurationmore » management long term support, and system integration.« less
Safety Metrics for Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Leveson, Nancy G; Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modem production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems.This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modem systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Software safety - A user's practical perspective
NASA Technical Reports Server (NTRS)
Dunn, William R.; Corliss, Lloyd D.
1990-01-01
Software safety assurance philosophy and practices at the NASA Ames are discussed. It is shown that, to be safe, software must be error-free. Software developments on two digital flight control systems and two ground facility systems are examined, including the overall system and software organization and function, the software-safety issues, and their resolution. The effectiveness of safety assurance methods is discussed, including conventional life-cycle practices, verification and validation testing, software safety analysis, and formal design methods. It is concluded (1) that a practical software safety technology does not yet exist, (2) that it is unlikely that a set of general-purpose analytical techniques can be developed for proving that software is safe, and (3) that successful software safety-assurance practices will have to take into account the detailed design processes employed and show that the software will execute correctly under all possible conditions.
Safe and Secure Partitioning with Pikeos: Towards Integrated Modular Avionics in Space
NASA Astrophysics Data System (ADS)
Almeida, J.; Prochazka, M.
2009-05-01
This paper presents our approach to logical partitioning of spacecraft onboard software. We present PikeOS, a separation micro-kernel which applies the state-of-the- art techniques and widely recognised standards such as ARINC 653 and MILS in order to guarantee safety and security properties of partitions executing software with different criticality and confidentiality. We provide an overview of our approach, also used in the Securely Partitioning Spacecraft Computing Resources project, an ESA TRP contract, which shifts spacecraft onboard software development towards the Integrated Modular Avionics concept with relevance for dual-use military and civil missions.
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
SNAPSHOT: A MODERN, SUSTAINABLE HOLDUP MEASUREMENT SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Nathan C; Younkin, James R; Smith, Steven E
2016-01-01
SNAPSHOT is a software platform designed to eventually replace Holdup Measurement System 4 (HMS 4), which is the current state-of-the-art for acquisition and analysis of nondestructive assay measurement data for in situ nuclear materials, holdup, in support of criticality safety and material control and accounting. HMS 4 is over 10 years old and is currently unsustainable due to hardware and software incompatibilities that have arisen from advances in detector electronics, primarily updates to multi-channel analyzers (MCAs), and both computer and handheld operating systems. SNAPSHOT is a complete redesign of HMS 4 that addresses the issue of compatibility with modern MCAsmore » and operating systems and that is designed with a flexible architecture to support long-term sustainability. It also provides an updated and more user friendly interface and is being developed under an NQA 1 software quality assurance (SQA) program to facilitate site acceptance for safety-related applications. This paper provides an overview of the SNAPSHOT project including details of the software development process, the SQA program, and the architecture designed to support sustainability.« less
NASA Technical Reports Server (NTRS)
Gwaltney, David A.; Briscoe, Jeri M.
2005-01-01
Integrated System Health Management (ISHM) architectures for spacecraft will include hard real-time, critical subsystems and soft real-time monitoring subsystems. Interaction between these subsystems will be necessary and an architecture supporting multiple criticality levels will be required. Demonstration hardware for the Integrated Safety-Critical Advanced Avionics Communication & Control (ISAACC) system has been developed at NASA Marshall Space Flight Center. It is a modular system using a commercially available time-triggered protocol, ?Tp/C, that supports hard real-time distributed control systems independent of the data transmission medium. The protocol is implemented in hardware and provides guaranteed low-latency messaging with inherent fault-tolerance and fault-containment. Interoperability between modules and systems of modules using the TTP/C is guaranteed through definition of messages and the precise message schedule implemented by the master-less Time Division Multiple Access (TDMA) communications protocol. "Plug-and-play" capability for sensors and actuators provides automatically configurable modules supporting sensor recalibration and control algorithm re-tuning without software modification. Modular components of controlled physical system(s) critical to control algorithm tuning, such as pumps or valve components in an engine, can be replaced or upgraded as "plug and play" components without modification to the ISAACC module hardware or software. ISAACC modules can communicate with other vehicle subsystems through time-triggered protocols or other communications protocols implemented over Ethernet, MIL-STD- 1553 and RS-485/422. Other communication bus physical layers and protocols can be included as required. In this way, the ISAACC modules can be part of a system-of-systems in a vehicle with multi-tier subsystems of varying criticality. The goal of the ISAACC architecture development is control and monitoring of safety critical systems of a manned spacecraft. These systems include spacecraft navigation and attitude control, propulsion, automated docking, vehicle health management and life support. ISAACC can integrate local critical subsystem health management with subsystems performing long term health monitoring. The ISAACC system and its relationship to ISHM will be presented.
Command and Control Software Development Memory Management
NASA Technical Reports Server (NTRS)
Joseph, Austin Pope
2017-01-01
This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems typically is very complex. This is due to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significant higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking become more popular. The latter examine the whole state space and, consequently, result in a full test coverage. Nevertheless, despite the obvious advantages, this technique is rarely yet used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance to the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the tool SCADE Suite (Esterel Technology), an integrated design environment that covers all the requirements for our methodology. The SCADE Suite is well established in avionics and defense, rail transportation, energy and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
A survey of quality assurance practices in biomedical open source software projects.
Koru, Günes; El Emam, Khaled; Neisa, Angelica; Umarji, Medha
2007-05-07
Open source (OS) software is continuously gaining recognition and use in the biomedical domain, for example, in health informatics and bioinformatics. Given the mission critical nature of applications in this domain and their potential impact on patient safety, it is important to understand to what degree and how effectively biomedical OS developers perform standard quality assurance (QA) activities such as peer reviews and testing. This would allow the users of biomedical OS software to better understand the quality risks, if any, and the developers to identify process improvement opportunities to produce higher quality software. A survey of developers working on biomedical OS projects was conducted to examine the QA activities that are performed. We took a descriptive approach to summarize the implementation of QA activities and then examined some of the factors that may be related to the implementation of such practices. Our descriptive results show that 63% (95% CI, 54-72) of projects did not include peer reviews in their development process, while 82% (95% CI, 75-89) did include testing. Approximately 74% (95% CI, 67-81) of developers did not have a background in computing, 80% (95% CI, 74-87) were paid for their contributions to the project, and 52% (95% CI, 43-60) had PhDs. A multivariate logistic regression model to predict the implementation of peer reviews was not significant (likelihood ratio test = 16.86, 9 df, P = .051) and neither was a model to predict the implementation of testing (likelihood ratio test = 3.34, 9 df, P = .95). Less attention is paid to peer review than testing. However, the former is a complementary, and necessary, QA practice rather than an alternative. Therefore, one can argue that there are quality risks, at least at this point in time, in transitioning biomedical OS software into any critical settings that may have operational, financial, or safety implications. Developers of biomedical OS applications should invest more effort in implementing systemic peer review practices throughout the development and maintenance processes.
Putting Safety in the Software
NASA Technical Reports Server (NTRS)
Wetherholt, Martha S.; Berens, Kalynnda M.; Hardy, Sandra (Technical Monitor)
2001-01-01
Software is a vital component of nearly every piece of modern technology. It is not a 'sub-system', able to be separated out from the system as a whole, but a 'co-system' that controls, manipulates, or interacts with the hardware and with the end user. Software has its fingers into all the pieces of the pie. If that 'pie', the system, can lead to injury, death, loss of major equipment, or impact your business bottom line, then software safety becomes vitally important. Learning to think about software from a safety perspective is the focus of this paper. We want you to think of software as part of the safety critical system, a major part. This requires 'system thinking' - being able to grasp the whole picture. Software's contribution to modern technology is both good and potentially bad. Software allows more complex and useful devices to be built. It can also contribute to plane crashes and power outages. We want you to see software in a whole new light, see it as a contributor to system hazards, and also as a possible fix or mitigation to some of those hazards.
ASSIP Study of Real-Time Safety-Critical Embedded Software-Intensive System Engineering Practices
2008-02-01
and assessment 2. product engineering processes 3. tooling processes 6 | CMU/SEI-2008-SR-001 Slide 1 Process Standards IEC/ ISO 12207 Software...and technical effort to align with 12207 IEC/ ISO 15026 System & Software Integrity Levels Generic Safety SAE ARP 4754 Certification Considerations...Process Frameworks in revision – ISO 9001, ISO 9004 – ISO 15288/ ISO 12207 harmonization – RTCA DO-178B, MOD Standard UK 00-56/3, … • Methods & Tools
Questioning the Role of Requirements Engineering in the Causes of Safety-Critical Software Failures
NASA Technical Reports Server (NTRS)
Johnson, C. W.; Holloway, C. M.
2006-01-01
Many software failures stem from inadequate requirements engineering. This view has been supported both by detailed accident investigations and by a number of empirical studies; however, such investigations can be misleading. It is often difficult to distinguish between failures in requirements engineering and problems elsewhere in the software development lifecycle. Further pitfalls arise from the assumption that inadequate requirements engineering is a cause of all software related accidents for which the system fails to meet its requirements. This paper identifies some of the problems that have arisen from an undue focus on the role of requirements engineering in the causes of major accidents. The intention is to provoke further debate within the emerging field of forensic software engineering.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes for Digital... Software Life Cycle Processes for Digital Computer Software used in Safety Systems of Nuclear Power Plants... clarifications, the enhanced consensus practices for developing software life-cycle processes for digital...
NASA Technical Reports Server (NTRS)
Guarro, Sergio B.
2010-01-01
This report validates and documents the detailed features and practical application of the framework for software intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioner. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, P. J.; Westwood, R.N; Mark, R. T.
2006-07-01
The Nuclear Installations Inspectorate (NII) of the UK's Health and Safety Executive (HSE) has completed a review of their Safety Assessment Principles (SAPs) for Nuclear Installations recently. During the period of the SAPs review in 2004-2005 the designers of future UK naval reactor plant were optioneering the control and protection systems that might be implemented. Because there was insufficient regulatory guidance available in the naval sector to support this activity the Defence Nuclear Safety Regulator (DNSR) invited the NII to collaborate with the production of a guidance document that provides clarity of regulatory expectations for the production of safety casesmore » for computer based safety systems. A key part of producing regulatory expectations was identifying the relevant extant standards and sector guidance that reflect good practice. The three principal sources of such good practice were: IAEA Safety Guide NS-G-1.1 (Software for Computer Based Systems Important to Safety in Nuclear Power Plants), European Commission consensus document (Common Position of European Nuclear Regulators for the Licensing of Safety Critical Software for Nuclear Reactors) and IEC nuclear sector standards such as IEC60880. A common understanding has been achieved between the NII and DNSR and regulatory guidance developed which will be used by both NII and DNSR in the assessment of computer-based safety systems and in the further development of more detailed joint technical assessment guidance for both regulatory organisations. (authors)« less
Cyber Security Threats to Safety-Critical, Space-Based Infrastructures
NASA Astrophysics Data System (ADS)
Johnson, C. W.; Atencia Yepez, A.
2012-01-01
Space-based systems play an important role within national critical infrastructures. They are being integrated into advanced air-traffic management applications, rail signalling systems, energy distribution software etc. Unfortunately, the end users of communications, location sensing and timing applications often fail to understand that these infrastructures are vulnerable to a wide range of security threats. The following pages focus on concerns associated with potential cyber-attacks. These are important because future attacks may invalidate many of the safety assumptions that support the provision of critical space-based services. These safety assumptions are based on standard forms of hazard analysis that ignore cyber-security considerations This is a significant limitation when, for instance, security attacks can simultaneously exploit multiple vulnerabilities in a manner that would never occur without a deliberate enemy seeking to damage space based systems and ground infrastructures. We address this concern through the development of a combined safety and security risk assessment methodology. The aim is to identify attack scenarios that justify the allocation of additional design resources so that safety barriers can be strengthened to increase our resilience against security threats.
Towards A Comprehensive Consideration of Epistemic Questions in Software System Safety
NASA Technical Reports Server (NTRS)
Holloway, C. M.; Johnson, Chris W.
2009-01-01
For any software system upon which lives depend, the most important question one can ask about it is, 'How do we know the system is safe?' Despite the critical importance of this question, no widely accepted, generally applicable answer exists. Instead, debate continues to rage over the question, with theorists and practitioners quarrelling with each other and amongst themselves. This paper suggests a possible way forward towards quelling the quarrels, based on refining the critical safety question into additional questions, which may be more likely to have answers on which a consensus can be reached.
RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications
NASA Technical Reports Server (NTRS)
1992-01-01
This conference deals with computer systems which control systems whose failure to operate correctly could produce the loss of life and or property, mission and safety critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge based expert systems, and computer and telecommunication protocols.
V & V Within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission critical software. This paper describes the working group's success in identifying V&V tasks that could be performed in the domain engineering and transition levels of reuse-based software engineering. The primary motivation for V&V at the domain level is to provide assurance that the domain requirements are correct and that the domain artifacts correctly implement the domain requirements. A secondary motivation is the possible elimination of redundant V&V activities at the application level. The group also considered the criteria and motivation for performing V&V in domain engineering.
Fault Injection Validation of a Safety-Critical TMR Sysem
NASA Astrophysics Data System (ADS)
Irrera, Ivano; Madeira, Henrique; Zentai, Andras; Hergovics, Beata
2016-08-01
Digital systems and their software are the core technology for controlling and monitoring industrial systems in practically all activity domains. Functional safety standards such as the European standard EN 50128 for railway applications define the procedures and technical requirements for the development of software for railway control and protection systems. The validation of such systems is a highly demanding task. In this paper we discuss the use of fault injection techniques, which have been used extensively in several domains, particularly in the space domain, to complement the traditional procedures to validate a SIL (Safety Integrity Level) 4 system for railway signalling, implementing a TMR (Triple Modular Redundancy) architecture. The fault injection tool is based on JTAG technology. The results of our injection campaign showed a high degree of tolerance to most of the injected faults, but several cases of unexpected behaviour have also been observed, helping understanding worst-case scenarios.
NASA Astrophysics Data System (ADS)
Drachova-Strang, Svetlana V.
As computing becomes ubiquitous, software correctness has a fundamental role in ensuring the safety and security of the systems we build. To design and develop software correctly according to their formal contracts, CS students, the future software practitioners, need to learn a critical set of skills that are necessary and sufficient for reasoning about software correctness. This dissertation presents a systematic approach to both introducing these reasoning skills into the curriculum, and assessing how well the students have learned them. Specifically, it introduces a comprehensive Reasoning Concept Inventory (RCI) that captures the fine details of basic reasoning skills that are ideally learned across the undergraduate curriculum to reason about software correctness, to develop high quality software, and to understand why software works as specified. The RCI forms the basis for developing learning outcomes that help educators to assess the adequacy of current techniques and pinpoint necessary improvements. This dissertation contains results from experimentation and assessment over the past few years in multiple CS courses. The results show that the finer principles of mathematical reasoning of software correctness can be taught effectively and continuously improved with the help of the RCI using suitable teaching practices, and supporting methods and tools.
Software Reliability Issues Concerning Large and Safety Critical Software Systems
NASA Technical Reports Server (NTRS)
Kamel, Khaled; Brown, Barbara
1996-01-01
This research was undertaken to provide NASA with a survey of state-of-the-art techniques using in industrial and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970's using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.
Assuring NASA's Safety and Mission Critical Software
NASA Technical Reports Server (NTRS)
Deadrick, Wesley
2015-01-01
What is IV&V? Independent Verification and Validation (IV&V) is an objective examination of safety and mission critical software processes and products. Independence: 3 Key parameters: Technical Independence; Managerial Independence; Financial Independence. NASA IV&V perspectives: Will the system's software: Do what it is supposed to do?; Not do what it is not supposed to do?; Respond as expected under adverse conditions?. Systems Engineering: Determines if the right system has been built and that it has been built correctly. IV&V Technical Approaches: Aligned with IEEE 1012; Captured in a Catalog of Methods; Spans the full project lifecycle. IV&V Assurance Strategy: The IV&V Project's strategy for providing mission assurance; Assurance Strategy is driven by the specific needs of an individual project; Implemented via an Assurance Design; Communicated via Assurance Statements.
Software Assurance Challenges for the Commercial Crew Program
NASA Technical Reports Server (NTRS)
Cuyno, Patrick; Malnick, Kathy D.; Schaeffer, Chad E.
2015-01-01
This paper will provide a description of some of the challenges NASA is facing in providing software assurance within the new commercial space services paradigm, namely with the Commercial Crew Program (CCP). The CCP will establish safe, reliable, and affordable access to the International Space Station (ISS) by purchasing a ride from commercial companies. The CCP providers have varying experience with software development in safety-critical space systems. NASA's role in providing effective software assurance support to the CCP providers is critical to the success of CCP. These challenges include funding multiple vehicles that execute in parallel and have different rules of engagement, multiple providers with unique proprietary concerns, providing equivalent guidance to all providers, permitting alternates to NASA standards, and a large number of diverse stakeholders. It is expected that these challenges will exist in future programs, especially if the CCP paradigm proves successful. The proposed CCP approach to address these challenges includes a risk-based assessment with varying degrees of engagement and a distributed assurance model. This presentation will describe NASA IV&V Program's software assurance support and responses to these challenges.
Review of Estelle and LOTOS with respect to critical computer applications
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
Man rated NASA space vehicles seem to represent a set of ultimate critical computer applications. These applications require a high degree of security, integrity, and safety. A variety of formal and/or precise modeling techniques are becoming available for the designer of critical systems. The design phase of the software engineering life cycle includes the modification of non-development components. A review of the Estelle and LOTOS formal description languages is presented. Details of the languages and a set of references are provided. The languages were used to formally describe some of the Open System Interconnect (OSI) protocols.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... Packard Company Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating... Software Division, OpenVMS Operating System Development Group, Including an Employee Operating Out of the..., Mission Critical Business Software Division, OpenVMS Operating System Development Group, including...
Lecture Notes on Criticality Safety Validation Using MCNP & Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data area considered first. Then some background for MCNP & Whisper is given--best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection – C k's, weights; extreme value theory – bias, bias uncertainty; MOS for nuclear data uncertainty – GLLS) and usagemore » are discussed.« less
Assurance of Fault Management: Risk-Significant Adverse Condition Awareness
NASA Technical Reports Server (NTRS)
Fitz, Rhonda
2016-01-01
Fault Management (FM) systems are ranked high in risk-based assessment of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures seen embedded within safety- and mission-critical software systems analyzed by the NASA Independent Verification Validation (IVV) Program has been enhanced with an FM Technical Reference (TR) suite. Benefits are aimed beyond the IVV community to those that seek ways to efficiently and effectively provide software assurance to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IVV techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM has with regard to overall asset protection of flight software systems is being addressed with the development of an adverse condition (AC) database encompassing flight software vulnerabilities.Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASAs Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs, and allowing queries based on project, mission type, domain component, causal fault, and other key characteristics. The repository has a firm structure, initial collection of data, and an interface established for informational queries, with plans for integration within the Enterprise Architecture at NASA IVV, enabling support and accessibility across the Agency. The development of an improved workflow process for adaptive, risk-informed FM assurance is currently underway.
Power, Avionics and Software - Phase 1.0:. [Subsystem Integration Test Report
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This report describes Power, Avionics and Software (PAS) 1.0 subsystem integration testing and test results that occurred in August and September of 2013. This report covers the capabilities of each PAS assembly to meet integration test objectives for non-safety critical, non-flight, non-human-rated hardware and software development. This test report is the outcome of the first integration of the PAS subsystem and is meant to provide data for subsequent designs, development and testing of the future PAS subsystems. The two main objectives were to assess the ability of the PAS assemblies to exchange messages and to perform audio testing of both inbound and outbound channels. This report describes each test performed, defines the test, the data, and provides conclusions and recommendations.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
NASA Technical Reports Server (NTRS)
Stensrud, Kjell C.; Hamm, Dustin
2007-01-01
NASA's Johnson Space Center (JSC) / Flight Design and Dynamics Division (DM) has prototyped the use of Open Source middleware technology for building its next generation spacecraft mission support system. This is part of a larger initiative to use open standards and open source software as building blocks for future mission and safety critical systems. JSC is hoping to leverage standardized enterprise architectures, such as Java EE, so that its internal software development efforts can be focused on the core aspects of their problem domain. This presentation will outline the design and implementation of the Trajectory system and the lessons learned during the exercise.
Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems
NASA Technical Reports Server (NTRS)
Fitz, Rhonda
2017-01-01
As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASAs Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing this database for adaptive, risk-informed FM assurance that critical software systems will safely and securely protect against faults and respond to ACs in order to achieve successful missions.
Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems
NASA Technical Reports Server (NTRS)
Fitz, Rhonda
2017-01-01
As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification Validation (IVV) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domaincomponent, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IVV enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing this database for adaptive, risk-informed FM assurance that critical software systems will safely and securely protect against faults and respond to ACs in order to achieve successful missions.
The Application of Software Safety to the Constellation Program Launch Control System
NASA Technical Reports Server (NTRS)
Kania, James; Hill, Janice
2011-01-01
The application of software safety practices on the LCS project resulted in the successful implementation of the NASA Software Safety Standard NASA-STD-8719.138 and CxP software safety requirements. The GOP-GEN-GSW-011 Hazard Report was the first report developed at KSC to identify software hazard causes and their controls. This approach can be applied to similar large software - intensive systems where loss of control can lead to a hazard.
Development of a Nevada Statewide Database for Safety Analyst Software
DOT National Transportation Integrated Search
2017-02-02
Safety Analyst is a software package developed by the Federal Highway Administration (FHWA) and twenty-seven participating state and local agencies including the Nevada Department of Transportation (NDOT). The software package implemented many of the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., ``Verification, Validation, Reviews, and Audits for Digital Computer Software used in Safety Systems of Nuclear... NRC regulations promoting the development of, and compliance with, software verification and...
A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modem production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modem systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Optimizing Automatic Deployment Using Non-functional Requirement Annotations
NASA Astrophysics Data System (ADS)
Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin
Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.
Security for safety critical space borne systems
NASA Technical Reports Server (NTRS)
Legrand, Sue
1987-01-01
The Space Station contains safety critical computer software components in systems that can affect life and vital property. These components require a multilevel secure system that provides dynamic access control of the data and processes involved. A study is under way to define requirements for a security model providing access control through level B3 of the Orange Book. The model will be prototyped at NASA-Johnson Space Center.
NASA Astrophysics Data System (ADS)
Schoitsch, Erwin
1988-07-01
Our society is depending more and more on the reliability of embedded (real-time) computer systems even in every-day life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process-synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, interrupt- and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their potential to quality and safety.
Proceedings of the Twenty-Third Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1999-01-01
The Twenty-third Annual Software Engineering Workshop (SEW) provided 20 presentations designed to further the goals of the Software Engineering Laboratory (SEL) of the NASA-GSFC. The presentations were selected on their creativity. The sessions which were held on 2-3 of December 1998, centered on the SEL, Experimentation, Inspections, Fault Prediction, Verification and Validation, and Embedded Systems and Safety-Critical Systems.
Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)
NASA Technical Reports Server (NTRS)
Benson, Markland
2008-01-01
The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission critical ground software that is in the operations and sustainment portion of the product lifecycle.
A Human Reliability Based Usability Evaluation Method for Safety-Critical Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillippe Palanque; Regina Bernhaupt; Ronald Boring
2006-04-01
Recent years have seen an increasing use of sophisticated interaction techniques including in the field of safety critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between the users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of humancomputer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been donemore » to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.« less
NASA Technical Reports Server (NTRS)
Leveson, Nancy
1987-01-01
Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude requires: looking at what you do NOT want software to do along with what you want it to do; and assuming things will go wrong. New procedures and changes to entire software development process are necessary: special software safety analysis techniques are needed; and design techniques, especially eliminating complexity, can be very helpful.
A Formal Application of Safety and Risk Assessment in Software Systems
2004-09-01
characteristics of Software Engineering, Development, and Safety...against a comparison of planned and actual schedules, costs, and characteristics . Software Safety is focused on the reduction of unsafe incidents...they merely carry out the role for which they were anatomically designed.55 Software is characteristically like an anatomical cell as it merely
Technology and Tool Development to Support Safety and Mission Assurance
NASA Technical Reports Server (NTRS)
Denney, Ewen; Pai, Ganesh
2017-01-01
The Assurance Case approach is being adopted in a number of safety-mission-critical application domains in the U.S., e.g., medical devices, defense aviation, automotive systems, and, lately, civil aviation. This paradigm refocuses traditional, process-based approaches to assurance on demonstrating explicitly stated assurance goals, emphasizing the use of structured rationale, and concrete product-based evidence as the means for providing justified confidence that systems and software are fit for purpose in safely achieving mission objectives. NASA has also been embracing assurance cases through the concepts of Risk Informed Safety Cases (RISCs), as documented in the NASA System Safety Handbook, and Objective Hierarchies (OHs) as put forth by the Agency's Office of Safety and Mission Assurance (OSMA). This talk will give an overview of the work being performed by the SGT team located at NASA Ames Research Center, in developing technologies and tools to engineer and apply assurance cases in customer projects pertaining to aviation safety. We elaborate how our Assurance Case Automation Toolset (AdvoCATE) has not only extended the state-of-the-art in assurance case research, but also demonstrated its practical utility. We have successfully developed safety assurance cases for a number of Unmanned Aircraft Systems (UAS) operations, which underwent, and passed, scrutiny both by the aviation regulator, i.e., the FAA, as well as the applicable NASA boards for airworthiness and flight safety, flight readiness, and mission readiness. We discuss our efforts in expanding AdvoCATE capabilities to support RISCs and OHs under a project recently funded by OSMA under its Software Assurance Research Program. Finally, we speculate on the applicability of our innovations beyond aviation safety to such endeavors as robotic, and human spaceflight.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... Pachard Company, Business Critical Systems, Mission Critical Business Software Division, Openvms Operating... Business Software Division, Openvms Operating System Development Group, Including an Employee Operating Out... Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating System...
Deriving Safety Cases from Automatically Constructed Proofs
NASA Technical Reports Server (NTRS)
Basir, Nurlida; Denney, Ewen; Fischer, Bernd
2009-01-01
Formal proofs provide detailed justification for the validity of claims and are widely used in formal software development methods. However, they are often complex and difficult to understand, because the formalism in which they are constructed and encoded is usually machine-oriented, and they may also be based on assumptions that are not justified. This causes concerns about the trustworthiness of using formal proofs as arguments in safety-critical applications. Here, we present an approach to develop safety cases that correspond to formal proofs found by automated theorem provers and reveal the underlying argumentation structure and top-level assumptions. We concentrate on natural deduction style proofs, which are closer to human reasoning than resolution proofs, and show how to construct the safety cases by covering the natural deduction proof tree with corresponding safety case fragments. We also abstract away logical book-keeping steps, which reduces the size of the constructed safety cases. We show how the approach can be applied to the proofs found by the Muscadet prover.
2017-03-20
computation, Prime Implicates, Boolean Abstraction, real- time embedded software, software synthesis, correct by construction software design , model...types for time -dependent data-flow networks". J.-P. Talpin, P. Jouvelot, S. Shukla. ACM-IEEE Conference on Methods and Models for System Design ...information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and
Using Smart Pumps to Understand and Evaluate Clinician Practice Patterns to Ensure Patient Safety
Mansfield, Jennifer; Jarrett, Steven
2013-01-01
Background: Safety software installed on intravenous (IV) infusion pumps has been shown to positively impact the quality of patient care through avoidance of medication errors. The data derived from the use of smart pumps are often overlooked, although these data provide helpful insight into the delivery of quality patient care. Objective: The objectives of this report are to describe the value of implementing IV infusion safety software and analyzing the data and reports generated by this system. Case study: Based on experience at the Carolinas HealthCare System (CHS), executive score cards provide an aggregate view of compliance rate, number of alerts, overrides, and edits. The report of serious errors averted (ie, critical catches) supplies the location, date, and time of the critical catch, thereby enabling management to pinpoint the end-user for educational purposes. By examining the number of critical catches, a return on investment may be calculated. Assuming 3,328 of these events each year, an estimated cost avoidance would be $29,120,000 per year for CHS. Other reports allow benchmarking between institutions. Conclusion: A review of the data about medication safety across CHS has helped garner support for a medication safety officer position with the goal of ultimately creating a safer environment for the patient. PMID:24474836
The Dangers of Failure Masking in Fault-Tolerant Software: Aspects of a Recent In-Flight Upset Event
NASA Technical Reports Server (NTRS)
Johnson, C. W.; Holloway, C. M.
2007-01-01
On 1 August 2005, a Boeing Company 777-200 aircraft, operating on an international passenger flight from Australia to Malaysia, was involved in a significant upset event while flying on autopilot. The Australian Transport Safety Bureau's investigation into the event discovered that an anomaly existed in the component software hierarchy that allowed inputs from a known faulty accelerometer to be processed by the air data inertial reference unit (ADIRU) and used by the primary flight computer, autopilot and other aircraft systems. This anomaly had existed in original ADIRU software, and had not been detected in the testing and certification process for the unit. This paper describes the software aspects of the incident in detail, and suggests possible implications concerning complex, safety-critical, fault-tolerant software.
NASA Technical Reports Server (NTRS)
Fitz, Rhonda; Whitman, Gerek
2016-01-01
Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the software community. This paper discusses the findings and TR suite informing the FM domain in best practices for FM architectural design, visibility observations, and methods employed for IV&V and mission assurance.
Safety and Mission Assurance for In-House Design Lessons Learned from Ares I Upper Stage
NASA Technical Reports Server (NTRS)
Anderson, Joel M.
2011-01-01
This viewgraph presentation identifies lessons learned in the course of the Ares I Upper Stage design and in-house development effort. The contents include: 1) Constellation Organization; 2) Upper Stage Organization; 3) Presentation Structure; 4) Lesson-Importance of Systems Engineering/Integration; 5) Lesson-Importance of Early S&MA Involvement; 6) Lesson-Importance of Appropriate Staffing Levels; 7) Lesson-Importance S&MA Team Deployment; 8) Lesson-Understanding of S&MA In-Line Engineering versus Assurance; 9) Lesson-Importance of Close Coordination between Supportability and Reliability/Maintainability; 10) Lesson-Importance of Engineering Data Systems; 11) Lesson-Importance of Early Development of Supporting Databases; 12) Lesson-Importance of Coordination with Safety Assessment/Review Panels; 13) Lesson-Implementation of Software Reliability; 14) Lesson-Implementation of S&MA Technical Authority/Chief S&MA Officer; 15) Lesson-Importance of S&MA Evaluation of Project Risks; 16) Lesson-Implementation of Critical Items List and Government Mandatory Inspections; 17) Lesson-Implementation of Critical Items List Mandatory Inspections; 18) Lesson-Implementation of Test Article Safety Analysis; and 19) Lesson-Importance of Procurement Quality.
A Framework for Software Reuse in Safety-Critical System of Systems
2008-03-01
environment.8 Pressman , on the other hand, defines a software component as a unit of composition with contractually specified and explicit context...2005, p654. 9 R.S. Pressman ., Software Engineering A Practitioner’s Approach, Sixth Edition, New York, NY.: McGraw-Hill, 2005, p817. 10 W.C. Lim...index.php. 79 Pressman , R.S., Software Engineering A Practitioner’s Approach, Sixth Edition, New York, NY.: McGraw-Hill, 2005. Radio Technical
A Predictive Approach to Eliminating Errors in Software Code
NASA Technical Reports Server (NTRS)
2006-01-01
NASA s Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA s Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.
Long-term real-time structural health monitoring using wireless smart sensor
NASA Astrophysics Data System (ADS)
Jang, Shinae; Mensah-Bonsu, Priscilla O.; Li, Jingcheng; Dahal, Sushil
2013-04-01
Improving the safety and security of civil infrastructure has become a critical issue for decades since it plays a central role in the economics and politics of a modern society. Structural health monitoring of civil infrastructure using wireless smart sensor network has emerged as a promising solution recently to increase structural reliability, enhance inspection quality, and reduce maintenance costs. Though hardware and software framework are well prepared for wireless smart sensors, the long-term real-time health monitoring strategy are still not available due to the lack of systematic interface. In this paper, the Imote2 smart sensor platform is employed, and a graphical user interface for the long-term real-time structural health monitoring has been developed based on Matlab for the Imote2 platform. This computer-aided engineering platform enables the control, visualization of measured data as well as safety alarm feature based on modal property fluctuation. A new decision making strategy to check the safety is also developed and integrated in this software. Laboratory validation of the computer aided engineering platform for the Imote2 on a truss bridge and a building structure has shown the potential of the interface for long-term real-time structural health monitoring.
1992-12-01
provide program 5 managers some level of confidence that their software will operate at an acceptable level of risk. A number of structured safety...safety within the constraints of operational effectiveness, schedule, and cost through timely application of system safety management and engineering...Master of Science in Software Systems Management Peter W. Colan, B.S.E. Robert W. Prouhet, B.S. Captain, USAF Captain, USAF December 1992 Approved for
Some issues in numerical simulation of nonlinear structural response
NASA Technical Reports Server (NTRS)
Hibbitt, H. D.
1989-01-01
The development of commercial finite element software is addressed. This software provides practical tools that are used in an astonishingly wide range of engineering applications that include critical aspects of the safety evaluation of nuclear power plants or of heavily loaded offshore structures in the hostile environments of the North Sea or the Arctic, major design activities associated with the development of airframes for high strength and minimum weight, thermal analysis of electronic components, and the design of sports equipment. In the more advanced application areas, the effectiveness of the product depends critically on the quality of the mechanics and mechanics related algorithms that are implemented. Algorithmic robustness is of primary concern. Those methods that should be chosen will maximize reliability with minimal understanding on the part of the user. Computational efficiency is also important because there are always limited resources, and hence problems that are too time consuming or costly. Finally, some areas where research work will provide new methods and improvements is discussed.
NASA Technical Reports Server (NTRS)
Mango, Edward J.
2016-01-01
NASA and its industry and international partners are embarking on a bold and inspiring development effort to design and build an exploration class space system. The space system is made up of the Orion system, the Space Launch System (SLS) and the Ground Systems Development and Operations (GSDO) system. All are highly coupled together and dependent on each other for the combined safety of the space system. A key area of system safety focus needs to be in the ground and flight application software system (GFAS). In the development, certification and operations of GFAS, there are a series of safety characteristics that define the approach to ensure mission success. This paper will explore and examine the safety characteristics of the GFAS development. The GFAS system integrates the flight software packages of the Orion and SLS with the ground systems and launch countdown sequencers through the 'agile' software development process. A unique approach is needed to develop the GFAS project capabilities within this agile process. NASA has defined the software development process through a set of standards. The standards were written during the infancy of the so-called industry 'agile development' movement and must be tailored to adapt to the highly integrated environment of human exploration systems. Safety of the space systems and the eventual crew on board is paramount during the preparation of the exploration flight systems. A series of software safety characteristics have been incorporated into the development and certification efforts to ensure readiness for use and compatibility with the space systems. Three underlining factors in the exploration architecture require the GFAS system to be unique in its approach to ensure safety for the space systems, both the flight as well as the ground systems. The first are the missions themselves, which are exploration in nature, and go far beyond the comfort of low Earth orbit operations. The second is the current exploration system will launch only one mission per year even less during its developmental phases. Finally, the third is the partnered approach through the use of many different prime contractors, including commercial and international partners, to design and build the exploration systems. These three factors make the challenges to meet the mission preparations and the safety expectations extremely difficult to implement. As NASA leads a team of partners in the exploration beyond earth's influence, it is a safety imperative that the application software used to test, checkout, prepare and launch the exploration systems put safety of the hardware and mission first. Software safety characteristics are built into the design and development process to enable the human rated systems to begin their missions safely and successfully. Exploration missions beyond Earth are inherently risky, however, with solid safety approaches in both hardware and software, the boldness of these missions can be realized for all on the home planet.
Automated Source-Code-Based Testing of Object-Oriented Software
NASA Astrophysics Data System (ADS)
Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten
2014-08-01
With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source- code-based testing for C++.
NASA Astrophysics Data System (ADS)
1992-06-01
The House Committee on Science, Space, and Technology asked NASA to study software development issues for the space station. How well NASA has implemented key software engineering practices for the station was asked. Specifically, the objectives were to determine: (1) if independent verification and validation techniques are being used to ensure that critical software meets specified requirements and functions; (2) if NASA has incorporated software risk management techniques into program; (3) whether standards are in place that will prescribe a disciplined, uniform approach to software development; and (4) if software support tools will help, as intended, to maximize efficiency in developing and maintaining the software. To meet the objectives, NASA proceeded: (1) reviewing and analyzing software development objectives and strategies contained in NASA conference publications; (2) reviewing and analyzing NASA, other government, and industry guidelines for establishing good software development practices; (3) reviewing and analyzing technical proposals and contracts; (4) reviewing and analyzing software management plans, risk management plans, and program requirements; (4) reviewing and analyzing reports prepared by NASA and contractor officials that identified key issues and challenges facing the program; (5) obtaining expert opinions on what constitutes appropriate independent V-and-V and software risk management activities; (6) interviewing program officials at NASA headquarters in Washington, DC; at the Space Station Program Office in Reston, Virginia; and at the three work package centers; Johnson in Houston, Texas; Marshall in Huntsville, Alabama; and Lewis in Cleveland, Ohio; and (7) interviewing contractor officials doing work for NASA at Johnson and Marshall. The audit work was performed in accordance with generally accepted government auditing standards, between April 1991 and May 1992.
Toward a Formal Evaluation of Refactorings
NASA Technical Reports Server (NTRS)
Paul, John; Kuzmina, Nadya; Gamboa, Ruben; Caldwell, James
2008-01-01
Refactoring is a software development strategy that characteristically alters the syntactic structure of a program without changing its external behavior [2]. In this talk we present a methodology for extracting formal models from programs in order to evaluate how incremental refactorings affect the verifiability of their structural specifications. We envision that this same technique may be applicable to other types of properties such as those that concern the design and maintenance of safety-critical systems.
Montella, Alfonso; Chiaradonna, Salvatore; Criscuolo, Giorgio; De Martino, Salvatore
2017-02-05
First step of the development of an effective safety management system is to create reliable crash databases since the quality of decision making in road safety depends on the quality of the data on which decisions are based. Improving crash data is a worldwide priority, as highlighted in the Global Plan for the Decade of Action for Road Safety adopted by the United Nations, which recognizes that the overall goal of the plan will be attained improving the quality of data collection at the national, regional and global levels. Crash databases provide the basic information for effective highway safety efforts at any level of government, but lack of uniformity among countries and among the different jurisdictions in the same country is observed. Several existing databases show significant drawbacks which hinder their effective use for safety analysis and improvement. Furthermore, modern technologies offer great potential for significant improvements of existing methods and procedures for crash data collection, processing and analysis. To address these issues, in this paper we present the development and evaluation of a web-based platform-independent software for crash data collection, processing and analysis. The software is designed for mobile and desktop electronic devices and enables a guided and automated drafting of the crash report, assisting police officers both on-site and in the office. The software development was based both on the detailed critical review of existing Australasian, EU, and U.S. crash databases and software as well as on the continuous consultation with the stakeholders. The evaluation was carried out comparing the completeness, timeliness, and accuracy of crash data before and after the use of the software in the city of Vico Equense, in south of Italy showing significant advantages. The amount of collected information increased from 82 variables to 268 variables, i.e., a 227% increase. The time saving was more than one hour per crash, i.e., a 36% reduction. The on-site data collection did not produce time saving, however this is a temporary weakness that will be annihilated very soon in the future after officers are more acquainted with the software. The phase of evaluation, processing and analysis carried out in the office was dramatically shortened, i.e., a 69% reduction. Another benefit was the standardization which allowed fast and consistent data analysis and evaluation. Even if all these benefits are remarkable, the most valuable benefit of the new procedure was the reduction of the police officers mistakes during the manual operations of survey and data evaluation. Because of these benefits, the satisfaction questionnaires administrated to the police officers after the testing phase showed very good acceptance of the procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wedeking, Gregory A.; Zierer, Joseph J.; Jackson, John R.
2010-07-01
The University of Texas, Center for Electromechanics (UT-CEM) is making a major upgrade to the robotic tracking system on the Hobby Eberly Telescope (HET) as part of theWide Field Upgrade (WFU). The upgrade focuses on a seven-fold increase in payload and necessitated a complete redesign of all tracker supporting structure and motion control systems, including the tracker bridge, ten drive systems, carriage frames, a hexapod, and many other subsystems. The cost and sensitivity of the scientific payload, coupled with the tracker system mass increase, necessitated major upgrades to personnel and hardware safety systems. To optimize kinematic design of the entire tracker, UT-CEM developed novel uses of constraints and drivers to interface with a commercially available CAD package (SolidWorks). For example, to optimize volume usage and minimize obscuration, the CAD software was exercised to accurately determine tracker/hexapod operational space needed to meet science requirements. To verify hexapod controller models, actuator travel requirements were graphically measured and compared to well defined equations of motion for Stewart platforms. To ensure critical hardware safety during various failure modes, UT-CEM engineers developed Visual Basic drivers to interface with the CAD software and quickly tabulate distance measurements between critical pieces of optical hardware and adjacent components for thousands of possible hexapod configurations. These advances and techniques, applicable to any challenging robotic system design, are documented and describe new ways to use commercially available software tools to more clearly define hardware requirements and help insure safe operation.
Bureaucracy, Safety and Software: a Potentially Lethal Cocktail
NASA Astrophysics Data System (ADS)
Hatton, Les
This position paper identifies a potential problem with the evolution of software controlled safety critical systems. It observes that the rapid growth of bureaucracy in society quickly spills over into rules for behaviour. Whether the need for the rules comes first or there is simple anticipation of the need for a rule by a bureaucrat is unclear in many cases. Many such rules lead to draconian restrictions and often make the existing situation worse due to the presence of unintended consequences as will be shown with a number of examples.
Instrument control software development process for the multi-star AO system ARGOS
NASA Astrophysics Data System (ADS)
Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.
2012-09-01
The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components like lasers, calibration swing arms and slope computers that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is running this AO system and providing convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of huge and complex software programs with a maintainable code base, the delivery of software components with the desired functionality and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels like unit tests and regression tests, agree about code and architecture style and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.
2015-05-01
quality attributes. Prioritization of the utility tree leafs driven by mission goals help the user ensure that critical requirements are well-specified...Methods: State of the Art and Future Directions”, ACM Computing Surveys. 1996. 10 Laitenberger, Oliver , “A Survey of Software Inspection Technologies, Handbook on Software Engineering and Knowledge Engineering”. 2002.
Static and Dynamic Verification of Critical Software for Space Applications
NASA Astrophysics Data System (ADS)
Moreira, F.; Maia, R.; Costa, D.; Duro, N.; Rodríguez-Dapena, P.; Hjortnaes, K.
Space technology is no longer used only for much specialised research activities or for sophisticated manned space missions. Modern society relies more and more on space technology and applications for every day activities. Worldwide telecommunications, Earth observation, navigation and remote sensing are only a few examples of space applications on which we rely daily. The European driven global navigation system Galileo and its associated applications, e.g. air traffic management, vessel and car navigation, will significantly expand the already stringent safety requirements for space based applications Apart from their usefulness and practical applications, every single piece of onboard software deployed into the space represents an enormous investment. With a long lifetime operation and being extremely difficult to maintain and upgrade, at least when comparing with "mainstream" software development, the importance of ensuring their correctness before deployment is immense. Verification &Validation techniques and technologies have a key role in ensuring that the onboard software is correct and error free, or at least free from errors that can potentially lead to catastrophic failures. Many RAMS techniques including both static criticality analysis and dynamic verification techniques have been used as a means to verify and validate critical software and to ensure its correctness. But, traditionally, these have been isolated applied. One of the main reasons is the immaturity of this field in what concerns to its application to the increasing software product(s) within space systems. This paper presents an innovative way of combining both static and dynamic techniques exploiting their synergy and complementarity for software fault removal. The methodology proposed is based on the combination of Software FMEA and FTA with Fault-injection techniques. The case study herein described is implemented with support from two tools: The SoftCare tool for the SFMEA and SFTA, and the Xception tool for fault-injection. Keywords: Verification &Validation, RAMS, Onboard software, SFMEA, STA, Fault-injection 1 This work is being performed under the project STADY Applied Static And Dynamic Verification Of Critical Software, ESA/ESTEC Contract Nr. 15751/02/NL/LvH.
Space Flight Software Development Software for Intelligent System Health Management
NASA Technical Reports Server (NTRS)
Trevino, Luis C.; Crumbley, Tim
2004-01-01
The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.
2002-07-01
Knowledge From Data .................................................. 25 HIGH-CONFIDENCE SOFTWARE AND SYSTEMS Reliability, Security, and Safety for...NOAA’s Cessna Citation flew over the 16-acre World Trade Center site, scanning with an Optech ALSM unit. The system recorded data points from 33,000...provide the data storage and compute power for intelligence analysis, high-performance national defense systems , and critical scientific research • Large
Maintaining the Health of Software Monitors
NASA Technical Reports Server (NTRS)
Person, Suzette; Rungta, Neha
2013-01-01
Software health management (SWHM) techniques complement the rigorous verification and validation processes that are applied to safety-critical systems prior to their deployment. These techniques are used to monitor deployed software in its execution environment, serving as the last line of defense against the effects of a critical fault. SWHM monitors use information from the specification and implementation of the monitored software to detect violations, predict possible failures, and help the system recover from faults. Changes to the monitored software, such as adding new functionality or fixing defects, therefore, have the potential to impact the correctness of both the monitored software and the SWHM monitor. In this work, we describe how the results of a software change impact analysis technique, Directed Incremental Symbolic Execution (DiSE), can be applied to monitored software to identify the potential impact of the changes on the SWHM monitor software. The results of DiSE can then be used by other analysis techniques, e.g., testing, debugging, to help preserve and improve the integrity of the SWHM monitor as the monitored software evolves.
Remote Diagnosis of the International Space Station Utilizing Telemetry Data
NASA Technical Reports Server (NTRS)
Deb, Somnath; Ghoshal, Sudipto; Malepati, Venkat; Domagala, Chuck; Patterson-Hine, Ann; Alena, Richard; Norvig, Peter (Technical Monitor)
2000-01-01
Modern systems such as fly-by-wire aircraft, nuclear power plants, manufacturing facilities, battlefields, etc., are all examples of highly connected network enabled systems. Many of these systems are also mission critical and need to be monitored round the clock. Such systems typically consist of embedded sensors in networked subsystems that can transmit data to central (or remote) monitoring stations. Moreover, many legacy are safety systems were originally not designed for real-time onboard diagnosis, but a critical and would benefit from such a solution. Embedding additional software or hardware in such systems is often considered too intrusive and introduces flight safety and validation concerns. Such systems can be equipped to transmit the sensor data to a remote-processing center for continuous health monitoring. At Qualtech Systems, we are developing a Remote Diagnosis Server (RDS) that can support multiple simultaneous diagnostic sessions from a variety of remote subsystems.
Health IT for Patient Safety and Improving the Safety of Health IT.
Magrabi, Farah; Ong, Mei-Sing; Coiera, Enrico
2016-01-01
Alongside their benefits health IT applications can pose new risks to patient safety. Problems with IT have been linked to many different types of clinical errors including prescribing and administration of medications; as well as wrong-patient, wrong-site errors, and delays in procedures. There is also growing concern about the risks of data breach and cyber-security. IT-related clinical errors have their origins in processes undertaken to design, build, implement and use software systems in a broader sociotechnical context. Safety can be improved with greater standardization of clinical software and by improving the quality of processes at different points in the technology life cycle, spanning design, build, implementation and use in clinical settings. Oversight processes can be set up at a regional or national level to ensure that clinical software systems meet specific standards. Certification and regulation are two mechanisms to improve oversight. In the absence of clear standards, guidelines are useful to promote safe design and implementation practices. Processes to identify and mitigate hazards can be formalised via a safety management system. Minimizing new patient safety risks is critical to realizing the benefits of IT.
Natural Language Interface for Safety Certification of Safety-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2011-01-01
Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.
Towards Measurement of Confidence in Safety Cases
NASA Technical Reports Server (NTRS)
Denney, Ewen; Paim Ganesh J.; Habli, Ibrahim
2011-01-01
Arguments in safety cases are predominantly qualitative. This is partly attributed to the lack of sufficient design and operational data necessary to measure the achievement of high-dependability targets, particularly for safety-critical functions implemented in software. The subjective nature of many forms of evidence, such as expert judgment and process maturity, also contributes to the overwhelming dependence on qualitative arguments. However, where data for quantitative measurements is systematically collected, quantitative arguments provide far more benefits over qualitative arguments, in assessing confidence in the safety case. In this paper, we propose a basis for developing and evaluating integrated qualitative and quantitative safety arguments based on the Goal Structuring Notation (GSN) and Bayesian Networks (BN). The approach we propose identifies structures within GSN-based arguments where uncertainties can be quantified. BN are then used to provide a means to reason about confidence in a probabilistic way. We illustrate our approach using a fragment of a safety case for an unmanned aerial system and conclude with some preliminary observations
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
The Role and Quality of Software Safety in the NASA Constellation Program
NASA Technical Reports Server (NTRS)
Layman, Lucas; Basili, Victor R.; Zelkowitz, Marvin V.
2010-01-01
In this study, we examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Obtaining an accurate, program-wide picture of software safety risk is difficult across multiple, independently-developing systems. We leverage one source of safety information, hazard analysis, to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. The goal of this research is two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to quantify the level of risk presented by software in the hazard analysis. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. To quantify the importance of software, we collected metrics based on the number of software-related causes and controls of hazardous conditions. To quantify the level of risk presented by software, we created a metric scheme to measure the specificity of these software causes. We found that from 49-70% of hazardous conditions in the three systems could be caused by software or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. Furthermore, 10-12% of all controls were software-based. There is potential for inaccuracy in these counts, however, as software causes are not consistently scoped, and the presence of software in a cause or control is not always clear. The application of our software specificity metrics also identified risks in the hazard reporting process. In particular, we found a number of traceability risks in the hazard reports may impede verification of software and system safety.
Ronquillo, Jay G; Zuckerman, Diana M
2017-09-01
Policy Points: Medical software has become an increasingly critical component of health care, yet the regulation of these devices is inconsistent and controversial. No studies of medical devices and software assess the impact on patient safety of the FDA's current regulatory safeguards and new legislative changes to those standards. Our analysis quantifies the impact of software problems in regulated medical devices and indicates that current regulations are necessary but not sufficient for ensuring patient safety by identifying and eliminating dangerous defects in software currently on the market. New legislative changes will further deregulate health IT, reducing safeguards that facilitate the reporting and timely recall of flawed medical software that could harm patients. Medical software has become an increasingly critical component of health care, yet the regulatory landscape for digital health is inconsistent and controversial. To understand which policies might best protect patients, we examined the impact of the US Food and Drug Administration's (FDA's) regulatory safeguards on software-related technologies in recent years and the implications for newly passed legislative changes in regulatory policy. Using FDA databases, we identified all medical devices that were recalled from 2011 through 2015 primarily because of software defects. We counted all software-related recalls for each FDA risk category and evaluated each high-risk and moderate-risk recall of electronic medical records to determine the manufacturer, device classification, submission type, number of units, and product details. A total of 627 software devices (1.4 million units) were subject to recalls, with 12 of these devices (190,596 units) subject to the highest-risk recalls. Eleven of the devices recalled as high risk had entered the market through the FDA review process that does not require evidence of safety or effectiveness, and one device was completely exempt from regulatory review. The largest high-risk recall categories were anesthesiology and general hospital, with one each in cardiovascular and neurology. Five electronic medical record systems (9,347 units) were recalled for software defects classified as posing a moderate risk to patient safety. Software problems in medical devices are not rare and have the potential to negatively influence medical care. Premarket regulation has not captured all the software issues that could harm patients, evidenced by the potentially large number of patients exposed to software products later subject to high-risk and moderate-risk recalls. Provisions of the 21st Century Cures Act that became law in late 2016 will reduce safeguards further. Absent stronger regulations and implementation to create robust risk assessment and adverse event reporting, physicians and their patients are likely to be at risk from medical errors caused by software-related problems in medical devices. © 2017 Milbank Memorial Fund.
Using argument notation to engineer biological simulations with increased confidence
Alden, Kieran; Andrews, Paul S.; Polack, Fiona A. C.; Veiga-Fernandes, Henrique; Coles, Mark C.; Timmis, Jon
2015-01-01
The application of computational and mathematical modelling to explore the mechanics of biological systems is becoming prevalent. To significantly impact biological research, notably in developing novel therapeutics, it is critical that the model adequately represents the captured system. Confidence in adopting in silico approaches can be improved by applying a structured argumentation approach, alongside model development and results analysis. We propose an approach based on argumentation from safety-critical systems engineering, where a system is subjected to a stringent analysis of compliance against identified criteria. We show its use in examining the biological information upon which a model is based, identifying model strengths, highlighting areas requiring additional biological experimentation and providing documentation to support model publication. We demonstrate our use of structured argumentation in the development of a model of lymphoid tissue formation, specifically Peyer's Patches. The argumentation structure is captured using Artoo (www.york.ac.uk/ycil/software/artoo), our Web-based tool for constructing fitness-for-purpose arguments, using a notation based on the safety-critical goal structuring notation. We show how argumentation helps in making the design and structured analysis of a model transparent, capturing the reasoning behind the inclusion or exclusion of each biological feature and recording assumptions, as well as pointing to evidence supporting model-derived conclusions. PMID:25589574
Using argument notation to engineer biological simulations with increased confidence.
Alden, Kieran; Andrews, Paul S; Polack, Fiona A C; Veiga-Fernandes, Henrique; Coles, Mark C; Timmis, Jon
2015-03-06
The application of computational and mathematical modelling to explore the mechanics of biological systems is becoming prevalent. To significantly impact biological research, notably in developing novel therapeutics, it is critical that the model adequately represents the captured system. Confidence in adopting in silico approaches can be improved by applying a structured argumentation approach, alongside model development and results analysis. We propose an approach based on argumentation from safety-critical systems engineering, where a system is subjected to a stringent analysis of compliance against identified criteria. We show its use in examining the biological information upon which a model is based, identifying model strengths, highlighting areas requiring additional biological experimentation and providing documentation to support model publication. We demonstrate our use of structured argumentation in the development of a model of lymphoid tissue formation, specifically Peyer's Patches. The argumentation structure is captured using Artoo (www.york.ac.uk/ycil/software/artoo), our Web-based tool for constructing fitness-for-purpose arguments, using a notation based on the safety-critical goal structuring notation. We show how argumentation helps in making the design and structured analysis of a model transparent, capturing the reasoning behind the inclusion or exclusion of each biological feature and recording assumptions, as well as pointing to evidence supporting model-derived conclusions.
Modeling and Hazard Analysis Using STPA
NASA Astrophysics Data System (ADS)
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state(e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as FTA but STPA includes a broader set of potential scenarios including those in which no failures occur but the problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts that traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment(created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis and following the NASA standards for safety-critical systems, the results of our experimental application of STPA can be compared with these more traditional safety engineering approaches in terms of the problems identified and the resources required to use it.
ERIC Educational Resources Information Center
Anderson, Tiffoni
This module provides information on development and use of a Material Safety Data Sheet (MSDS) software program that seeks to link literacy skills education, safety training, and human-centered design. Section 1 discusses the development of the software program that helps workers understand the MSDSs that accompany the chemicals with which they…
Generic Safety Requirements for Developing Safe Insulin Pump Software
Zhang, Yi; Jetley, Raoul; Jones, Paul L; Ray, Arnab
2011-01-01
Background The authors previously introduced a highly abstract generic insulin infusion pump (GIIP) model that identified common features and hazards shared by most insulin pumps on the market. The aim of this article is to extend our previous work on the GIIP model by articulating safety requirements that address the identified GIIP hazards. These safety requirements can be validated by manufacturers, and may ultimately serve as a safety reference for insulin pump software. Together, these two publications can serve as a basis for discussing insulin pump safety in the diabetes community. Methods In our previous work, we established a generic insulin pump architecture that abstracts functions common to many insulin pumps currently on the market and near-future pump designs. We then carried out a preliminary hazard analysis based on this architecture that included consultations with many domain experts. Further consultation with domain experts resulted in the safety requirements used in the modeling work presented in this article. Results Generic safety requirements for the GIIP model are presented, as appropriate, in parameterized format to accommodate clinical practices or specific insulin pump criteria important to safe device performance. Conclusions We believe that there is considerable value in having the diabetes, academic, and manufacturing communities consider and discuss these generic safety requirements. We hope that the communities will extend and revise them, make them more representative and comprehensive, experiment with them, and use them as a means for assessing the safety of insulin pump software designs. One potential use of these requirements is to integrate them into model-based engineering (MBE) software development methods. We believe, based on our experiences, that implementing safety requirements using MBE methods holds promise in reducing design/implementation flaws in insulin pump development and evolutionary processes, therefore improving overall safety of insulin pump software. PMID:22226258
Healthcare software assurance.
Cooper, Jason G; Pauley, Keith A
2006-01-01
Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance discussed, and the necessity for enhancements to the current processes shall be highlighted.
Cooper, Jason G.; Pauley, Keith A.
2006-01-01
Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA’s software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance discussed, and the necessity for enhancements to the current processes shall be highlighted. PMID:17238324
Global Precipitation Measurement (GPM) Safety Inhibit Timeline Tool
NASA Technical Reports Server (NTRS)
Dion, Shirley
2012-01-01
The Global Precipitation Measurement (GPM) Observatory is a joint mission under the partnership by National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA), Japan. The NASA Goddard Space Flight Center (GSFC) has the lead management responsibility for NASA on GPM. The GPM program will measure precipitation on a global basis with sufficient quality, Earth coverage, and sampling to improve prediction of the Earth's climate, weather, and specific components of the global water cycle. As part of the development process, NASA built the spacecraft (built in-house at GSFC) and provided one instrument (GPM Microwave Imager (GMI) developed by Ball Aerospace) JAXA provided the launch vehicle (H2-A by MHI) and provided one instrument (Dual-Frequency Precipitation Radar (DPR) developed by NTSpace). Each instrument developer provided a safety assessment which was incorporated into the NASA GPM Safety Hazard Assessment. Inhibit design was reviewed for hazardous subsystems which included the High Gain Antenna System (HGAS) deployment, solar array deployment, transmitter turn on, propulsion system release, GMI deployment, and DPR radar turn on. The safety inhibits for these listed hazards are controlled by software. GPM developed a "pathfinder" approach for reviewing software that controls the electrical inhibits. This is one of the first GSFC in-house programs that extensively used software controls. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As part of this process a new tool "safety inhibit time line" was created for management of inhibits and their controls during spacecraft buildup and testing during 1& Tat GSFC and at the Range in Japan. In addition to understanding inhibits and controls during 1& T the tool allows the safety analyst to better communicate with others the changes in inhibit states with each phase of hardware and software testing. The tool was very useful for communicating compliance with safety requirements especially when working with a foreign partner.
Generating Safety-Critical PLC Code From a High-Level Application Software Specification
NASA Technical Reports Server (NTRS)
2008-01-01
The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is shown.
Designing the modern pump: engineering aspects of continuous subcutaneous insulin infusion software.
Welsh, John B; Vargas, Steven; Williams, Gary; Moberg, Sheldon
2010-06-01
Insulin delivery systems attracted the efforts of biological, mechanical, electrical, and software engineers well before they were commercially viable. The introduction of the first commercial insulin pump in 1983 represents an enduring milestone in the history of diabetes management. Since then, pumps have become much more than motorized syringes and have assumed a central role in diabetes management by housing data on insulin delivery and glucose readings, assisting in bolus estimation, and interfacing smoothly with humans and compatible devices. Ensuring the integrity of the embedded software that controls these devices is critical to patient safety and regulatory compliance. As pumps and related devices evolve, software engineers will face challenges and opportunities in designing pumps that are safe, reliable, and feature-rich. The pumps and related systems must also satisfy end users, healthcare providers, and regulatory authorities. In particular, pumps that are combined with glucose sensors and appropriate algorithms will provide the basis for increasingly safe and precise automated insulin delivery-essential steps to developing a fully closed-loop system.
The Legacy of Space Shuttle Flight Software
NASA Technical Reports Server (NTRS)
Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.
2011-01-01
The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.
An Analysis of Mission Critical Computer Software in Naval Aviation
1991-03-01
No. Task No. Work Unit Accesion Number 11. TITLE (Include Security Classification) AN ANALYSIS OF MISSION CRITICAL COMPUTER SOFTWARE IN NAVAL AVIATION...software development schedules were sustained without a milestone change being made. Also, software that was released to the fleet had no major...fleet contain any major defects? This research has revealed that only about half of the original software development schedules were sustained without a
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes Used in... revised regulatory guide (RG), revision 1 of RG 1.173, ``Developing Software Life Cycle Processes for... Developing a Software Project Life Cycle Process,'' issued 2006, with the clarifications and exceptions as...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... Software Developers on the Technical Specifications for Common Formats for Patient Safety Data Collection... software developers can provide input on these technical specifications for the Common Formats Version 1.1... specifications, which provide direction to software developers that plan to implement the Common Formats...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... Software Developers on the Technical Specifications for Common Formats for Patient Safety Data Collection... designed as an interactive forum where PSOs and software developers can provide input on these technical... updated event descriptions, forms, and technical specifications for software developers. As an update to...
A Quantitative Study of Global Software Development Teams, Requirements, and Software Projects
ERIC Educational Resources Information Center
Parker, Linda L.
2016-01-01
The study explored the relationship between global software development teams, effective software requirements, and stakeholders' perception of successful software development projects within the field of information technology management. It examined the critical relationship between Global Software Development (GSD) teams creating effective…
Software Design Improvements. Part 1; Software Benefits and Limitations
NASA Technical Reports Server (NTRS)
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
1997-01-01
Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines and the number and type of applications are growing at an exponential rate. The size of individual program has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The question is: how can this hardware and software be made more reliable? Also, how can software quality be improved? What methodology needs to be provided on large and small software products to improve the design and how can software be verified?
Verification and Validation Challenges for Adaptive Flight Control of Complex Autonomous Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2018-01-01
Autonomy of aerospace systems requires the ability for flight control systems to be able to adapt to complex uncertain dynamic environment. In spite of the five decades of research in adaptive control, the fact still remains that currently no adaptive control system has ever been deployed on any safety-critical or human-rated production systems such as passenger transport aircraft. The problem lies in the difficulty with the certification of adaptive control systems since existing certification methods cannot readily be used for nonlinear adaptive control systems. Research to address the notion of metrics for adaptive control began to appear in the recent years. These metrics, if accepted, could pave a path towards certification that would potentially lead to the adoption of adaptive control as a future control technology for safety-critical and human-rated production systems. Development of certifiable adaptive control systems represents a major challenge to overcome. Adaptive control systems with learning algorithms will never become part of the future unless it can be proven that they are highly safe and reliable. Rigorous methods for adaptive control software verification and validation must therefore be developed to ensure that adaptive control system software failures will not occur, to verify that the adaptive control system functions as required, to eliminate unintended functionality, and to demonstrate that certification requirements imposed by regulatory bodies such as the Federal Aviation Administration (FAA) can be satisfied. This presentation will discuss some of the technical issues with adaptive flight control and related V&V challenges.
Software IV and V Research Priorities and Applied Program Accomplishments Within NASA
NASA Technical Reports Server (NTRS)
Blazy, Louis J.
2000-01-01
The mission of this research is to be world-class creators and facilitators of innovative, intelligent, high performance, reliable information technologies that enable NASA missions to (1) increase software safety and quality through error avoidance, early detection and resolution of errors, by utilizing and applying empirically based software engineering best practices; (2) ensure customer software risks are identified and/or that requirements are met and/or exceeded; (3) research, develop, apply, verify, and publish software technologies for competitive advantage and the advancement of science; and (4) facilitate the transfer of science and engineering data, methods, and practices to NASA, educational institutions, state agencies, and commercial organizations. The goals are to become a national Center Of Excellence (COE) in software and system independent verification and validation, and to become an international leading force in the field of software engineering for improving the safety, quality, reliability, and cost performance of software systems. This project addresses the following problems: Ensure safety of NASA missions, ensure requirements are met, minimize programmatic and technological risks of software development and operations, improve software quality, reduce costs and time to delivery, and improve the science of software engineering
Software Innovation in a Mission Critical Environment
NASA Technical Reports Server (NTRS)
Fredrickson, Steven
2015-01-01
Operating in mission-critical environments requires trusted solutions, and the preference for "tried and true" approaches presents a potential barrier to infusing innovation into mission-critical systems. This presentation explores opportunities to overcome this barrier in the software domain. It outlines specific areas of innovation in software development achieved by the Johnson Space Center (JSC) Engineering Directorate in support of NASA's major human spaceflight programs, including International Space Station, Multi-Purpose Crew Vehicle (Orion), and Commercial Crew Programs. Software engineering teams at JSC work with hardware developers, mission planners, and system operators to integrate flight vehicles, habitats, robotics, and other spacecraft elements for genuinely mission critical applications. The innovations described, including the use of NASA Core Flight Software and its associated software tool chain, can lead to software that is more affordable, more reliable, better modelled, more flexible, more easily maintained, better tested, and enabling of automation.
Verification and Validation for Flight-Critical Systems (VVFCS)
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Jacobsen, Robert A.
2010-01-01
On March 31, 2009 a Request for Information (RFI) was issued by NASA s Aviation Safety Program to gather input on the subject of Verification and Validation (V & V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academic (26%), small & large industry (47%) and government agency (27%).
Mission and Safety Critical (MASC) plans for the MASC Kernel simulation
NASA Technical Reports Server (NTRS)
1991-01-01
This report discusses a prototype for Mission and Safety Critical (MASC) kernel simulation which explains the intended approach and how the simulation will be used. Smalltalk is chosen for the simulation because of usefulness in quickly building working models of the systems and its object-oriented approach to software. A scenario is also introduced to give details about how the simulation works. The eventual system will be a fully object-oriented one implemented in Ada via Dragoon. To implement the simulation, a scenario using elements typical of those in the Space Station, was created.
NASA's Approach to Software Assurance
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2015-01-01
NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. An umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as it's increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.
Automated High-Speed Video Detection of Small-Scale Explosives Testing
NASA Astrophysics Data System (ADS)
Ford, Robert; Guymon, Clint
2013-06-01
Small-scale explosives sensitivity test data is used to evaluate hazards of processing, handling, transportation, and storage of energetic materials. Accurate test data is critical to implementation of engineering and administrative controls for personnel safety and asset protection. Operator mischaracterization of reactions during testing contributes to either excessive or inadequate safety protocols. Use of equipment and associated algorithms to aid the operator in reaction determination can significantly reduce operator error. Safety Management Services, Inc. has developed an algorithm to evaluate high-speed video images of sparks from an ESD (Electrostatic Discharge) machine to automatically determine whether or not a reaction has taken place. The algorithm with the high-speed camera is termed GoDetect (patent pending). An operator assisted version for friction and impact testing has also been developed where software is used to quickly process and store video of sensitivity testing. We have used this method for sensitivity testing with multiple pieces of equipment. We present the fundamentals of GoDetect and compare it to other methods used for reaction detection.
Safety Characteristics in System Application Software for Human Rated Exploration
NASA Technical Reports Server (NTRS)
Mango, E. J.
2016-01-01
NASA and its industry and international partners are embarking on a bold and inspiring development effort to design and build an exploration class space system. The space system is made up of the Orion system, the Space Launch System (SLS) and the Ground Systems Development and Operations (GSDO) system. All are highly coupled together and dependent on each other for the combined safety of the space system. A key area of system safety focus needs to be in the ground and flight application software system (GFAS). In the development, certification and operations of GFAS, there are a series of safety characteristics that define the approach to ensure mission success. This paper will explore and examine the safety characteristics of the GFAS development.
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principle focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission and safety critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifestyle issues of three environments: host, integration, and target.
NASA Astrophysics Data System (ADS)
Silva, N.; Esper, A.
2012-01-01
The work presented in this article represents the results of applying RAMS analysis to a critical space control system, both at system and software levels. The system level RAMS analysis allowed the assignment of criticalities to the high level components, which was further refined by a tailored software level RAMS analysis. The importance of the software level RAMS analysis in the identification of new failure modes and its impact on the system level RAMS analysis is discussed. Recommendations of changes in the software architecture have also been proposed in order to reduce the criticality of the SW components to an acceptable minimum. The dependability analysis was performed in accordance to ECSS-Q-ST-80, which had to be tailored and complemented in some aspects. This tailoring will also be detailed in the article and lessons learned from the application of this tailoring will be shared, stating the importance to space systems safety evaluations. The paper presents the applied techniques, the relevant results obtained, the effort required for performing the tasks and the planned strategy for ROI estimation, as well as the soft skills required and acquired during these activities.
SSME digital control design characteristics
NASA Technical Reports Server (NTRS)
Mitchell, W. T.; Searle, R. F.
1985-01-01
To protect against a latent programming error (software fault) existing in an untried branch combination that would render the space shuttle out of control in a critical flight phase, the Backup Flight System (BFS) was chartered to provide a safety alternative. The BFS is designed to operate in critical flight phases (ascent and descent) by monitoring the activities of the space shuttle flight subsystems that are under control of the primary flight software (PFS) (e.g., navigation, crew interface, propulsion), then, upon manual command by the flightcrew, to assume control of the space shuttle and deliver it to a noncritical flight condition (safe orbit or touchdown). The problems associated with the selection of the PFS/BFS system architecture, the internal BFS architecture, the fault tolerant software mechanisms, and the long term BFS utility are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Pointer, William David; Sieger, Matt
2016-04-01
The goal of this review is to enable application of codes or software packages for safety assessment of advanced sodium-cooled fast reactor (SFR) designs. To address near-term programmatic needs, the authors have focused on two objectives. First, the authors have focused on identification of requirements for software QA that must be satisfied to enable the application of software to future safety analyses. Second, the authors have collected best practices applied by other code development teams to minimize cost and time of initial code qualification activities and to recommend a path to the stated goal.
NASA Technical Reports Server (NTRS)
Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola
2005-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
2013-09-01
to a XML file, a code that Bonine in [21] developed for a similar purpose. Using the StateRover XML log file import tool, we are able to generate a...C. Bonine , M. Shing, T.W. Otani, “Computer-aided process and tools for mobile software acquisition,” NPS, Monterey, CA, Tech. Rep. NPS-SE-13...C10P07R05– 075, 2013. [21] C. Bonine , “Specification, validation and verification of mobile application behavior,” M.S. thesis, Dept. Comp. Science, NPS
Argonne News Brief: First Sensors for Chicago’s “City Fitness Tracker”
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This week in Chicago, the Array of Things team installed its first 50 nodes on city streets to kick off the groundbreaking urban sensing project. Array of Things is designed as a “fitness tracker” for the city, collecting new streams of data on Chicago’s environment, infrastructure, and activity. This hyper-local, open data can help researchers, city officials, and software developers study and address critical city challenges, such as preventing urban flooding, improving traffic safety and air quality, and assessing the nature and impact of climate change.
NASA Astrophysics Data System (ADS)
Kostarev, S. N.; Sereda, T. G.
2017-10-01
The article is concerned with the problem of transmitting data from telemetric devices in order to provide automated systems for the electric drive control of oil-extracting equipment. The paper given discusses the possibility to use a logging cable as means of signal transfer. Simulation models of signaling and relay-contact circuits for monitoring critical drive parameters are under discussion. The authors suggest applying the operator ⊕ (excluding OR) to increase anti-jamming effects and to get a more reliable noise filter.
Blagec, Kathrin; Jungwirth, David; Haluza, Daniela; Samwald, Matthias
2018-01-01
Medical device regulations which aim to ensure safety standards do not only apply to hardware devices but also to standalone medical software, e.g. mobile apps. To explore the effects of these regulations on the development and distribution of medical standalone software. We invited a convenience sample of 130 domain experts to participate in an online survey about the impact of current regulations on the development and distribution of medical standalone software. 21 respondents completed the questionnaire. Participants reported slight positive effects on usability, reliability, and data security of their products, whereas the ability to modify already deployed software and customization by end users were negatively impacted. The additional time and costs needed to go through the regulatory process were perceived as the greatest obstacles in developing and distributing medical software. Further research is needed to compare positive effects on software quality with negative impacts on market access and innovation. Strategies for avoiding over-regulation while still ensuring safety standards need to be devised.
DOT National Transportation Integrated Search
2013-04-01
Columns are considered the most critical elements in structures. The unconfined analysis for columns is well established in the literature. Structural design codes dictate reduction factors for safety. It wasnt until very recently that design spec...
Models Extracted from Text for System-Software Safety Analyses
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2010-01-01
This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.
DOT National Transportation Integrated Search
2009-01-01
This booklet provides an overview of SafetyAnalyst. SafetyAnalyst is a set of software tools under development to help State and local highway agencies advance their programming of site-specific safety improvements. SafetyAnalyst will incorporate sta...
Integrated Software Health Management for Aircraft GN and C
NASA Technical Reports Server (NTRS)
Schumann, Johann; Mengshoel, Ole
2011-01-01
Modern aircraft rely heavily on dependable operation of many safety-critical software components. Despite careful design, verification and validation (V&V), on-board software can fail with disastrous consequences if it encounters problematic software/hardware interaction or must operate in an unexpected environment. We are using a Bayesian approach to monitor the software and its behavior during operation and provide up-to-date information about the health of the software and its components. The powerful reasoning mechanism provided by our model-based Bayesian approach makes reliable diagnosis of the root causes possible and minimizes the number of false alarms. Compilation of the Bayesian model into compact arithmetic circuits makes SWHM feasible even on platforms with limited CPU power. We show initial results of SWHM on a small simulator of an embedded aircraft software system, where software and sensor faults can be injected.
NASA Technical Reports Server (NTRS)
Skoog, Mark A.
2016-01-01
NASAs Armstrong Flight Research Center has been engaged in the development of highly automatic safety systems for aviation since the mid 80s. For the past three years under Seedling and Center Innovation funding this work has moved toward the development of a software architecture applicable to autonomous safety. This work is now broadening and accelerating to address the airworthiness issues surrounding making a case for trustworthy autonomy. This software architecture is called the expandable variable-autonomy architecture (EVAA) and utilizes a run-time assurance approach to safety assurance.
Integrated Systems Health Management for Space Exploration
NASA Technical Reports Server (NTRS)
Uckun, Serdar
2005-01-01
Integrated Systems Health Management (ISHM) is a system engineering discipline that addresses the design, development, operation, and lifecycle management of components, subsystems, vehicles, and other operational systems with the purpose of maintaining nominal system behavior and function and assuring mission safety and effectiveness under off-nominal conditions. NASA missions are often conducted in extreme, unfamiliar environments of space, using unique experimental spacecraft. In these environments, off-nominal conditions can develop with the potential to rapidly escalate into mission- or life-threatening situations. Further, the high visibility of NASA missions means they are always characterized by extraordinary attention to safety. ISHM is a critical element of risk mitigation, mission safety, and mission assurance for exploration. ISHM enables: In-space maintenance and repair; a) Autonomous (and automated) launch abort and crew escape capability; b) Efficient testing and checkout of ground and flight systems; c) Monitoring and trending of ground and flight system operations and performance; d) Enhanced situational awareness and control for ground personnel and crew; e) Vehicle autonomy (self-sufficiency) in responding to off-nominal conditions during long-duration and distant exploration missions; f) In-space maintenance and repair; and g) Efficient ground processing of reusable systems. ISHM concepts and technologies may be applied to any complex engineered system such as transportation systems, orbital or planetary habitats, observatories, command and control systems, life support systems, safety-critical software, and even the health of flight crews. As an overarching design and operational principle implemented at the system-of-systems level, ISHM holds substantial promise in terms of affordability, safety, reliability, and effectiveness of space exploration missions.
Considering Object Oriented Technology in Aviation Applications
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Holloway, C. Michael
2003-01-01
Few developers of commercial aviation software products are using object-oriented technology (OOT), despite its popularity in some other industries. Safety concerns about using OOT in critical applications, uncertainty about how to comply with regulatory requirements, and basic conservatism within the aviation community have been factors behind this caution. The Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) have sponsored research to investigate and workshops to discuss safety and certification concerns about OOT and to develop recommendations for safe use. Two Object Oriented Technology in Aviation (OOTiA) workshops have been held and numerous issues and comments about the effect of OOT features and languages have been collected. This paper gives a high level overview of the OOTiA project, and discusses selected specific results from the March 2003 workshop. In particular, results in the form of questions to consider before making the decision to use OOT are presented.
Verification and Validation of Flight-Critical Systems
NASA Technical Reports Server (NTRS)
Brat, Guillaume
2010-01-01
For the first time in many years, the NASA budget presented to congress calls for a focused effort on the verification and validation (V&V) of complex systems. This is mostly motivated by the results of the VVFCS (V&V of Flight-Critical Systems) study, which should materialize as a a concrete effort under the Aviation Safety program. This talk will present the results of the study, from requirements coming out of discussions with the FAA and the Joint Planning and Development Office (JPDO) to technical plan addressing the issue, and its proposed current and future V&V research agenda, which will be addressed by NASA Ames, Langley, and Dryden as well as external partners through NASA Research Announcements (NRA) calls. This agenda calls for pushing V&V earlier in the life cycle and take advantage of formal methods to increase safety and reduce cost of V&V. I will present the on-going research work (especially the four main technical areas: Safety Assurance, Distributed Systems, Authority and Autonomy, and Software-Intensive Systems), possible extensions, and how VVFCS plans on grounding the research in realistic examples, including an intended V&V test-bench based on an Integrated Modular Avionics (IMA) architecture and hosted by Dryden.
Requirements for a multifunctional code architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiihonen, O.; Juslin, K.
1997-07-01
The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of the traditional nuclear safety analysis software. The affordable computing power on the safety analysts table by far exceeds the possibilities offered to him/her ten years ago. At the same time the features of everyday office software tend to set standards to the way the input data and calculational results aremore » managed.« less
The Department of Energy Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Felty, James R.
2005-05-01
This paper broadly covers key events and activities from which the Department of Energy Nuclear Criticality Safety Program (NCSP) evolved. The NCSP maintains fundamental infrastructure that supports operational criticality safety programs. This infrastructure includes continued development and maintenance of key calculational tools, differential and integral data measurements, benchmark compilation, development of training resources, hands-on training, and web-based systems to enhance information preservation and dissemination. The NCSP was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 97-2, Criticality Safety, and evolved from a predecessor program, the Nuclear Criticality Predictability Program, that was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 93-2, The Need for Critical Experiment Capability. This paper also discusses the role Dr. Sol Pearlstein played in helping the Department of Energy lay the foundation for a robust and enduring criticality safety infrastructure.
SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salomons, G; Kelly, D
Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations, that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes thatmore » the users have a perfect knowledge of how and when to apply the software and that if the software operates correctly under one set of conditions it will operate correctly under all conditions Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge-rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.« less
NASA Software Assurance's Roles in Research and Technology
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2010-01-01
This slide presentation reviews the interactions between the scientist and engineers doing research and technology and the software developers and others who are doing software assurance. There is a discussion of the role of the Safety and Mission Assurance (SMA) in developing software to be used for research and technology, and the importance of this role as the technology moves to the higher levels of the technology readiness levels (TRLs). There is also a call to change the way the development of software is developed.
Formal verification of software-based medical devices considering medical guidelines.
Daw, Zamira; Cleaveland, Rance; Vetter, Marcus
2014-01-01
Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements without considering the interaction that these elements have with other devices as well as the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of the software-based medical devices. Medical devices are developed using the model-driven method deterministic models for signal processing of embedded systems (DMOSES). This method uses unified modeling language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model-checker. For this purpose, a formalization approach for the UML models using timed automaton (TA) is presented. A set of requirements is verified by the proposed approach for the navigation-guided biopsy. This shows the capability for identifying errors or optimization points both in the workflow and in the system design of the navigation device. In addition to the above, an open source eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment using clinical workflows as one UML diagram. Additionally, the system design can be formally verified automatically.
Virginio, Luiz A; Ricarte, Ivan Luiz Marques
2015-01-01
Although Electronic Health Records (EHR) can offer benefits to the health care process, there is a growing body of evidence that these systems can also incur risks to patient safety when developed or used improperly. This work is a literature review to identify these risks from a software quality perspective. Therefore, the risks were classified based on the ISO/IEC 25010 software quality model. The risks identified were related mainly to the characteristics of "functional suitability" (i.e., software bugs) and "usability" (i.e., interface prone to user error). This work elucidates the fact that EHR quality problems can adversely affect patient safety, resulting in errors such as incorrect patient identification, incorrect calculation of medication dosages, and lack of access to patient data. Therefore, the risks presented here provide the basis for developers and EHR regulating bodies to pay attention to the quality aspects of these systems that can result in patient harm.
Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults
NASA Technical Reports Server (NTRS)
Hamill, Maggie; Goseva-Popstojanova, Katerina
2016-01-01
Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
NASA's Aviation Safety and Modeling Project
NASA Technical Reports Server (NTRS)
Chidester, Thomas R.; Statler, Irving C.
2006-01-01
The Aviation Safety Monitoring and Modeling (ASMM) Project of NASA's Aviation Safety program is cultivating sources of data and developing automated computer hardware and software to facilitate efficient, comprehensive, and accurate analyses of the data collected from large, heterogeneous databases throughout the national aviation system. The ASMM addresses the need to provide means for increasing safety by enabling the identification and correcting of predisposing conditions that could lead to accidents or to incidents that pose aviation risks. A major component of the ASMM Project is the Aviation Performance Measuring System (APMS), which is developing the next generation of software tools for analyzing and interpreting flight data.
Research on Occupational Safety, Health Management and Risk Control Technology in Coal Mines.
Zhou, Lu-Jie; Cao, Qing-Gui; Yu, Kai; Wang, Lin-Lin; Wang, Hai-Bin
2018-04-26
This paper studies the occupational safety and health management methods as well as risk control technology associated with the coal mining industry, including daily management of occupational safety and health, identification and assessment of risks, early warning and dynamic monitoring of risks, etc.; also, a B/S mode software (Geting Coal Mine, Jining, Shandong, China), i.e., Coal Mine Occupational Safety and Health Management and Risk Control System, is developed to attain the aforementioned objectives, namely promoting the coal mine occupational safety and health management based on early warning and dynamic monitoring of risks. Furthermore, the practical effectiveness and the associated pattern for applying this software package to coal mining is analyzed. The study indicates that the presently developed coal mine occupational safety and health management and risk control technology and the associated software can support the occupational safety and health management efforts in coal mines in a standardized and effective manner. It can also control the accident risks scientifically and effectively; its effective implementation can further improve the coal mine occupational safety and health management mechanism, and further enhance the risk management approaches. Besides, its implementation indicates that the occupational safety and health management and risk control technology has been established based on a benign cycle involving dynamic feedback and scientific development, which can provide a reliable assurance to the safe operation of coal mines.
Research on Occupational Safety, Health Management and Risk Control Technology in Coal Mines
Zhou, Lu-jie; Cao, Qing-gui; Yu, Kai; Wang, Lin-lin; Wang, Hai-bin
2018-01-01
This paper studies the occupational safety and health management methods as well as risk control technology associated with the coal mining industry, including daily management of occupational safety and health, identification and assessment of risks, early warning and dynamic monitoring of risks, etc.; also, a B/S mode software (Geting Coal Mine, Jining, Shandong, China), i.e., Coal Mine Occupational Safety and Health Management and Risk Control System, is developed to attain the aforementioned objectives, namely promoting the coal mine occupational safety and health management based on early warning and dynamic monitoring of risks. Furthermore, the practical effectiveness and the associated pattern for applying this software package to coal mining is analyzed. The study indicates that the presently developed coal mine occupational safety and health management and risk control technology and the associated software can support the occupational safety and health management efforts in coal mines in a standardized and effective manner. It can also control the accident risks scientifically and effectively; its effective implementation can further improve the coal mine occupational safety and health management mechanism, and further enhance the risk management approaches. Besides, its implementation indicates that the occupational safety and health management and risk control technology has been established based on a benign cycle involving dynamic feedback and scientific development, which can provide a reliable assurance to the safe operation of coal mines. PMID:29701715
SafetyAnalyst Testing and Implementation
DOT National Transportation Integrated Search
2009-03-01
SafetyAnalyst is a software tool developed by the Federal Highway Administration to assist state and local transportation agencies on analyzing safety data and managing their roadway safety programs. This research report documents the major tasks acc...
A Cyber Security Self-Assessment Method for Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glantz, Clifford S.; Coles, Garill A.; Bass, Robert B.
2004-11-01
A cyber security self-assessment method (the Method) has been developed by Pacific Northwest National Laboratory. The development of the Method was sponsored and directed by the U.S. Nuclear Regulatory Commission. Members of the Nuclear Energy Institute Cyber Security Task Force also played a substantial role in developing the Method. The Method's structured approach guides nuclear power plants in scrutinizing their digital systems, assessing the potential consequences to the plant of a cyber exploitation, identifying vulnerabilities, estimating cyber security risks, and adopting cost-effective protective measures. The focus of the Method is on critical digital assets. A critical digital asset is amore » digital device or system that plays a role in the operation, maintenance, or proper functioning of a critical system (i.e., a plant system that can impact safety, security, or emergency preparedness). A critical digital asset may have a direct or indirect connection to a critical system. Direct connections include both wired and wireless communication pathways. Indirect connections include sneaker-net pathways by which software or data are manually transferred from one digital device to another. An indirect connection also may involve the use of instructions or data stored on a critical digital asset to make adjustments to a critical system. The cyber security self-assessment begins with the formation of an assessment team, and is followed by a six-stage process.« less
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-09-18
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.
Non-developmental item computer systems and the malicious software threat
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
The following subject areas are covered: a DOD development system - the Army Secure Operating System; non-development commercial computer systems; security, integrity, and assurance of service (SI and A); post delivery SI and A and malicious software; computer system unique attributes; positive feedback to commercial computer systems vendors; and NDI (Non-Development Item) computers and software safety.
Plutonium Critical Mass Curve Comparison to Mass at Upper Subcritical Limit (USL) Using Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alwin, Jennifer Louise; Zhang, Ning
Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the MCNP ® Monte Carlo radiation transport package. Standard approaches to validation rely on the selection of benchmarks based upon expert judgment. Whisper uses sensitivity/uncertainty (S/U) methods to select relevant benchmarks to a particular application or set of applications being analyzed. Using these benchmarks, Whisper computes a calculational margin. Whisper attempts to quantify the margin of subcriticality (MOS) from errors in software and uncertainties in nuclear data. The combination of the Whisper-derived calculational margin and MOS comprise the baseline upper subcritical limit (USL), tomore » which an additional margin may be applied by the nuclear criticality safety analyst as appropriate to ensure subcriticality. A series of critical mass curves for plutonium, similar to those found in Figure 31 of LA-10860-MS, have been generated using MCNP6.1.1 and the iterative parameter study software, WORM_Solver. The baseline USL for each of the data points of the curves was then computed using Whisper 1.1. The USL was then used to determine the equivalent mass for plutonium metal-water system. ANSI/ANS-8.1 states that it is acceptable to use handbook data, such as the data directly from the LA-10860-MS, as it is already considered validated (Section 4.3 4) “Use of subcritical limit data provided in ANSI/ANS standards or accepted reference publications does not require further validation.”). This paper attempts to take a novel approach to visualize traditional critical mass curves and allows comparison with the amount of mass for which the k eff is equal to the USL (calculational margin + margin of subcriticality). However, the intent is to plot the critical mass data along with USL, not to suggest that already accepted handbook data should have new and more rigorous requirements for validation.« less
Modeling Guidelines for Code Generation in the Railway Signaling Context
NASA Technical Reports Server (NTRS)
Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo
2009-01-01
Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tools components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board)[3] is a well established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines have been published in 2001 and afterwords revisited in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these recommendations has been performed for the automotive control systems domain in order to enforce code generation [7]. The MAAB guidelines have been found profitable also in the aerospace/avionics sector [1] and they have been adopted by the MathWorks Aerospace Leadership Council (MALC). General Electric Transportation Systems (GETS) is a well known railway signaling systems manufacturer leading in Automatic Train Protection (ATP) systems technology. Inside an effort of adopting formal methods within its own development process, GETS decided to introduce system modeling by means of the MathWorks tools [2], and in 2008 chose to move to code generation. This article reports the experience performed by GETS in developing its own modeling standard through customizing the MAAB rules for the railway signaling domain and shows the result of this experience with a successful product development story.
NASA Astrophysics Data System (ADS)
Black, Randy; Bai, Haowei; Michalicek, Andrew; Shelton, Blaine; Villela, Mark
2008-01-01
Currently, autonomy in space applications is limited by a variety of technology gaps. Innovative application of wireless technology and avionics architectural principles drawn from the Orion crew exploration vehicle provide solutions for several of these gaps. The Vision for Space Exploration envisions extensive use of autonomous systems. Economic realities preclude continuing the level of operator support currently required of autonomous systems in space. In order to decrease the number of operators, more autonomy must be afforded to automated systems. However, certification authorities have been notoriously reluctant to certify autonomous software in the presence of humans or when costly missions may be jeopardized. The Orion avionics architecture, drawn from advanced commercial aircraft avionics, is based upon several architectural principles including partitioning in software. Robust software partitioning provides "brick wall" separation between software applications executing on a single processor, along with controlled data movement between applications. Taking advantage of these attributes, non-deterministic applications can be placed in one partition and a "Safety" application created in a separate partition. This "Safety" partition can track the position of astronauts or critical equipment and prevent any unsafe command from executing. Only the Safety partition need be certified to a human rated level. As a proof-of-concept demonstration, Honeywell has teamed with the Ultra WideBand (UWB) Working Group at NASA Johnson Space Center to provide tracking of humans, autonomous systems, and critical equipment. Using UWB the NASA team can determine positioning to within less than one inch resolution, allowing a Safety partition to halt operation of autonomous systems in the event that an unplanned collision is imminent. Another challenge facing autonomous systems is the coordination of multiple autonomous agents. Current approaches address the issue as one of networking and coordination of multiple independent units, each with its own mission. As a proof-of-concept Honeywell is developing and testing various algorithms that lead to a deterministic, fault tolerant, reliable wireless backplane. Just as advanced avionics systems control several subsystems, actuators, sensors, displays, etc.; a single "master" autonomous agent (or base station computer) could control multiple autonomous systems. The problem is simplified to controlling a flexible body consisting of several sensors and actuators, rather than one of coordinating multiple independent units. By filling technology gaps associated with space based autonomous system, wireless technology and Orion architectural principles provide the means for decreasing operational costs and simplifying problems associated with collaboration of multiple autonomous systems.
Why are Formal Methods Not Used More Widely?
NASA Technical Reports Server (NTRS)
Knight, John C.; DeJong, Colleen L.; Gibble, Matthew S.; Nakano, Luis G.
1997-01-01
Despite extensive development over many years and significant demonstrated benefits, formal methods remain poorly accepted by industrial practitioners. Many reasons have been suggested for this situation such as a claim that they extent the development cycle, that they require difficult mathematics, that inadequate tools exist, and that they are incompatible with other software packages. There is little empirical evidence that any of these reasons is valid. The research presented here addresses the question of why formal methods are not used more widely. The approach used was to develop a formal specification for a safety-critical application using several specification notations and assess the results in a comprehensive evaluation framework. The results of the experiment suggests that there remain many impediments to the routine use of formal methods.
Statistical modelling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1991-01-01
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
Flight telerobotic servicer legacy
NASA Astrophysics Data System (ADS)
Shattuck, Paul L.; Lowrie, James W.
1992-11-01
The Flight Telerobotic Servicer (FTS) was developed to enhance and provide a safe alternative to human presence in space. The first step for this system was a precursor development test flight (DTF-1) on the Space Shuttle. DTF-1 was to be a pathfinder for manned flight safety of robotic systems. The broad objectives of this mission were three-fold: flight validation of telerobotic manipulator (design, control algorithms, man/machine interfaces, safety); demonstration of dexterous manipulator capabilities on specific building block tasks; and correlation of manipulator performance in space with ground predictions. The DTF-1 system is comprised of a payload bay element (7-DOF manipulator with controllers, end-of-arm gripper and camera, telerobot body with head cameras and electronics module, task panel, and MPESS truss) and an aft flight deck element (force-reflecting hand controller, crew restraint, command and display panel and monitors). The approach used to develop the DTF-1 hardware, software and operations involved flight qualification of components from commercial, military, space, and R controller, end-of-arm tooling, force/torque transducer) and the development of the telerobotic system for space applications. The system is capable of teleoperation and autonomous control (advances state of the art); reliable (two-fault tolerance); and safe (man-rated). Benefits from the development flight included space validation of critical telerobotic technologies and resolution of significant safety issues relating to telerobotic operations in the Shuttle bay or in the vicinity of other space assets. This paper discusses the lessons learned and technology evolution that stemmed from developing and integrating a dexterous robot into a manned system, the Space Shuttle. Particular emphasis is placed on the safety and reliability requirements for a man-rated system as these are the critical factors which drive the overall system architecture. Other topics focused on include: task requirements and operational concepts for servicing and maintenance of space platforms; origins of technology for dexterous robotic systems; issues associated with space qualification of components; and development of the industrial base to support space robotics.
Effective Software Engineering Leadership for Development Programs
ERIC Educational Resources Information Center
Cagle West, Marsha
2010-01-01
Software is a critical component of systems ranging from simple consumer appliances to complex health, nuclear, and flight control systems. The development of quality, reliable, and effective software solutions requires the incorporation of effective software engineering processes and leadership. Processes, approaches, and methodologies for…
A Case Study of Dynamic Response Analysis and Safety Assessment for a Suspended Monorail System.
Bao, Yulong; Li, Yongle; Ding, Jiajie
2016-11-10
A suspended monorail transit system is a category of urban rail transit, which is effective in alleviating traffic pressure and injury prevention. Meanwhile, with the advantages of low cost and short construction time, suspended monorail transit systems show vast potential for future development. However, the suspended monorail has not been systematically studied in China, and there is a lack of relevant knowledge and analytical methods. To ensure the health and reliability of a suspended monorail transit system, the driving safety of vehicles and structure dynamic behaviors when vehicles are running on the bridge should be analyzed and evaluated. Based on the method of vehicle-bridge coupling vibration theory, the finite element method (FEM) software ANSYS and multi-body dynamics software SIMPACK are adopted respectively to establish the finite element model for bridge and the multi-body vehicle. A co-simulation method is employed to investigate the vehicle-bridge coupling vibration for the transit system. The traffic operation factors, including train formation, track irregularity and tire stiffness, are incorporated into the models separately to analyze the bridge and vehicle responses. The results show that the coupling of dynamic effects of the suspended monorail system between vehicle and bridge are significant in the case studied, and it is strongly suggested to take necessary measures for vibration suppression. The simulation of track irregularity is a critical factor for its vibration safety, and the track irregularity of A-level road roughness negatively influences the system vibration safety.
A Case Study of Dynamic Response Analysis and Safety Assessment for a Suspended Monorail System
Bao, Yulong; Li, Yongle; Ding, Jiajie
2016-01-01
A suspended monorail transit system is a category of urban rail transit, which is effective in alleviating traffic pressure and injury prevention. Meanwhile, with the advantages of low cost and short construction time, suspended monorail transit systems show vast potential for future development. However, the suspended monorail has not been systematically studied in China, and there is a lack of relevant knowledge and analytical methods. To ensure the health and reliability of a suspended monorail transit system, the driving safety of vehicles and structure dynamic behaviors when vehicles are running on the bridge should be analyzed and evaluated. Based on the method of vehicle-bridge coupling vibration theory, the finite element method (FEM) software ANSYS and multi-body dynamics software SIMPACK are adopted respectively to establish the finite element model for bridge and the multi-body vehicle. A co-simulation method is employed to investigate the vehicle-bridge coupling vibration for the transit system. The traffic operation factors, including train formation, track irregularity and tire stiffness, are incorporated into the models separately to analyze the bridge and vehicle responses. The results show that the coupling of dynamic effects of the suspended monorail system between vehicle and bridge are significant in the case studied, and it is strongly suggested to take necessary measures for vibration suppression. The simulation of track irregularity is a critical factor for its vibration safety, and the track irregularity of A-level road roughness negatively influences the system vibration safety. PMID:27834923
Using Combined SFTA and SFMECA Techniques for Space Critical Software
NASA Astrophysics Data System (ADS)
Nicodemos, F. G.; Lahoz, C. H. N.; Abdala, M. A. D.; Saotome, O.
2012-01-01
This work addresses the combined Software Fault Tree Analysis (SFTA) and Software Failure Modes, Effects and Criticality Analysis (SFMECA) techniques applied to space critical software of satellite launch vehicles. The combined approach is under research as part of the Verification and Validation (V&V) efforts to increase software dependability and as future application in other projects under development at Instituto de Aeronáutica e Espaço (IAE). The applicability of such approach was conducted on system software specification and applied to a case study based on the Brazilian Satellite Launcher (VLS). The main goal is to identify possible failure causes and obtain compensating provisions that lead to inclusion of new functional and non-functional system software requirements.
A rule-based system for real-time analysis of control systems
NASA Astrophysics Data System (ADS)
Larson, Richard R.; Millard, D. Edward
1992-10-01
An approach to automate the real-time analysis of flight critical health monitoring and system status is being developed and evaluated at the NASA Dryden Flight Research Facility. A software package was developed in-house and installed as part of the extended aircraft interrogation and display system. This design features a knowledge-base structure in the form of rules to formulate interpretation and decision logic of real-time data. This technique has been applied for ground verification and validation testing and flight testing monitoring where quick, real-time, safety-of-flight decisions can be very critical. In many cases post processing and manual analysis of flight system data are not required. The processing is described of real-time data for analysis along with the output format which features a message stack display. The development, construction, and testing of the rule-driven knowledge base, along with an application using the X-31A flight test program, are presented.
A rule-based system for real-time analysis of control systems
NASA Technical Reports Server (NTRS)
Larson, Richard R.; Millard, D. Edward
1992-01-01
An approach to automate the real-time analysis of flight critical health monitoring and system status is being developed and evaluated at the NASA Dryden Flight Research Facility. A software package was developed in-house and installed as part of the extended aircraft interrogation and display system. This design features a knowledge-base structure in the form of rules to formulate interpretation and decision logic of real-time data. This technique has been applied for ground verification and validation testing and flight testing monitoring where quick, real-time, safety-of-flight decisions can be very critical. In many cases post processing and manual analysis of flight system data are not required. The processing is described of real-time data for analysis along with the output format which features a message stack display. The development, construction, and testing of the rule-driven knowledge base, along with an application using the X-31A flight test program, are presented.
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts are presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Nuclear criticality safety staff training and qualifications at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monahan, S.P.; McLaughlin, T.P.
1997-05-01
Operations involving significant quantities of fissile material have been conducted at Los Alamos National Laboratory continuously since 1943. Until the advent of the Laboratory`s Nuclear Criticality Safety Committee (NCSC) in 1957, line management had sole responsibility for controlling criticality risks. From 1957 until 1961, the NCSC was the Laboratory body which promulgated policy guidance as well as some technical guidance for specific operations. In 1961 the Laboratory created the position of Nuclear Criticality Safety Office (in addition to the NCSC). In 1980, Laboratory management moved the Criticality Safety Officer (and one other LACEF staff member who, by that time, wasmore » also working nearly full-time on criticality safety issues) into the Health Division office. Later that same year the Criticality Safety Group, H-6 (at that time) was created within H-Division, and staffed by these two individuals. The training and education of these individuals in the art of criticality safety was almost entirely self-regulated, depending heavily on technical interactions between each other, as well as NCSC, LACEF, operations, other facility, and broader criticality safety community personnel. Although the Los Alamos criticality safety group has grown both in size and formality of operations since 1980, the basic philosophy that a criticality specialist must be developed through mentoring and self motivation remains the same. Formally, this philosophy has been captured in an internal policy, document ``Conduct of Business in the Nuclear Criticality Safety Group.`` There are no short cuts or substitutes in the development of a criticality safety specialist. A person must have a self-motivated personality, excellent communications skills, a thorough understanding of the principals of neutron physics, a safety-conscious and helpful attitude, a good perspective of real risk, as well as a detailed understanding of process operations and credible upsets.« less
Transit safety retrofit package development : applications requirements document.
DOT National Transportation Integrated Search
2014-05-01
This Application Requirements Document for the Transit Safety Retrofit Package (TRP) Development captures the system, hardware and software requirements towards fulfilling the technical objectives stated within the contract. To achieve the objective ...
Review of battery powered embedded systems design for mission-critical low-power applications
NASA Astrophysics Data System (ADS)
Malewski, Matthew; Cowell, David M. J.; Freear, Steven
2018-06-01
The applications and uses of embedded systems is increasingly pervasive. Mission and safety critical systems relying on embedded systems pose specific challenges. Embedded systems is a multi-disciplinary domain, involving both hardware and software. Systems need to be designed in a holistic manner so that they are able to provide the desired reliability and minimise unnecessary complexity. The large problem landscape means that there is no one solution that fits all applications of embedded systems. With the primary focus of these mission and safety critical systems being functionality and reliability, there can be conflicts with business needs, and this can introduce pressures to reduce cost at the expense of reliability and functionality. This paper examines the challenges faced by battery powered systems, and then explores at more general problems, and several real-world embedded systems.
NASA Technical Reports Server (NTRS)
2012-01-01
Topics include: Bioreactors Drive Advances in Tissue Engineering; Tooling Techniques Enhance Medical Imaging; Ventilator Technologies Sustain Critically Injured Patients; Protein Innovations Advance Drug Treatments, Skin Care; Mass Analyzers Facilitate Research on Addiction; Frameworks Coordinate Scientific Data Management; Cameras Improve Navigation for Pilots, Drivers; Integrated Design Tools Reduce Risk, Cost; Advisory Systems Save Time, Fuel for Airlines; Modeling Programs Increase Aircraft Design Safety; Fly-by-Wire Systems Enable Safer, More Efficient Flight; Modified Fittings Enhance Industrial Safety; Simulation Tools Model Icing for Aircraft Design; Information Systems Coordinate Emergency Management; Imaging Systems Provide Maps for U.S. Soldiers; High-Pressure Systems Suppress Fires in Seconds; Alloy-Enhanced Fans Maintain Fresh Air in Tunnels; Control Algorithms Charge Batteries Faster; Software Programs Derive Measurements from Photographs; Retrofits Convert Gas Vehicles into Hybrids; NASA Missions Inspire Online Video Games; Monitors Track Vital Signs for Fitness and Safety; Thermal Components Boost Performance of HVAC Systems; World Wind Tools Reveal Environmental Change; Analyzers Measure Greenhouse Gasses, Airborne Pollutants; Remediation Technologies Eliminate Contaminants; Receivers Gather Data for Climate, Weather Prediction; Coating Processes Boost Performance of Solar Cells; Analyzers Provide Water Security in Space and on Earth; Catalyst Substrates Remove Contaminants, Produce Fuel; Rocket Engine Innovations Advance Clean Energy; Technologies Render Views of Earth for Virtual Navigation; Content Platforms Meet Data Storage, Retrieval Needs; Tools Ensure Reliability of Critical Software; Electronic Handbooks Simplify Process Management; Software Innovations Speed Scientific Computing; Controller Chips Preserve Microprocessor Function; Nanotube Production Devices Expand Research Capabilities; Custom Machines Advance Composite Manufacturing; Polyimide Foams Offer Superior Insulation; Beam Steering Devices Reduce Payload Weight; Models Support Energy-Saving Microwave Technologies; Materials Advance Chemical Propulsion Technology; and High-Temperature Coatings Offer Energy Savings.
KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Stephen M
2008-09-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VImore » in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of using SCALE/KENO-VI for criticality analyses; the SCALE/KENO-VI manual provides information on the use of SCALE/KENO-VI and all its modules. The primer also contains an appendix with sample input files.« less
An assessment of space shuttle flight software development processes
NASA Technical Reports Server (NTRS)
1993-01-01
In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.
Identifying Contingency Requirements using Obstacle Analysis on an Unpiloted Aerial Vehicle
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.; Nelson, Stacy; Patterson-Hine, Ann; Frost, Chad R.; Tal, Doron
2005-01-01
This paper describes experience using Obstacle Analysis to identify contingency requirements on an unpiloted aerial vehicle. A contingency is an operational anomaly, and may or may not involve component failure. The challenges to this effort were: ( I ) rapid evolution of the system while operational, (2) incremental autonomy as capabilities were transferred from ground control to software control and (3) the eventual safety-criticality of such systems as they begin to fly over populated areas. The results reported here are preliminary but show that Obstacle Analysis helped (1) identify new contingencies that appeared as autonomy increased; (2) identify new alternatives for handling both previously known and new contingencies; and (3) investigate the continued validity of existing software requirements for contingency handling. Since many mobile, intelligent systems are built using a development process that poses the same challenges, the results appear to have applicability to other similar systems.
Improving the Agency's Software Acquisition Capability
NASA Technical Reports Server (NTRS)
Hankinson, Allen
2003-01-01
External development of software has oftc n led to unsatisfactory results and great frustration for the assurE 7ce community. Contracts frequently omit critical assuranc 4 processes or the right to oversee software development activitie: At a time when NASA depends more and more on software to in plement critical system functions, combination of three factors ex; cerbate this problem: I ) the ever-increasing trend to acquire rather than develop software in-house, 2) the trend toward performance based contracts, and 3) acquisition vehicles that only state softwar 2 requirements while leaving development standards and assur! ince methodologies up to the contractor. We propose to identify specific methods at d tools that NASA projects can use to mitigate the adverse el ects of the three problems. TWO broad classes of methoddt ols will be explored. The first will be those that provide NASA p ojects with insight and oversight into contractors' activities. The st cond will be those that help projects objectively assess, and thus i nprwe, their software acquisition capability. Of particular interest is the Software Engineering Institute's (SEI) Software Acqt isition Capability Maturity Model (SA-CMMO).
Space station: The role of software
NASA Technical Reports Server (NTRS)
Hall, D.
1985-01-01
Software will play a critical role throughout the Space Station Program. This presentation sets the stage and prompts participant interaction at the Software Issues Forum. The presentation is structured into three major topics: (1) an overview of the concept and status of the Space Station Program; (2) several charts designed to lay out the scope and role of software; and (3) information addressing the four specific areas selected for focus at the forum, specifically: software management, the software development environment, languages, and standards. NASA's current thinking is highlighted and some of the relevant critical issues are raised.
CRITICALITY SAFETY CONTROLS AND THE SAFETY BASIS AT PFP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kessler, S
2009-04-21
With the implementation of DOE Order 420.1B, Facility Safety, and DOE-STD-3007-2007, 'Guidelines for Preparing Criticality Safety Evaluations at Department of Energy Non-Reactor Nuclear Facilities', a new requirement was imposed that all criticality safety controls be evaluated for inclusion in the facility Documented Safety Analysis (DSA) and that the evaluation process be documented in the site Criticality Safety Program Description Document (CSPDD). At the Hanford site in Washington State the CSPDD, HNF-31695, 'General Description of the FH Criticality Safety Program', requires each facility develop a linking document called a Criticality Control Review (CCR) to document performance of these evaluations. Chapter 5,more » Appendix 5B of HNF-7098, Criticality Safety Program, provided an example of a format for a CCR that could be used in lieu of each facility developing its own CCR. Since the Plutonium Finishing Plant (PFP) is presently undergoing Deactivation and Decommissioning (D&D), new procedures are being developed for cleanout of equipment and systems that have not been operated in years. Existing Criticality Safety Evaluations (CSE) are revised, or new ones written, to develop the controls required to support D&D activities. Other Hanford facilities, including PFP, had difficulty using the basic CCR out of HNF-7098 when first implemented. Interpretation of the new guidelines indicated that many of the controls needed to be elevated to TSR level controls. Criterion 2 of the standard, requiring that the consequence of a criticality be examined for establishing the classification of a control, was not addressed. Upon in-depth review by PFP Criticality Safety staff, it was not clear that the programmatic interpretation of criterion 8C could be applied at PFP. Therefore, the PFP Criticality Safety staff decided to write their own CCR. The PFP CCR provides additional guidance for the evaluation team to use by clarifying the evaluation criteria in DOE-STD-3007-2007. In reviewing documents used in classifying controls for Nuclear Safety, it was noted that DOE-HDBK-1188, 'Glossary of Environment, Health, and Safety Terms', defines an Administrative Control (AC) in terms that are different than typically used in Criticality Safety. As part of this CCR, a new term, Criticality Administrative Control (CAC) was defined to clarify the difference between an AC used for criticality safety and an AC used for nuclear safety. In Nuclear Safety terms, an AC is a provision relating to organization and management, procedures, recordkeeping, assessment, and reporting necessary to ensure safe operation of a facility. A CAC was defined as an administrative control derived in a criticality safety analysis that is implemented to ensure double contingency. According to criterion 2 of Section IV, 'Linkage to the Documented Safety Analysis', of DOESTD-3007-2007, the consequence of a criticality should be examined for the purposes of classifying the significance of a control or component. HNF-PRO-700, 'Safety Basis Development', provides control selection criteria based on consequence and risk that may be used in the development of a Criticality Safety Evaluation (CSE) to establish the classification of a component as a design feature, as safety class or safety significant, i.e., an Engineered Safety Feature (ESF), or as equipment important to safety; or merely provides defense-in-depth. Similar logic is applied to the CACs. Criterion 8C of DOE-STD-3007-2007, as written, added to the confusion of using the basic CCR from HNF-7098. The PFP CCR attempts to clarify this criterion by revising it to say 'Programmatic commitments or general references to control philosophy (e.g., mass control or spacing control or concentration control as an overall control strategy for the process without specific quantification of individual limits) is included in the PFP DSA'. Table 1 shows the PFP methodology for evaluating CACs. This evaluation process has been in use since February of 2008 and has proven to be simple and effective. Each control identified in the applicable new/revised CSE is evaluated via the table. The results of this evaluation are documented in tables attached to the CCR as an appendix, for each CSE, to the base document.« less
NASA Astrophysics Data System (ADS)
Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián
2018-02-01
In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project so its verification should be addressed at an early development stage, so any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization ESA standards, testing this kind of critical software must cover 100% of the source code statement and decision paths. This leads to the complete testing of fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and enables to fulfill the exigent code coverage demands on the boot software.
RT-Syn: A real-time software system generator
NASA Technical Reports Server (NTRS)
Setliff, Dorothy E.
1992-01-01
This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs not only by reducing development time, but more importantly facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which are currently dominating life cycle costs.
Challenges and Demands on Automated Software Revision
NASA Technical Reports Server (NTRS)
Bonakdarpour, Borzoo; Kulkarni, Sandeep S.
2008-01-01
In the past three decades, automated program verification has undoubtedly been one of the most successful contributions of formal methods to software development. However, when verification of a program against a logical specification discovers bugs in the program, manual manipulation of the program is needed in order to repair it. Thus, in the face of existence of numerous unverified and un- certified legacy software in virtually any organization, tools that enable engineers to automatically verify and subsequently fix existing programs are highly desirable. In addition, since requirements of software systems often evolve during the software life cycle, the issue of incomplete specification has become a customary fact in many design and development teams. Thus, automated techniques that revise existing programs according to new specifications are of great assistance to designers, developers, and maintenance engineers. As a result, incorporating program synthesis techniques where an algorithm generates a program, that is correct-by-construction, seems to be a necessity. The notion of manual program repair described above turns out to be even more complex when programs are integrated with large collections of sensors and actuators in hostile physical environments in the so-called cyber-physical systems. When such systems are safety/mission- critical (e.g., in avionics systems), it is essential that the system reacts to physical events such as faults, delays, signals, attacks, etc, so that the system specification is not violated. In fact, since it is impossible to anticipate all possible such physical events at design time, it is highly desirable to have automated techniques that revise programs with respect to newly identified physical events according to the system specification.
Survey of Software Assurance Techniques for Highly Reliable Systems
NASA Technical Reports Server (NTRS)
Nelson, Stacy
2004-01-01
This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.
Thermal Performance Data Services (TPDS)
NASA Technical Reports Server (NTRS)
French, Richard T.; Wright, Michael J.
2013-01-01
Initiated as a NASA Engineering and Safety Center (NESC) assessment in 2009, the Thermal Performance Database (TPDB) was a response to the need for a centralized thermal performance data archive. The assessment was renamed Thermal Performance Data Services (TPDS) in 2012; the undertaking has had two fronts of activity: the development of a repository software application and the collection of historical thermal performance data sets from dispersed sources within the thermal performance community. This assessment has delivered a foundational tool on which additional features should be built to increase efficiency, expand the protection of critical Agency investments, and provide new discipline-advancing work opportunities. This report contains the information from the assessment.
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-01-01
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications. PMID:26393596
Space Tug avionics definition study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1975-01-01
A top down approach was used to identify, compile, and develop avionics functional requirements for all flight and ground operational phases. Such requirements as safety mission critical functions and criteria, minimum redundancy levels, software memory sizing, power for tug and payload, data transfer between payload, tug, shuttle, and ground were established. Those functional requirements that related to avionics support of a particular function were compiled together under that support function heading. This unique approach provided both organizational efficiency and traceability back to the applicable operational phase and event. Each functional requirement was then allocated to the appropriate subsystems and its particular characteristics were quantified.
Applications of Formal Methods to Specification and Safety of Avionics Software
NASA Technical Reports Server (NTRS)
Hoover, D. N.; Guaspari, David; Humenn, Polar
1996-01-01
This report treats several topics in applications of formal methods to avionics software development. Most of these topics concern decision tables, an orderly, easy-to-understand format for formally specifying complex choices among alternative courses of action. The topics relating to decision tables include: generalizations fo decision tables that are more concise and support the use of decision tables in a refinement-based formal software development process; a formalism for systems of decision tables with behaviors; an exposition of Parnas tables for users of decision tables; and test coverage criteria and decision tables. We outline features of a revised version of ORA's decision table tool, Tablewise, which will support many of the new ideas described in this report. We also survey formal safety analysis of specifications and software.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through themore » International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.« less
Matus, Bethany A; Bridges, Kayla M; Logomarsino, John V
2018-06-21
Individualized feeding care plans and safe handling of milk (human or formula) are critical in promoting growth, immune function, and neurodevelopment in the preterm infant. Feeding errors and disruptions or limitations to feeding processes in the neonatal intensive care unit (NICU) are associated with negative safety events. Feeding errors include contamination of milk and delivery of incorrect or expired milk and may result in adverse gastrointestinal illnesses. The purpose of this review was to evaluate the effect(s) of centralized milk preparation, use of trained technicians, use of bar code-scanning software, and collaboration between registered dietitians and registered nurses on feeding safety in the NICU. A systematic review of the literature was completed, and 12 articles were selected as relevant to search criteria. Study quality was evaluated using the Downs and Black scoring tool. An evaluation of human studies indicated that the use of centralized milk preparation, trained technicians, bar code-scanning software, and possible registered dietitian involvement decreased feeding-associated error in the NICU. A state-of-the-art NICU includes a centralized milk preparation area staffed by trained technicians, care supported by bar code-scanning software, and utilization of a registered dietitian to improve patient safety. These resources will provide nurses more time to focus on nursing-specific neonatal care. Further research is needed to evaluate the impact of factors related to feeding safety in the NICU as well as potential financial benefits of these quality improvement opportunities.
Certification of Safety-Critical Software Under DO-178C and DO-278A
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2012-01-01
The RTCA has recently released DO-178C and DO-278A as new certification guidance for the production of airborne and ground-based air traffic management software, respectively. Additionally, RTCA special committee SC-205 has also produced, at the same time, five other companion documents. These documents are RTCA DO-248C, DO-330, DO-331, DO- 332, and DO-333. These supplements address frequently asked questions about software certification, provide guidance on tool qualification requirements, and illustrate the modifications recommended to DO-178C when using model-based software design, object oriented programming, and formal methods. The objective of this paper is to first explain the relationship of DO-178C to the former DO-178B in order to give those familiar with DO- 178B an indication of what has been changed and what has not been changed. With this background, the relationship of DO-178C and DO-278 to the new DO-278A document for ground-based software development is shown. Last, an overview of the new guidance contained in the tool qualification document and the three new supplements to DO-178C and DO-278A is presented. For those unfamiliar with DO-178B, this paper serves to provide an entry point to this new certification guidance for airborne and ground-based CNS/ATM software certification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutti, V; Morrow, A; Kim, S
Purpose: Stereotactic radiosurgery (SRS) treatments using conical collimators can potentially result in gantry collision with treatment table due to limited collision-clear spaces. An in-house software was developed to help the SRS treatment planner mitigate potential SRS conical collimator (Varian Medical System, Palo Alto, CA) collisions with the treatment table. This software was designed to remove treatment re-planning secondary to unexpected collisions. Methods: A BrainLAB SRS ICT Frameless Extension used for SRS treatments in our clinic was mathematically modelled using surface points registered to the 3D co-ordinate space of the couch extension. The surface points are transformed based on the treatmentmore » isocenter point and potential collisions are determined in 3D space for couch and gantry angle combinations. The distance between the SRS conical collimators and LINAC isocenter is known. The collision detection model was programmed in MATLAB (Mathwork, Natick, MA) to display graphical plots of the calculations, and the plotted data is used to avoid the gantry and couch angle combinations that would likely result in a collision. We have utilized the cone collision tool for 23 SRS cone treatment plans (8 retrospective and 15 prospective for 10 patients). Results: Twenty one plans strongly agreed with the software tool prediction for collision. However, in two plans, a collision was observed with a 0.5 cm margin when the software predicted no collision. Therefore, additional margins were added to the clearance criteria in the program to achieve a lower risk of actual collisions. Conclusion: Our in-house developed collision check software successfully avoided SRS cone re-planning by 91.3% due to a reduction in cone collisions with the treatment table. Future developments to our software will include a CT image data set based collision prediction model as well as a beam angle optimization tool to avoid normal critical tissues as well as previously treated lesions.« less
48 CFR 52.250-5 - SAFETY Act-Equitable Adjustment.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., engineering services, software development services, software integration services, threat assessments... security, i.e., it will perform as intended, conforms to the seller's specifications, and is safe for use...
48 CFR 52.250-5 - SAFETY Act-Equitable Adjustment.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., engineering services, software development services, software integration services, threat assessments... security, i.e., it will perform as intended, conforms to the seller's specifications, and is safe for use...
48 CFR 52.250-5 - SAFETY Act-Equitable Adjustment.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., engineering services, software development services, software integration services, threat assessments... security, i.e., it will perform as intended, conforms to the seller's specifications, and is safe for use...
48 CFR 52.250-5 - SAFETY Act-Equitable Adjustment.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., engineering services, software development services, software integration services, threat assessments... security, i.e., it will perform as intended, conforms to the seller's specifications, and is safe for use...
Initial development of prototype performance model for highway design
DOT National Transportation Integrated Search
1997-12-01
The Federal Highway Administration (FHWA) has undertaken a multiyear project to develop the Interactive Highway Safety Design Model (IHSDM), which is a CADD-based integrated set of software tools to analyze a highway design to identify safety issues ...
Infusing Reliability Techniques into Software Safety Analysis
NASA Technical Reports Server (NTRS)
Shi, Ying
2015-01-01
Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.
ERIC Educational Resources Information Center
Grush, Mary
2009-01-01
As colleges and universities rely more heavily on software as a service (SaaS), they're putting more critical data in the cloud. What are the security issues, and how are cloud providers responding? "Campus Technology" ("CT") went to three higher ed SaaS vendors--Google, IBM, and TopSchool--and asked them to share their thoughts about the state of…
Schwebel, David C; Morrongiello, Barbara A; Davis, Aaron L; Stewart, Julia; Bell, Melissa
2012-04-01
Pre-post-randomized design evaluated The Blue Dog, a dog safety software program. 76 children aged 3.5-6 years completed 3 tasks to evaluate dog safety pre- and postintervention: (a) pictures (recognition of safe/risky behavior), (b) dollhouse (recall of safe behavior via simulated dollhouse scenarios), and (c) live dog (actual behavior with unfamiliar live dog). Following preintervention evaluation, children were randomly assigned to dog or fire safety conditions, each involving 3 weeks of home computer software use. Children using Blue Dog had greater change in recognition of risky dog situations than children learning fire safety. No between-group differences emerged in recall (dollhouse) or engagement (live-dog) in risky behavior. Families enjoyed using the software. Blue Dog taught children knowledge about safe engagement with dogs, but did not influence recall or implementation of safe behaviors. Dog bites represent a significant pediatric injury concern and continued development of effective interventions is needed.
Software security checklist for the software life cycle
NASA Technical Reports Server (NTRS)
Gilliam, D. P.; Wolfe, T. L.; Sherif, J. S.
2002-01-01
A formal approach to security in the software life cycle is essential to protect corporate resources. However, little thought has been given to this aspect of software development. Due to its criticality, security should be integrated as a formal approach in the software life cycle.
Parker, Steve; Mayner, Lidia; Michael Gillham, David
2015-12-01
Undergraduate nursing students are often confused by multiple understandings of critical thinking. In response to this situation, the Critiique for critical thinking (CCT) project was implemented to provide consistent structured guidance about critical thinking. This paper introduces Critiique software, describes initial validation of the content of this critical thinking tool and explores wider applications of the Critiique software. Critiique is flexible, authorable software that guides students step-by-step through critical appraisal of research papers. The spelling of Critiique was deliberate, so as to acquire a unique web domain name and associated logo. The CCT project involved implementation of a modified nominal focus group process with academic staff working together to establish common understandings of critical thinking. Previous work established a consensus about critical thinking in nursing and provided a starting point for the focus groups. The study was conducted at an Australian university campus with the focus group guided by open ended questions. Focus group data established categories of content that academic staff identified as important for teaching critical thinking. This emerging focus group data was then used to inform modification of Critiique software so that students had access to consistent and structured guidance in relation to critical thinking and critical appraisal. The project succeeded in using focus group data from academics to inform software development while at the same time retaining the benefits of broader philosophical dimensions of critical thinking.
Parker, Steve; Mayner, Lidia; Michael Gillham, David
2015-01-01
Background: Undergraduate nursing students are often confused by multiple understandings of critical thinking. In response to this situation, the Critiique for critical thinking (CCT) project was implemented to provide consistent structured guidance about critical thinking. Objectives: This paper introduces Critiique software, describes initial validation of the content of this critical thinking tool and explores wider applications of the Critiique software. Materials and Methods: Critiique is flexible, authorable software that guides students step-by-step through critical appraisal of research papers. The spelling of Critiique was deliberate, so as to acquire a unique web domain name and associated logo. The CCT project involved implementation of a modified nominal focus group process with academic staff working together to establish common understandings of critical thinking. Previous work established a consensus about critical thinking in nursing and provided a starting point for the focus groups. The study was conducted at an Australian university campus with the focus group guided by open ended questions. Results: Focus group data established categories of content that academic staff identified as important for teaching critical thinking. This emerging focus group data was then used to inform modification of Critiique software so that students had access to consistent and structured guidance in relation to critical thinking and critical appraisal. Conclusions: The project succeeded in using focus group data from academics to inform software development while at the same time retaining the benefits of broader philosophical dimensions of critical thinking. PMID:26835469
Surgeon Training in Telerobotic Surgery via a Hardware-in-the-Loop Simulator
Alemzadeh, Homa; Chen, Daniel; Kalbarczyk, Zbigniew; Iyer, Ravishankar K.; Kesavadas, Thenkurussi
2017-01-01
This work presents a software and hardware framework for a telerobotic surgery safety and motor skill training simulator. The aims are at providing trainees a comprehensive simulator for acquiring essential skills to perform telerobotic surgery. Existing commercial robotic surgery simulators lack features for safety training and optimal motion planning, which are critical factors in ensuring patient safety and efficiency in operation. In this work, we propose a hardware-in-the-loop simulator directly introducing these two features. The proposed simulator is built upon the Raven-II™ open source surgical robot, integrated with a physics engine and a safety hazard injection engine. Also, a Fast Marching Tree-based motion planning algorithm is used to help trainee learn the optimal instrument motion patterns. The main contributions of this work are (1) reproducing safety hazards events, related to da Vinci™ system, reported to the FDA MAUDE database, with a novel haptic feedback strategy to provide feedback to the operator when the underlying dynamics differ from the real robot's states so that the operator will be aware and can mitigate the negative impact of the safety-critical events, and (2) using motion planner to generate semioptimal path in an interactive robotic surgery training environment. PMID:29065635
The integration of the risk management process with the lifecycle of medical device software.
Pecoraro, F; Luzi, D
2014-01-01
The application of software in the Medical Device (MD) domain has become central to the improvement of diagnoses and treatments. The new European regulations that specifically address software as an important component of MD, require complex procedures to make software compliant with safety requirements, introducing thereby new challenges in the qualification and classification of MD software as well as in the performance of risk management activities. Under this perspective, the aim of this paper is to propose an integrated framework that combines the activities to be carried out by the manufacturer to develop safe software within the development lifecycle based on the regulatory requirements reported in US and European regulations as well as in the relevant standards and guidelines. A comparative analysis was carried out to identify the main issues related to the application of the current new regulations. In addition, standards and guidelines recently released to harmonise procedures for the validation of MD software have been used to define the risk management activities to be carried out by the manufacturer during the software development process. This paper highlights the main issues related to the qualification and classification of MD software, providing an analysis of the different regulations applied in Europe and the US. A model that integrates the risk management process within the software development lifecycle has been proposed too. It is based on regulatory requirements and considers software risk analysis as a central input to be managed by the manufacturer already at the initial stages of the software design, in order to prevent MD failures. Relevant changes in the process of MD development have been introduced with the recognition of software being an important component of MDs as stated in regulations and standards. This implies the performance of highly iterative processes that have to integrate the risk management in the framework of software development. It also makes it necessary to involve both medical and software engineering competences to safeguard patient and user safety.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors* is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highlymore » detailed and rigorous KENO solutions provide a reliable nu-meric reference for VERAneutronics and also demonstrate the most accurate predictions achievable by modeling and simulations tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.« less
NASA Technical Reports Server (NTRS)
Benowitz, E.; Niessner, A.
2003-01-01
This work involves developing representative mission-critical spacecraft software using the Real-Time Specification for Java (RTSJ). This work currently leverages actual flight software used in the design of actual flight software in the NASA's Deep Space 1 (DSI), which flew in 1998.
Software Development Standard for Mission Critical Systems
2014-03-17
new development, modification, reuse, reengineering, maintenance , or any other activity or combination of activities resulting in products . Within...develops” includes new development, modification, integration, reuse, reengineering, maintenance , or any other activity that results in products ... Maintenance organization. The organization that is responsible for modifying and otherwise sustaining the software and other software products and
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1998-01-01
Software is becoming increasingly significant in today's critical avionics systems. To achieve safe, reliable software, government regulatory agencies such as the Federal Aviation Administration (FAA) and the Department of Defense mandate the use of certain software development methods. However, little scientific evidence exists to show a correlation between software development methods and product quality. Given this lack of evidence, a series of experiments has been conducted to understand why and how software fails. The Guidance and Control Software (GCS) project is the latest in this series. The GCS project is a case study of the Requirements and Technical Concepts for Aviation RTCA/DO-178B guidelines, Software Considerations in Airborne Systems and Equipment Certification. All civil transport airframe and equipment vendors are expected to comply with these guidelines in building systems to be certified by the FAA for use in commercial aircraft. For the case study, two implementations of a guidance and control application were developed to comply with the DO-178B guidelines for Level A (critical) software. The development included the requirements, design, coding, verification, configuration management, and quality assurance processes. This paper discusses the details of the GCS project and presents the results of the case study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valerio, Luis G.; Arvidson, Kirk B.; Chanderbhan, Ronald F.
2007-07-01
Consistent with the U.S. Food and Drug Administration (FDA) Critical Path Initiative, predictive toxicology software programs employing quantitative structure-activity relationship (QSAR) models are currently under evaluation for regulatory risk assessment and scientific decision support for highly sensitive endpoints such as carcinogenicity, mutagenicity and reproductive toxicity. At the FDA's Center for Food Safety and Applied Nutrition's Office of Food Additive Safety and the Center for Drug Evaluation and Research's Informatics and Computational Safety Analysis Staff (ICSAS), the use of computational SAR tools for both qualitative and quantitative risk assessment applications are being developed and evaluated. One tool of current interest ismore » MDL-QSAR predictive discriminant analysis modeling of rodent carcinogenicity, which has been previously evaluated for pharmaceutical applications by the FDA ICSAS. The study described in this paper aims to evaluate the utility of this software to estimate the carcinogenic potential of small, organic, naturally occurring chemicals found in the human diet. In addition, a group of 19 known synthetic dietary constituents that were positive in rodent carcinogenicity studies served as a control group. In the test group of naturally occurring chemicals, 101 were found to be suitable for predictive modeling using this software's discriminant analysis modeling approach. Predictions performed on these compounds were compared to published experimental evidence of each compound's carcinogenic potential. Experimental evidence included relevant toxicological studies such as rodent cancer bioassays, rodent anti-carcinogenicity studies, genotoxic studies, and the presence of chemical structural alerts. Statistical indices of predictive performance were calculated to assess the utility of the predictive modeling method. Results revealed good predictive performance using this software's rodent carcinogenicity module of over 1200 chemicals, comprised primarily of pharmaceutical, industrial and some natural products developed under an FDA-MDL cooperative research and development agreement (CRADA). The predictive performance for this group of dietary natural products and the control group was 97% sensitivity and 80% concordance. Specificity was marginal at 53%. This study finds that the in silico QSAR analysis employing this software's rodent carcinogenicity database is capable of identifying the rodent carcinogenic potential of naturally occurring organic molecules found in the human diet with a high degree of sensitivity. It is the first study to demonstrate successful QSAR predictive modeling of naturally occurring carcinogens found in the human diet using an external validation test. Further test validation of this software and expansion of the training data set for dietary chemicals will help to support the future use of such QSAR methods for screening and prioritizing the risk of dietary chemicals when actual animal data are inadequate, equivocal, or absent.« less
NASA Technical Reports Server (NTRS)
Uber, James G.
1988-01-01
Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.
NASA Technical Reports Server (NTRS)
1992-01-01
This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.
Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base
NASA Technical Reports Server (NTRS)
Bryant, Richard B., Jr.; Carrelli, David J.
2006-01-01
The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base which was used primarily for safety loads prediction, design of the closed loop compensator and development of the motion base safety systems1. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initializing of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end to end operation of the motion base system providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.
Long range targeting for space based rendezvous
NASA Technical Reports Server (NTRS)
Everett, Louis J.; Redfield, R. C.
1995-01-01
The work performed under this grant supported the Dexterous Flight Experiment one STS-62 The project required developing hardware and software for automating a TRAC sensor on orbit. The hardware developed by for the flight has been documented through standard NASA channels since it has to pass safety, environmental, and other issues. The software has not been documented previously, therefore, this report provides a software manual for the TRAC code developed for the grant.
Aluminum Data Measurements and Evaluation for Criticality Safety Applications
NASA Astrophysics Data System (ADS)
Leal, L. C.; Guber, K. H.; Spencer, R. R.; Derrien, H.; Wright, R. Q.
2002-12-01
The Defense Nuclear Facility Safety Board (DNFSB) Recommendation 93-2 motivated the US Department of Energy (DOE) to develop a comprehensive criticality safety program to maintain and to predict the criticality of systems throughout the DOE complex. To implement the response to the DNFSB Recommendation 93-2, a Nuclear Criticality Safety Program (NCSP) was created including the following tasks: Critical Experiments, Criticality Benchmarks, Training, Analytical Methods, and Nuclear Data. The Nuclear Data portion of the NCSP consists of a variety of differential measurements performed at the Oak Ridge Electron Linear Accelerator (ORELA) at the Oak Ridge National Laboratory (ORNL), data analysis and evaluation using the generalized least-squares fitting code SAMMY in the resolved, unresolved, and high energy ranges, and the development and benchmark testing of complete evaluations for a nuclide for inclusion into the Evaluated Nuclear Data File (ENDF/B). This paper outlines the work performed at ORNL to measure, evaluate, and test the nuclear data for aluminum for applications in criticality safety problems.
NASA Technical Reports Server (NTRS)
1998-01-01
Under a NASA-Ames Space Act Agreement, Coryphaeus Software and Simauthor, Inc., developed an Aviation Performance Measuring System (APMS). This software, developed for the aerospace and airline industry, enables the replay of Digital Flight Data Recorder (DFDR) data in a flexible, user-configurable, real-time, high fidelity 3D (three dimensional) environment.
Earthern embankment overtopping analysis using the WinDAM B software
USDA-ARS?s Scientific Manuscript database
Over 11,000 small watershed dams have been constructed with USDA involvement over an eighty year period. WinDAM B software has been developed to help engineers address dam safety concerns relative to potential overtopping of these earthen embankments. The primary function of the software is threef...
An Introduction to Flight Software Development: FSW Today, FSW 2010
NASA Technical Reports Server (NTRS)
Gouvela, John
2004-01-01
Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects including Cockpit Avionics Upgrade are applied to projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high quality software. It will propose what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, modeling and simulation software. Specific challenges that have been met include the introduction and integration of Commercial Off the Shelf (COTS) Real Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object oriented UML design, iterative development using independent components, as well as rapid prototyping . In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture with all processes being guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels guided by a Mission-Systems Architecture framework with focus on risk management and attention toward inevitable product obsolescence. This infrastructure of computing equipment, software and processes will itself be subject to technological change and need for management of change and improvement,
Techniques for development of safety-related software for surgical robots.
Varley, P
1999-12-01
Regulatory bodies require evidence that software controlling potentially hazardous devices is developed to good manufacturing practices. Effective techniques used in other industries assume long timescales and high staffing levels and can be unsuitable for use without adaptation in developing electronic healthcare devices. This paper discusses a set of techniques used in practice to develop software for a particular innovative medical product, an endoscopic camera manipulator. These techniques include identification of potential hazards and tracing their mitigating factors through the project lifecycle.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA s Project Constellation.
Software Risk Identification for Interplanetary Probes
NASA Technical Reports Server (NTRS)
Dougherty, Robert J.; Papadopoulos, Periklis E.
2005-01-01
The need for a systematic and effective software risk identification methodology is critical for interplanetary probes that are using increasingly complex and critical software. Several probe failures are examined that suggest more attention and resources need to be dedicated to identifying software risks. The direct causes of these failures can often be traced to systemic problems in all phases of the software engineering process. These failures have lead to the development of a practical methodology to identify risks for interplanetary probes. The proposed methodology is based upon the tailoring of the Software Engineering Institute's (SEI) method of taxonomy-based risk identification. The use of this methodology will ensure a more consistent and complete identification of software risks in these probes.
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
Streamlining Software Aspects of Certification: Report on the SSAC Survey
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Dorsey, Cheryl A.; Knight, John C.; Leveson, Nancy G.; McCormick, G. Frank
1999-01-01
The aviation system now depends on information technology more than ever before to ensure safety and efficiency. To address concerns about the efficacy of software aspects of the certification process, the Federal Aviation Administration (FAA) began the Streamlining Software Aspects of Certification (SSAC) program. The SSAC technical team was commissioned to gather data, analyze results, and propose recommendations to maximize efficiency and minimize cost and delay, without compromising safety. The technical team conducted two public workshops to identify and prioritize software approval issues, and conducted a survey to validate the most urgent of those issues. The SSAC survey, containing over two hundred questions about the FAA's software approval process, reached over four hundred industry software developers, aircraft manufacturers, and FAA designated engineering representatives. Three hundred people responded. This report presents the SSAC program rationale, survey process, preliminary findings, and recommendations.
Verified compilation of Concurrent Managed Languages
2017-11-01
designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these...ideas to prove the correctness of a state-of- the-art concurrent garbage collector. 15. SUBJECT TERMS Program verification, compiler design ...Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs . A
NASA Astrophysics Data System (ADS)
Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.
2014-12-01
Here we present the prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance integration of biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. The central scientific challenge of the CZ science community is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach, for which the key missing need is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs. We will: (1) Engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) Develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) Develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) Develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single metadata catalog. The entire BiG CZ Software system is being developed on public repositories as a modular suite of open source software projects. It will be built around a new Observations Data Model Version 2.0 (ODM2) that has been developed by members of the BiG CZ project team, with community input, under separate funding.
Modeling defect trends for iterative development
NASA Technical Reports Server (NTRS)
Powell, J. D.; Spanguolo, J. N.
2003-01-01
The Employment of Defects (EoD) approach to measuring and analyzing defects seeks to identify and capture trends and phenomena that are critical to managing software quality in the iterative software development lifecycle at JPL.
Fault Detection and Safety in Closed-Loop Artificial Pancreas Systems
2014-01-01
Continuous subcutaneous insulin infusion pumps and continuous glucose monitors enable individuals with type 1 diabetes to achieve tighter blood glucose control and are critical components in a closed-loop artificial pancreas. Insulin infusion sets can fail and continuous glucose monitor sensor signals can suffer from a variety of anomalies, including signal dropout and pressure-induced sensor attenuations. In addition to hardware-based failures, software and human-induced errors can cause safety-related problems. Techniques for fault detection, safety analyses, and remote monitoring techniques that have been applied in other industries and applications, such as chemical process plants and commercial aircraft, are discussed and placed in the context of a closed-loop artificial pancreas. PMID:25049365
2011 Annual Criticality Safety Program Performance Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrea Hoffman
The 2011 review of the INL Criticality Safety Program has determined that the program is robust and effective. The review was prepared for, and fulfills Contract Data Requirements List (CDRL) item H.20, 'Annual Criticality Safety Program performance summary that includes the status of assessments, issues, corrective actions, infractions, requirements management, training, and programmatic support.' This performance summary addresses the status of these important elements of the INL Criticality Safety Program. Assessments - Assessments in 2011 were planned and scheduled. The scheduled assessments included a Criticality Safety Program Effectiveness Review, Criticality Control Area Inspections, a Protection of Controlled Unclassified Information Inspection,more » an Assessment of Criticality Safety SQA, and this management assessment of the Criticality Safety Program. All of the assessments were completed with the exception of the 'Effectiveness Review' for SSPSF, which was delayed due to emerging work. Although minor issues were identified in the assessments, no issues or combination of issues indicated that the INL Criticality Safety Program was ineffective. The identification of issues demonstrates the importance of an assessment program to the overall health and effectiveness of the INL Criticality Safety Program. Issues and Corrective Actions - There are relatively few criticality safety related issues in the Laboratory ICAMS system. Most were identified by Criticality Safety Program assessments. No issues indicate ineffectiveness in the INL Criticality Safety Program. All of the issues are being worked and there are no imminent criticality concerns. Infractions - There was one criticality safety related violation in 2011. On January 18, 2011, it was discovered that a fuel plate bundle in the Nuclear Materials Inspection and Storage (NMIS) facility exceeded the fissionable mass limit, resulting in a technical safety requirement (TSR) violation. The TSR limits fuel plate bundles to 1085 grams U-235, which is the maximum loading of an ATR fuel element. The overloaded fuel plate bundle contained 1097 grams U-235 and was assembled under an 1100 gram U-235 limit in 1982. In 2003, the limit was reduced to 1085 grams citing a new criticality safety evaluation for ATR fuel elements. The fuel plate bundle inventories were not checked for compliance prior to implementing the reduced limit. A subsequent review of the NMIS inventory did not identify further violations. Requirements Management - The INL Criticality Safety program is organized and well documented. The source requirements for the INL Criticality Safety Program are from 10 CFR 830.204, DOE Order 420.1B, Chapter III, 'Nuclear Criticality Safety,' ANSI/ANS 8-series Industry Standards, and DOE Standards. These source requirements are documented in LRD-18001, 'INL Criticality Safety Program Requirements Manual.' The majority of the criticality safety source requirements are contained in DOE Order 420.1B because it invokes all of the ANSI/ANS 8-Series Standards. DOE Order 420.1B also invokes several DOE Standards, including DOE-STD-3007, 'Guidelines for Preparing Criticality Safety Evaluations at Department of Energy Non-Reactor Nuclear Facilities.' DOE Order 420.1B contains requirements for DOE 'Heads of Field Elements' to approve the criticality safety program and specific elements of the program, namely, the qualification of criticality staff and the method for preparing criticality safety evaluations. This was accomplished by the approval of SAR-400, 'INL Standardized Nuclear Safety Basis Manual,' Chapter 6, 'Prevention of Inadvertent Criticality.' Chapter 6 of SAR-400 contains sufficient detail and/or reference to the specific DOE and contractor documents that adequately describe the INL Criticality Safety Program per the elements specified in DOE Order 420.1B. The Safety Evaluation Report for SAR-400 specifically recognizes that the approval of SAR-400 approves the INL Criticality Safety Program. No new source requirements were released in 2011. A revision to LRD-18001 is planned for 2012 to clarify design requirements for criticality alarms. Training - Criticality Safety Engineering has developed training and provides training for many employee positions, including fissionable material handlers, facility managers, criticality safety officers, firefighters, and criticality safety engineers. Criticality safety training at the INL is a program strength. A revision to the training module developed in 2010 to supplement MFC certified fissionable material handlers (operators) training was prepared and presented in August of 2011. This training, 'Applied Science of Criticality Safety,' builds upon existing training and gives operators a better understanding of how their criticality controls are derived. Improvements to 00INL189, 'INL Criticality Safety Principles' are planned for 2012 to strengthen fissionable material handler training.« less
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Rethinking healthcare as a safety--critical industry.
Lwears, Robert
2012-01-01
The discipline of ergonomics, or human factors engineering, has made substantial contributions to both the development of a science of safety, and to the improvement of safety in a wide variety of hazardous industries, including nuclear power, aviation, shipping, energy extraction and refining, military operations, and finance. It is notable that healthcare, which in most advanced societies is a substantial sector of the economy (eg, 15% of US gross domestic product) and has been associated with large volumes of potentially preventable morbidity and mortality, has heretofore not been viewed as a safety-critical industry. This paper proposes that improving safety performance in healthcare must involve a re-envisioning of healthcare itself as a safety-critical industry, but one with considerable differences from most engineered safety-critical systems. This has implications both for healthcare, and for conceptions of safety-critical industries.
From Prototype to Product: Making Participatory Design of mHealth Commercially Viable.
Andersen, Tariq O; Bansler, Jørgen P; Kensing, Finn; Moll, Jonas
2017-01-01
This paper delves into the challenges of engaging patients, clinicians and industry stakeholders in the participatory design of an mHealth platform for patient-clinician collaboration. It follows the process from the development of a research prototype to a commercial software product. In particular, we draw attention to four major challenges of (a) aligning the different concerns of patients and clinicians, (b) designing according to clinical accountability, (c) ensuring commercial interest, and (d) dealing with regulatory constraints when prototyping safety critical health Information Technology. Using four illustrative cases, we discuss what these challenges entail and the implications they pose to Participatory Design. We conclude the paper by presenting lessons learned.
Development of Autonomous Aerobraking - Phase 2
NASA Technical Reports Server (NTRS)
Murri, Daniel G.
2013-01-01
Phase 1 of the Development of Autonomous Aerobraking (AA) Assessment investigated the technical capability of transferring the processes of aerobraking maneuver (ABM) decision-making (currently performed on the ground by an extensive workforce and communicated to the spacecraft via the deep space network) to an efficient flight software algorithm onboard the spacecraft. This document describes Phase 2 of this study, which was a 12-month effort to improve and rigorously test the AA Development Software developed in Phase 1. Aerobraking maneuver; Autonomous Aerobraking; Autonomous Aerobraking Development Software; Deep Space Network; NASA Engineering and Safety Center
Knowledge management: Role of the the Radiation Safety Information Computational Center (RSICC)
NASA Astrophysics Data System (ADS)
Valentine, Timothy
2017-09-01
The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.
Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2008-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
A Comparison of Two Approaches to Safety Analysis Based on Use Cases
NASA Astrophysics Data System (ADS)
Stålhane, Tor; Sindre, Guttorm
Engineering has a long tradition in analyzing the safety of mechanical, electrical and electronic systems. Important methods like HazOp and FMEA have also been adopted by the software engineering community. The misuse case method, on the other hand, has been developed by the software community as an alternative to FMEA and preliminary HazOp for software development. To compare the two methods misuse case and FMEA we have run a small experiment involving 42 third year software engineering students. In the experiment, the students should identify and analyze failure modes from one of the use cases for a commercial electronic patient journals system. The results of the experiment show that on the average, the group that used misuse cases identified and analyzed more user related failure modes than the persons using FMEA. In addition, the persons who used the misuse cases scored better on perceived ease of use and intention to use.
NASA Astrophysics Data System (ADS)
Mbaya, Timmy
Embedded Aerospace Systems have to perform safety and mission critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems substantially rely on complex software interfacing with hardware in real-time; any faults in software or hardware, or their interaction could result in fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detection and diagnosis of software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an Arithmetic Circuit, which is used for on-line monitoring. This type of system monitoring, using an ISWHM, provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent from the time-consuming and arduous task of foraging through a multitude of isolated---and often contradictory---diagnosis data. For the purpose of demonstrating the relevance of ISWHM, modeling and reasoning is performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C (Guidance, Navigation, and Control) system have been implemented. Analysis of the ISWHM is then performed by injecting faults and analyzing the ISWHM's diagnoses.
Software development environments: Present and future, appendix D
NASA Technical Reports Server (NTRS)
Riddle, W. E.
1980-01-01
Computerized environments which facilitate the development of appropriately functioning software systems are discussed. Their current status is reviewed and several trends exhibited by their history are identified. A number of principles, some at (slight) variance with the historical trends, are suggested and it is argued that observance of these principles is critical to achieving truly effective and efficient software development support environments.
Demonstration of a Safety Analysis on a Complex System
NASA Technical Reports Server (NTRS)
Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey;
1997-01-01
For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards is done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: There exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.
Addressing software security risk mitigations in the life cycle
NASA Technical Reports Server (NTRS)
Gilliam, David; Powell, John; Haugh, Eric; Bishop, Matt
2003-01-01
The NASA Office of Safety and Mission Assurance (OSMA) has funded the Jet Propulsion Laboratory (JPL) with a Center Initiative, 'Reducing Software Security Risk through an Integrated Approach' (RSSR), to address this need. The Initiative is a formal approach to addressing software security in the life cycle through the instantiation of a Software Security Assessment Instrument (SSAI) for the development and maintenance life cycles.
Criticality Safety Evaluation of the LLNL Inherently Safe Subcritical Assembly (ISSA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percher, Catherine
2012-06-19
The LLNL Nuclear Criticality Safety Division has developed a training center to illustrate criticality safety and reactor physics concepts through hands-on experimental training. The experimental assembly, the Inherently Safe Subcritical Assembly (ISSA), uses surplus highly enriched research reactor fuel configured in a water tank. The training activities will be conducted by LLNL following the requirements of an Integration Work Sheet (IWS) and associated Safety Plan. Students will be allowed to handle the fissile material under the supervision of LLNL instructors. This report provides the technical criticality safety basis for instructional operations with the ISSA experimental assembly.
Computer Software Training and HRD: What Are the Critical Issues?
ERIC Educational Resources Information Center
Altemeyer, Brad
2005-01-01
The paper explores critical issues for HRD practice from a parsonian framework across the HRD legs of organizational development, adult learning, and training and development. Insights into the critical issues emerge from this approach. Identifying successful transfer of training to be critical for organizational, group, and individual success.…
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process use to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment
Obtaining Valid Safety Data for Software Safety Measurement and Process Improvement
NASA Technical Reports Server (NTRS)
Basili, Victor r.; Zelkowitz, Marvin V.; Layman, Lucas; Dangle, Kathleen; Diep, Madeline
2010-01-01
We report on a preliminary case study to examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Our goal is to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. Our purpose was two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to identify potential risks due to incorrect application of the safety process, deficiencies in the safety process, or the lack of a defined process. One early outcome of this work was to show that there are structural deficiencies in collecting valid safety data that make software safety different from hardware safety. In our conclusions we present some of these deficiencies.
Knowledge-based assistance in costing the space station DMS
NASA Technical Reports Server (NTRS)
Henson, Troy; Rone, Kyle
1988-01-01
The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.
Development of a Software Safety Process and a Case Study of Its Use
NASA Technical Reports Server (NTRS)
Knight, J. C.
1996-01-01
Research in the year covered by this reporting period has been primarily directed toward: continued development of mock-ups of computer screens for operator of a digital reactor control system; development of a reactor simulation to permit testing of various elements of the control system; formal specification of user interfaces; fault-tree analysis including software; evaluation of formal verification techniques; and continued development of a software documentation system. Technical results relating to this grant and the remainder of the principal investigator's research program are contained in various reports and papers.
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Teubert, Chris; Gorospe, George; Burgett, Drew; Quach, Cuong C.; Hogge, Edward
2016-01-01
The airspace is becoming more and more complicated, and will continue to do so in the future with the integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, other forms of aviation technology into the airspace. The new technology and complexity increases the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems & systems of systems can be very difficult, expensive, and sometimes unsafe in real life scenarios. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. The framework injects flight related anomalies related to ground systems, routing, airport congestion, etc. to test and verify algorithms for NAS safety. In our research work, we develop a live, distributed, hardware-in-the-loop testbed for aviation and airspace prognostics along with exploring further research possibilities to verify and validate future algorithms for NAS safety. The testbed integrates virtual aircraft using the X-Plane simulator and X-PlaneConnect toolbox, UAVs using onboard sensors and cellular communications, and hardware in the loop components. In addition, the testbed includes an additional research framework to support and simplify future research activities. It enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. This paper describes the design, development, and testing of this system. Software reliability, safety and latency are some of the critical design considerations in development of the testbed. Integration of HITL elements in the development phases and veri cation/ validation are key elements to this report.
A Heuristic for Improving Legacy Software Quality during Maintenance: An Empirical Case Study
ERIC Educational Resources Information Center
Sale, Michael John
2017-01-01
Many organizations depend on the functionality of mission-critical legacy software and the continued maintenance of this software is vital. Legacy software is defined here as software that contains no testing suite, is often foreign to the developer performing the maintenance, lacks meaningful documentation, and over time, has become difficult to…
Implementation and Simulation Results using Autonomous Aerobraking Development Software
NASA Technical Reports Server (NTRS)
Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.
2011-01-01
An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions to onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.
WE-G-BRA-02: SafetyNet: Automating Radiotherapy QA with An Event Driven Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, S; Kessler, M; Litzenberg, D
2015-06-15
Purpose: Quality assurance is an essential task in radiotherapy that often requires many manual tasks. We investigate the use of an event driven framework in conjunction with software agents to automate QA and eliminate wait times. Methods: An in house developed subscription-publication service, EventNet, was added to the Aria OIS to be a message broker for critical events occurring in the OIS and software agents. Software agents operate without user intervention and perform critical QA steps. The results of the QA are documented and the resulting event is generated and passed back to EventNet. Users can subscribe to those eventsmore » and receive messages based on custom filters designed to send passing or failing results to physicists or dosimetrists. Agents were developed to expedite the following QA tasks: Plan Revision, Plan 2nd Check, SRS Winston-Lutz isocenter, Treatment History Audit, Treatment Machine Configuration. Results: Plan approval in the Aria OIS was used as the event trigger for plan revision QA and Plan 2nd check agents. The agents pulled the plan data, executed the prescribed QA, stored the results and updated EventNet for publication. The Winston Lutz agent reduced QA time from 20 minutes to 4 minutes and provided a more accurate quantitative estimate of radiation isocenter. The Treatment Machine Configuration agent automatically reports any changes to the Treatment machine or HDR unit configuration. The agents are reliable, act immediately, and execute each task identically every time. Conclusion: An event driven framework has inverted the data chase in our radiotherapy QA process. Rather than have dosimetrists and physicists push data to QA software and pull results back into the OIS, the software agents perform these steps immediately upon receiving the sentinel events from EventNet. Mr Keranen is an employee of Varian Medical Systems. Dr. Moran’s institution receives research support for her effort for a linear accelerator QA project from Varian Medical Systems. Other quality projects involving her effort are funded by Blue Cross Blue Shield of Michigan, Breast Cancer Research Foundation, and the NIH.« less
Code of Federal Regulations, 2010 CFR
2010-10-01
... requirements or such other requirements as defined and specified by the Secretary of Homeland Security: (1) Is... otherwise cause, for which a SAFETY Act designation has been issued. For purposes of defining a QATT..., engineering services, software development services, software integration services, threat assessments...
Code of Federal Regulations, 2013 CFR
2013-10-01
... requirements or such other requirements as defined and specified by the Secretary of Homeland Security: (1) Is... otherwise cause, for which a SAFETY Act designation has been issued. For purposes of defining a QATT..., engineering services, software development services, software integration services, threat assessments...
Code of Federal Regulations, 2012 CFR
2012-10-01
... requirements or such other requirements as defined and specified by the Secretary of Homeland Security: (1) Is... otherwise cause, for which a SAFETY Act designation has been issued. For purposes of defining a QATT..., engineering services, software development services, software integration services, threat assessments...
Code of Federal Regulations, 2011 CFR
2011-10-01
... requirements or such other requirements as defined and specified by the Secretary of Homeland Security: (1) Is... otherwise cause, for which a SAFETY Act designation has been issued. For purposes of defining a QATT..., engineering services, software development services, software integration services, threat assessments...
Code of Federal Regulations, 2014 CFR
2014-10-01
... requirements or such other requirements as defined and specified by the Secretary of Homeland Security: (1) Is... otherwise cause, for which a SAFETY Act designation has been issued. For purposes of defining a QATT..., engineering services, software development services, software integration services, threat assessments...
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: Wind and Temperature Spectrometry of the Upper Atmosphere in Low-Earth Orbit; Health Monitor for Multitasking, Safety-Critical, Real-Time Software; Stereo Imaging Miniature Endoscope; Early Oscillation Detection Technique for Hybrid DC/DC Converters; Parallel Wavefront Analysis for a 4D Interferometer; Schottky Heterodyne Receivers With Full Waveguide Bandwidth; Carbon Nanofiber-Based, High-Frequency, High-Q, Miniaturized Mechanical Resonators; Ultracapacitor-Based Uninterrupted Power Supply System; Coaxial Cables for Martian Extreme Temperature Environments; Using Spare Logic Resources To Create Dynamic Test Points; Autonomous Coordination of Science Observations Using Multiple Spacecraft; Autonomous Phase Retrieval Calibration; EOS MLS Level 1B Data Processing Software, Version 3; Cassini Tour Atlas Automated Generation; Software Development Standard Processes (SDSP); Graphite Composite Panel Polishing Fixture; Material Gradients in Oxygen System Components Improve Safety; Ridge Waveguide Structures in Magnesium-Doped Lithium Niobate; Modifying Matrix Materials to Increase Wetting and Adhesion; Lightweight Magnetic Cooler With a Reversible Circulator; The Invasive Species Forecasting System; Method for Cleanly and Precisely Breaking Off a Rock Core Using a Radial Compressive Force; Praying Mantis Bending Core Breakoff and Retention Mechanism; Scoring Dawg Core Breakoff and Retention Mechanism; Rolling-Tooth Core Breakoff and Retention Mechanism; Vibration Isolation and Stabilization System for Spacecraft Exercise Treadmill Devices; Microgravity-Enhanced Stem Cell Selection; Diagnosis and Treatment of Neurological Disorders by Millimeter-Wave Stimulation; Passive Vaporizing Heat Sink; Remote Sensing and Quantization of Analog Sensors; Phase Retrieval for Radio Telescope and Antenna Control; Helium-Cooled Black Shroud for Subscale Cryogenic Testing; Receive Mode Analysis and Design of Microstrip Reflectarrays; and Chance-Constrained Guidance With Non-Convex Constraints.
Ed Tech Developer's Guide: A Primer for Software Developers, Startups, and Entrepreneurs
ERIC Educational Resources Information Center
Bienkowski, Marie; Gerard, Sarah Nixon; Rubin, Shawn; Sanford, Cathy; Borrelli-Murray, Dana; Driscoll, Tom; Arora, Jessie; Hruska, Mike; Beck, Katie; Murray, Thomas; Hoekstra, Jason; Gannes, Stuart; Metz, Edward; Midgley, Steve; Castilla, Stephanie; Tomassini, Jason; Madda, Mary Jo; Chase, Zac; Martin, Erik; Noel, Marcus; Styles, Kathleen
2015-01-01
Opportunities abound for software designers and developers to create impactful tools for teachers, school leaders, students, and their families. This guide for developers, startups, and entrepreneurs addresses key questions about the education ecosystem and highlights critical needs and opportunities to develop digital tools and apps for learning.…
NASA Technical Reports Server (NTRS)
Marcotte, P. P.; Mathewson, K. J. R.
1982-01-01
The operational safety of six axle locomotives is analyzed. A locomotive model with corresponding data on suspension characteristics, a method of track defect characterization, and a method of characterizing operational safety are used. A user oriented software package was developed as part of the methodology and was used to study the effect (on operational safety) of various locomotive parameters and operational conditions such as speed, tractive effort, and track curvature. The operational safety of three different locomotive designs was investigated.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION... Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses, with clarifications... Electrical and Electronic Engineers (IEEE) Standard 828-2005, ``IEEE Standard for Software Configuration...
A Hazardous Gas Detection System for Aerospace and Commercial Applications
NASA Technical Reports Server (NTRS)
Hunter, G. W.; Neudeck, P. G.; Chen, L. - Y.; Makel, D. B.; Liu, C. C.; Wu, Q. H.; Knight, D.
1998-01-01
The detection of explosive conditions in aerospace propulsion applications is important for safety and economic reasons. Microfabricated hydrogen, oxygen, and hydrocarbon sensors as well as the accompanying hardware and software are being developed for a range of aerospace safety applications. The development of these sensors is being done using MEMS (Micro ElectroMechanical Systems) based technology and SiC-based semiconductor technology. The hardware and software allows control and interrogation of each sensor head and reduces accompanying cabling through multiplexing. These systems are being applied on the X-33 and on an upcoming STS-95 Shuttle mission. A number of commercial applications are also being pursued. It is concluded that this MEMS-based technology has significant potential to reduce costs and increase safety in a variety of aerospace applications.
A Hazardous Gas Detection System for Aerospace and Commercial Applications
NASA Technical Reports Server (NTRS)
Hunter, G. W.; Neudeck, P. G.; Chen, L.-Y.; Makel, D. B.; Liu, C. C.; Wu, Q. H.; Knight, D.
1998-01-01
The detection of explosive conditions in aerospace propulsion applications is important for safety and economic reasons. Microfabricated hydrogen, oxygen, and hydrocarbon sensors as well as the accompanying hardware and software are being, developed for a range of aerospace safety applications. The development of these sensors is being done using MEMS (Micro ElectroMechanical Systems) based technology and SiC-based semiconductor technology. The hardware and software allows control and interrocation of each sensor head and reduces accompanying cabling through multiplexing. These systems are being, applied on the X-33 and on an upcoming STS-95 Shuttle mission. A number of commercial applications are also being pursued. It is concluded that this MEMS-based technology has significant potential to reduce costs and increase safety in a variety of aerospace applications.
The VATES-Diamond as a Verifier's Best Friend
NASA Astrophysics Data System (ADS)
Glesner, Sabine; Bartels, Björn; Göthel, Thomas; Kleine, Moritz
Within a model-based software engineering process it needs to be ensured that properties of abstract specifications are preserved by transformations down to executable code. This is even more important in the area of safety-critical real-time systems where additionally non-functional properties are crucial. In the VATES project, we develop formal methods for the construction and verification of embedded systems. We follow a novel approach that allows us to formally relate abstract process algebraic specifications to their implementation in a compiler intermediate representation. The idea is to extract a low-level process algebraic description from the intermediate code and to formally relate it to previously developed abstract specifications. We apply this approach to a case study from the area of real-time operating systems and show that this approach has the potential to seamlessly integrate modeling, implementation, transformation and verification stages of embedded system development.
Space Situational Awareness in the Joint Space Operations Center
NASA Astrophysics Data System (ADS)
Wasson, M.
2011-09-01
Flight safety of orbiting resident space objects is critical to our national interest and defense. United States Strategic Command has assigned the responsibility for Space Situational Awareness (SSA) to its Joint Functional Component Command - Space (JFCC SPACE) at Vandenberg Air Force Base. This paper will describe current SSA imperatives, new developments in SSA tools and developments in Defensive Operations. Current SSA processes are being examined to capture, and possibly improve, tasking of SSN sensors and "new" space-based sensors, "common" conjunction assessment methodology, and SSA sharing due to the growth seen over the last two years. The stand-up of a Defensive Ops Branch will highlight the need for advanced analysis and collaboration across space, weather, intelligence, and cyber specialties. New developments in SSA tools will be a description of computing hardware/software upgrades planned as well as the use of User-Defined Operating Pictures and visualization applications.
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
Transforming Aggregate Object-Oriented Formal Specifications to Code
1999-03-01
integration issues associated with a formal-based software transformation system, such as the source specification, the problem space architecture , design architecture ... design transforms, and target software transforms. Software is critical in today’s Air Force, yet its specification, design, and development
NASA Technical Reports Server (NTRS)
Dunn, William R.; Corliss, Lloyd D.
1991-01-01
Paper examines issue of software safety. Presents four case histories of software-safety analysis. Concludes that, to be safe, software, for all practical purposes, must be free of errors. Backup systems still needed to prevent catastrophic software failures.
Software for occupational health and safety risk analysis based on a fuzzy model.
Stefanovic, Miladin; Tadic, Danijela; Djapan, Marko; Macuzic, Ivan
2012-01-01
Risk and safety management are very important issues in healthcare systems. Those are complex systems with many entities, hazards and uncertainties. In such an environment, it is very hard to introduce a system for evaluating and simulating significant hazards. In this paper, we analyzed different types of hazards in healthcare systems and we introduced a new fuzzy model for evaluating and ranking hazards. Finally, we presented a developed software solution, based on the suggested fuzzy model for evaluating and monitoring risk.
Intelligent Hardware-Enabled Sensor and Software Safety and Health Management for Autonomous UAS
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Schumann, Johann; Ippolito, Corey
2015-01-01
Unmanned Aerial Systems (UAS) can only be deployed if they can effectively complete their mission and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. We propose to design a real-time, onboard system health management (SHM) capability to continuously monitor essential system components such as sensors, software, and hardware systems for detection and diagnosis of failures and violations of safety or performance rules during the ight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and software signals; (2) signal analysis, preprocessing, and advanced on-the- y temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power hardware realization using Field Programmable Gate Arrays (FPGAs) in order to avoid overburdening limited computing resources or costly re-certi cation of ight software due to instrumentation. No currently available SHM capabilities (or combinations of currently existing SHM capabilities) come anywhere close to satisfying these three criteria yet NASA will require such intelligent, hardwareenabled sensor and software safety and health management for introducing autonomous UAS into the National Airspace System (NAS). We propose a novel approach of creating modular building blocks for combining responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. Our proposed research program includes both developing this novel approach and demonstrating its capabilities using the NASA Swift UAS as a demonstration platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, P.B.; Yatabe, M.
1987-01-01
In this report the Nuclear Criticality Safety Analytical Methods Resource Center describes a new interactive version of CESAR, a critical experiments storage and retrieval program available on the Nuclear Criticality Information System (NCIS) database at Lawrence Livermore National Laboratory. The original version of CESAR did not include interactive search capabilities. The CESAR database was developed to provide a convenient, readily accessible means of storing and retrieving code input data for the SCALE Criticality Safety Analytical Sequences and the codes comprising those sequences. The database includes data for both cross section preparation and criticality safety calculations. 3 refs., 1 tab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, P.B.; Yatabe, M.
1987-01-01
The Nuclear Criticality Safety Analytical Methods Resource Center announces the availability of a new interactive version of CESAR, a critical experiments storage and retrieval program available on the Nuclear Criticality Information System (NCIS) data base at Lawrence Livermore National Laboratory. The original version of CESAR did not include interactive search capabilities. The CESAR data base was developed to provide a convenient, readily accessible means of storing and retrieving code input data for the SCALE criticality safety analytical sequences and the codes comprising those sequences. The data base includes data for both cross-section preparation and criticality safety calculations.
NASA Technical Reports Server (NTRS)
Quintana, Rolando
2003-01-01
The goal of this research was to integrate a previously validated and reliable safety model, called Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a safety management information system (PSMIS). This means that the theory or principles of the CHTFPM were incorporated in a software package; hence, the PSMIS is referred to as CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of enormous quantities of information in this type of studies. The CHTFPM theory encompasses the philosophy of looking at the concept of safety engineering from a new perspective: from a proactive, than a reactive, viewpoint. That is, corrective measures are taken before a problem instead of after it happened. That is why the CHTFPM is a predictive safety because it foresees or anticipates accidents, system failures and unacceptable risks; therefore, corrective action can be taken in order to prevent all these unwanted issues. Consequently, safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.
Formal Validation of Aerospace Software
NASA Astrophysics Data System (ADS)
Lesens, David; Moy, Yannick; Kanig, Johannes
2013-08-01
Any single error in critical software can have catastrophic consequences. Even though failures are usually not advertised, some software bugs have become famous, such as the error in the MIM-104 Patriot. For space systems, experience shows that software errors are a serious concern: more than half of all satellite failures from 2000 to 2003 involved software. To address this concern, this paper addresses the use of formal verification of software developed in Ada.
Software life cycle methodologies and environments
NASA Technical Reports Server (NTRS)
Fridge, Ernest
1991-01-01
Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by: improving software reliability and safety; and broadening the range of problems that can be solved with computational solutions. Projects brings in Computer Aided Software Engineering (CASE) technology for: Environments such as Engineering Script Language/Parts Composition System (ESL/PCS) application generator, Intelligent User Interface for cost avoidance in setting up operational computer runs, Framework programmable platform for defining process and software development work flow control, Process for bringing CASE technology into an organization's culture, and CLIPS/CLIPS Ada language for developing expert systems; and methodologies such as Method for developing fault tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2010 CFR
2010-10-01
... and software system safety as part of the pre-revenue service testing of the equipment. (d)(1... safely by initiating a full service brake application in the event of a hardware or software failure that... 49 Transportation 4 2010-10-01 2010-10-01 false Train electronic hardware and software safety. 238...
A Case Study of Measuring Process Risk for Early Insights into Software Safety
NASA Technical Reports Server (NTRS)
Layman, Lucas; Basili, Victor; Zelkowitz, Marvin V.; Fisher, Karen L.
2011-01-01
In this case study, we examine software safety risk in three flight hardware systems in NASA's Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that 49-70% of 154 hazardous conditions could be caused by software or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk.
A comprehensive and efficient daily quality assurance for PBS proton therapy
NASA Astrophysics Data System (ADS)
Actis, O.; Meer, D.; König, S.; Weber, D. C.; Mayor, A.
2017-03-01
There are several general recommendations for quality assurance (QA) measures, which have to be performed at proton therapy centres. However, almost each centre uses a different therapy system. In particular, there is no standard procedure for centres employing pencil beam scanning and each centre applies a specific QA program. Gantry 2 is an operating therapy system which was developed at PSI and relies on the most advanced technological innovations. We developed a comprehensive daily QA program in order to verify the main beam characteristics to assure the functionality of the therapy delivery system and the patient safety system. The daily QA program entails new hardware and software solutions for a highly efficient clinical operation. In this paper, we describe a dosimetric phantom used for verifying the most critical beam parameters and the software architecture developed for a fully automated QA procedure. The connection between our QA software and the database allows us to store the data collected on a daily basis and use it for trend analysis over longer periods of time. All the data presented here have been collected during a time span of over two years, since the beginning of the Gantry 2 clinical operation in 2013. Our procedure operates in a stable way and delivers the expected beam quality. The daily QA program takes only 20 min. At the same time, the comprehensive approach allows us to avoid most of the weekly and monthly QA checks and increases the clinical beam availability.
NASA Technical Reports Server (NTRS)
Fitz, Rhonda; Whitman, Gerek
2016-01-01
Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IVV) Program, with Software Assurance Research Program support, extracted FM architectures across the IVV portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IVV projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management.
SafetyAnalyst : software tools for safety management of specific highway sites
DOT National Transportation Integrated Search
2010-07-01
SafetyAnalyst provides a set of software tools for use by state and local highway agencies for highway safety management. SafetyAnalyst can be used by highway agencies to improve their programming of site-specific highway safety improvements. SafetyA...
Reliability of Beam Loss Monitors System for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2004-11-01
The employment of superconducting magnets in high energy colliders opens challenging failure scenarios and brings new criticalities for the whole system protection. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses, while at medium and higher loss time it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, using temperature and radiation damage experimental data as well as using standard databases. All the data have been processed by reliability software (Isograph). The analysis ranges from the components data to the system configuration.
A Virtual Laboratory for Aviation and Airspace Prognostics Research
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Gorospe, George; Teubert, Christ; Quach, Cuong C.; Hogge, Edward; Darafsheh, Kaveh
2017-01-01
Integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other aviation technologies, in the airspace is becoming more and more complicated, and will continue to do so in the future. Inclusion of new technology and complexity into the airspace increases the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems and systems of systems can be challenging, expensive, and at times unsafe when implementing real life scenarios. The application of prognostics to aviation and airspace management may produce new tools and insight into these problems. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. In our research, we develop a live, distributed, hardware- in-the-loop Prognostics Virtual Laboratory testbed for aviation and airspace prognostics. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. In our earlier work1 we discussed the initial Prognostics Virtual Laboratory testbed development work and related results for milestones 1 & 2. This paper describes the design, development, and testing of the integrated tested which are part of milestone 3, along with our next steps for validation of this work. Through a framework consisting of software/hardware modules and associated interface clients, the distributed testbed enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. The testbed modules can be used cohesively to construct complex and relevant airspace scenarios for research. Four modules are key to this research: the virtual aircraft module which uses the X-Plane simulator and X-PlaneConnect toolbox, the live aircraft module which connects fielded aircraft using onboard cellular communications devices, the hardware in the loop (HITL) module which connects laboratory based bench-top hardware testbeds and the research module which contains diagnostics and prognostics tools for analysis of live air traffic situations and vehicle health conditions. The testbed also features other modules for data recording and playback, information visualization, and air traffic generation. Software reliability, safety, and latency are some of the critical design considerations in development of the testbed.
A hazard control system for robot manipulators
NASA Technical Reports Server (NTRS)
Carter, Ruth Chiang; Rad, Adrian
1991-01-01
A robot for space applications will be required to complete a variety of tasks in an uncertain, harsh environment. This fact presents unusual and highly difficult challenges to ensuring the safety of astronauts and keeping the equipment they depend on from becoming damaged. The systematic approach being taken to control hazards that could result from introducing robotics technology in the space environment is described. First, system safety management and engineering principles, techniques, and requirements are discussed as they relate to Shuttle payload design and operation in general. The concepts of hazard, hazard category, and hazard control, as defined by the Shuttle payload safety requirements, is explained. Next, it is shown how these general safety management and engineering principles are being implemented on an actual project. An example is presented of a hazard control system for controlling one of the hazards identified for the Development Test Flight (DTF-1) of NASA's Flight Telerobotic Servicer, a teleoperated space robot. How these schemes can be applied to terrestrial robots is discussed as well. The same software monitoring and control approach will insure the safe operation of a slave manipulator under teleoperated or autonomous control in undersea, nuclear, or manufacturing applications where the manipulator is working in the vicinity of humans or critical hardware.
A method for identifying EMI critical circuits during development of a large C3
NASA Astrophysics Data System (ADS)
Barr, Douglas H.
The circuit analysis methods and process Boeing Aerospace used on a large, ground-based military command, control, and communications (C3) system are described. This analysis was designed to help identify electromagnetic interference (EMI) critical circuits. The methodology used the MIL-E-6051 equipment criticality categories as the basis for defining critical circuits, relational database technology to help sort through and account for all of the approximately 5000 system signal cables, and Macintosh Plus personal computers to predict critical circuits based on safety margin analysis. The EMI circuit analysis process systematically examined all system circuits to identify which ones were likely to be EMI critical. The process used two separate, sequential safety margin analyses to identify critical circuits (conservative safety margin analysis, and detailed safety margin analysis). These analyses used field-to-wire and wire-to-wire coupling models using both worst-case and detailed circuit parameters (physical and electrical) to predict circuit safety margins. This process identified the predicted critical circuits that could then be verified by test.
Java Source Code Analysis for API Migration to Embedded Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Victor; McCoy, James A.; Guerrero, Jonathan
Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered bymore » APIs and extension libraries can made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated the analysis of incomplete code bases are eliminated.« less
Autonomous Flight Safety System
NASA Technical Reports Server (NTRS)
Simpson, James
2010-01-01
The Autonomous Flight Safety System (AFSS) is an independent self-contained subsystem mounted onboard a launch vehicle. AFSS has been developed by and is owned by the US Government. Autonomously makes flight termination/destruct decisions using configurable software-based rules implemented on redundant flight processors using data from redundant GPS/IMU navigation sensors. AFSS implements rules determined by the appropriate Range Safety officials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric
Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface,more » (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) of these events would be submittable to a NRS of which the majority was related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.« less
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-concious systems. Theoretical investigations of Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Practical Issues in Implementing Software Reliability Measurement
NASA Technical Reports Server (NTRS)
Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.
1999-01-01
Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
Onboard Monitoring and Reporting for Commercial Motor Vehicle Safety Final Report
DOT National Transportation Integrated Search
2008-02-01
This Final Report describes the process and product from the project, Onboard Monitoring and Reporting for Commercial Motor Vehicle Safety (OBMS), in which a prototypical suite of hardware and software on a class 8 truck was developed and tested. The...
A Generic Software Safety Document Generator
NASA Technical Reports Server (NTRS)
Denney, Ewen; Venkatesan, Ram Prasad
2004-01-01
Formal certification is based on the idea that a mathematical proof of some property of a piece of software can be regarded as a certificate of correctness which, in principle, can be subjected to external scrutiny. In practice, however, proofs themselves are unlikely to be of much interest to engineers. Nevertheless, it is possible to use the information obtained from a mathematical analysis of software to produce a detailed textual justification of correctness. In this paper, we describe an approach to generating textual explanations from automatically generated proofs of program safety, where the proofs are of compliance with an explicit safety policy that can be varied. Key to this is tracing proof obligations back to the program, and we describe a tool which implements this to certify code auto-generated by AutoBayes and AutoFilter, program synthesis systems under development at the NASA Ames Research Center. Our approach is a step towards combining formal certification with traditional certification methods.
Software Development Standard Processes (SDSP)
NASA Technical Reports Server (NTRS)
Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.;
2011-01-01
A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities, to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL s Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL s software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.
Space Shuttle Software Development and Certification
NASA Technical Reports Server (NTRS)
Orr, James K.; Henderson, Johnnie A
2000-01-01
Man-rated software, "software which is in control of systems and environments upon which human life is critically dependent," must be highly reliable. The Space Shuttle Primary Avionics Software System is an excellent example of such a software system. Lessons learn from more than 20 years of effort have identified basic elements that must be present to achieve this high degree of reliability. The elements include rigorous application of appropriate software development processes, use of trusted tools to support those processes, quantitative process management, and defect elimination and prevention. This presentation highlights methods used within the Space Shuttle project and raises questions that must be addressed to provide similar success in a cost effective manner on future long-term projects where key application development tools are COTS rather than internally developed custom application development tools
Gade, Anne Lill; Ovrebø, Steinar; Hylland, Ketil
2008-07-01
The goal of REACH is the safe use of chemicals. This study examines the efficiency and usefulness of two draft technical guidance notes in the REACH Interim Project 3.2-2 for the development of the chemical safety report and exposure scenarios. A case study was carried out for a paint system for protection of structural steel. The focuses of the study were risk assessment of preparations based on Derived No Effect Level (DNEL) and Predicted No Effect Concentrations (PNEC) and on effective and accurate communication in the supply chain. Exposure scenarios and generic descriptions of uses, risk management measures, and exposure determinants were developed. The study showed that communication formats, software tools, and guidelines for chemical risk assessment need further adjustment to preparations and real-life situations. Web platforms may simplify such communication. The downstream formulator needs basic substance data from the substance manufacturer during the pre-registration phase to develop exposure scenarios for preparations. Default values need to be communicated in the supply chain because these were critical for the derivation of applicable risk management demands. The current guidelines which rely on the available toxicological knowledge are insufficient to advise downstream users on how to develop exposure scenarios for preparations.
Sweidan, Michelle; Williamson, Margaret; Reeve, James F; Harvey, Ken; O'Neill, Jennifer A; Schattner, Peter; Snowdon, Teri
2010-04-15
Electronic prescribing is increasingly being used in primary care and in hospitals. Studies on the effects of e-prescribing systems have found evidence for both benefit and harm. The aim of this study was to identify features of e-prescribing software systems that support patient safety and quality of care and that are useful to the clinician and the patient, with a focus on improving the quality use of medicines. Software features were identified by a literature review, key informants and an expert group. A modified Delphi process was used with a 12-member multidisciplinary expert group to reach consensus on the expected impact of the features in four domains: patient safety, quality of care, usefulness to the clinician and usefulness to the patient. The setting was electronic prescribing in general practice in Australia. A list of 114 software features was developed. Most of the features relate to the recording and use of patient data, the medication selection process, prescribing decision support, monitoring drug therapy and clinical reports. The expert group rated 78 of the features (68%) as likely to have a high positive impact in at least one domain, 36 features (32%) as medium impact, and none as low or negative impact. Twenty seven features were rated as high positive impact across 3 or 4 domains including patient safety and quality of care. Ten features were considered "aspirational" because of a lack of agreed standards and/or suitable knowledge bases. This study defines features of e-prescribing software systems that are expected to support safety and quality, especially in relation to prescribing and use of medicines in general practice. The features could be used to develop software standards, and could be adapted if necessary for use in other settings and countries.
2010-01-01
Background Electronic prescribing is increasingly being used in primary care and in hospitals. Studies on the effects of e-prescribing systems have found evidence for both benefit and harm. The aim of this study was to identify features of e-prescribing software systems that support patient safety and quality of care and that are useful to the clinician and the patient, with a focus on improving the quality use of medicines. Methods Software features were identified by a literature review, key informants and an expert group. A modified Delphi process was used with a 12-member multidisciplinary expert group to reach consensus on the expected impact of the features in four domains: patient safety, quality of care, usefulness to the clinician and usefulness to the patient. The setting was electronic prescribing in general practice in Australia. Results A list of 114 software features was developed. Most of the features relate to the recording and use of patient data, the medication selection process, prescribing decision support, monitoring drug therapy and clinical reports. The expert group rated 78 of the features (68%) as likely to have a high positive impact in at least one domain, 36 features (32%) as medium impact, and none as low or negative impact. Twenty seven features were rated as high positive impact across 3 or 4 domains including patient safety and quality of care. Ten features were considered "aspirational" because of a lack of agreed standards and/or suitable knowledge bases. Conclusions This study defines features of e-prescribing software systems that are expected to support safety and quality, especially in relation to prescribing and use of medicines in general practice. The features could be used to develop software standards, and could be adapted if necessary for use in other settings and countries. PMID:20398294
78 FR 1162 - Cardiovascular Devices; Reclassification of External Cardiac Compressor
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... safety and electromagnetic compatibility; For devices containing software, software verification... electromagnetic compatibility; For devices containing software, software verification, validation, and hazard... electrical components, appropriate analysis and testing must validate electrical safety and electromagnetic...
NASA Technical Reports Server (NTRS)
Pitman, C. L.; Erb, D. M.; Izygon, M. E.; Fridge, E. M., III; Roush, G. B.; Braley, D. M.; Savely, R. T.
1992-01-01
The United State's big space projects of the next decades, such as Space Station and the Human Exploration Initiative, will need the development of many millions of lines of mission critical software. NASA-Johnson (JSC) is identifying and developing some of the Computer Aided Software Engineering (CASE) technology that NASA will need to build these future software systems. The goal is to improve the quality and the productivity of large software development projects. New trends are outlined in CASE technology and how the Software Technology Branch (STB) at JSC is endeavoring to provide some of these CASE solutions for NASA is described. Key software technology components include knowledge-based systems, software reusability, user interface technology, reengineering environments, management systems for the software development process, software cost models, repository technology, and open, integrated CASE environment frameworks. The paper presents the status and long-term expectations for CASE products. The STB's Reengineering Application Project (REAP), Advanced Software Development Workstation (ASDW) project, and software development cost model (COSTMODL) project are then discussed. Some of the general difficulties of technology transfer are introduced, and a process developed by STB for CASE technology insertion is described.
NASA Technical Reports Server (NTRS)
Goethel, Thomas; Glesner, Sabine
2009-01-01
The correctness of safety-critical embedded software is crucial, whereas non-functional properties like deadlock-freedom and real-time constraints are particularly important. The real-time calculus Timed Communicating Sequential Processes (CSP) is capable of expressing such properties and can therefore be used to verify embedded software. In this paper, we present our formalization of Timed CSP in the Isabelle/HOL theorem prover, which we have formulated as an operational coalgebraic semantics together with bisimulation equivalences and coalgebraic invariants. Furthermore, we apply these techniques in an abstract specification with real-time constraints, which is the basis for current work in which we verify the components of a simple real-time operating system deployed on a satellite.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...
NASA Technical Reports Server (NTRS)
Aguilar, Michael L.; Bonanne, Kevin H.; Favretto, Jeffrey A.; Jackson, Maddalena M.; Jones, Stephanie L.; Mackey, Ryan M.; Sarrel, Marc A.; Simpson, Kimberly A.
2014-01-01
The Exploration Systems Development (ESD) Standing Review Board (SRB) requested the NASA Engineering and Safety Center (NESC) conduct an independent review of the plan developed by Ground Systems Development and Operations (GSDO) for identifying models and emulators to create a tool(s) to verify their command and control software. The NESC was requested to identify any issues or weaknesses in the GSDO plan. This document contains the outcome of the NESC review.
Static load simulation of steering knuckle for a formula student race car
NASA Astrophysics Data System (ADS)
Saputro, Bagus Aulia; Ubaidillah, Triono, Dicky Agus; Pratama, Dzaky Roja; Cahyono, Sukmaji Indro; Imaduddin, Fitrian
2018-02-01
This research aims to determine the stress distribution which occurs on the steering knuckle and to define its safety factor number. Steering knuckle is the most critical part of a car's steering system. Steering knuckle supports the tie rod, brake caliper, and the wheels to provide stability. Steering knuckle withstands the load which given on the front wheels and functions as the wheel's axis. Balljoint and king support the rotation of the suspension arm. When the car is in idle position, knuckle hold the weight of the car, it gets braking force when it's braking and cornering. Knuckle is designed to have the strength that could withstand load and to have a good safety factor value. Knuckle is designed using Fusion software then simulated using Fusion simulation software with a static load, moment braking force, and cornering force as the loads in this simulation. The simulation works in ideal condition. The result of this simulation is satisfying. This simulation produces a maximum displacement of 0.01281mm, the maximum shear stress is 3.707 MPa on the stub hole, and the safety factor is 5.24. The material used for this product is mild steel AISI 1018.
DOT National Transportation Integrated Search
2015-04-01
The principal objectives and scope of this project were to provide a software tracking tool to improve : decision-making for highway safety. A literature search revealed that purchasing and customizing : existing software was not feasible and a new s...
Dependability modeling and assessment in UML-based software development.
Bernardi, Simona; Merseguer, José; Petriu, Dorina C
2012-01-01
Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.
Dependability Modeling and Assessment in UML-Based Software Development
Bernardi, Simona; Merseguer, José; Petriu, Dorina C.
2012-01-01
Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results. PMID:22988428
A performance improvement plan to increase nurse adherence to use of medication safety software.
Gavriloff, Carrie
2012-08-01
Nurses can protect patients receiving intravenous (IV) medication by using medication safety software to program "smart" pumps to administer IV medications. After a patient safety event identified inconsistent use of medication safety software by nurses, a performance improvement team implemented the Deming Cycle performance improvement methodology. The combined use of improved direct care nurse communication, programming strategies, staff education, medication safety champions, adherence monitoring, and technology acquisition resulted in a statistically significant (p < .001) increase in nurse adherence to using medication safety software from 28% to above 85%, exceeding national benchmark adherence rates (Cohen, Cooke, Husch & Woodley, 2007; Carefusion, 2011). Copyright © 2012 Elsevier Inc. All rights reserved.
The Federal Aviation Administration Plan for Research, Engineering and Development, 1994
1994-05-01
Aeronautical Data Link Communications and (COTS) runway incursion system software will Applications, and 051-130 Airport Safety be demonstrated as a... airport departure and ar- efforts rival scheduling plans that optimize daily traffic flows for long-range flights between major city- * OTFP System to...Expanded HARS planning capabilities to in- aviation dispatchers to develop optimized high clude enhanced communications software for altitude flight
Frazzoli, Chiara; Petrini, Carlo; Mantovani, Alberto
2009-01-01
Development is defined sustainable when it meets the needs of the present without compromising the ability of future generations to meet their own needs. Pivoting on social, environmental and economic aspects of food chain sustainability, this paper presents the concept of sustainable food safety based on the prevention of risks and burden of poor health for generations to come. Under this respect, the assessment of long-term, transgenerational risks is still hampered by serious scientific uncertainties. Critical issues to the development of a sustainable food safety framework may include: endocrine disrupters as emerging contaminants that specifically target developing organisms; toxicological risks assessment in Countries at the turning point of development; translating knowledge into toxicity indexes to support risk management approaches, such as hazard analysis and critical control points (HACCP); the interplay between chemical hazards and social determinants. Efforts towards the comprehensive knowledge and management of key factors of sustainable food safety appear critical to the effectiveness of the overall sustainability policies.
Overview of Risk Mitigation for Safety-Critical Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report presents a high-level overview of a general strategy to mitigate the risks from threats to safety-critical computer-based systems. In this context, a safety threat is a process or phenomenon that can cause operational safety hazards in the form of computational system failures. This report is intended to provide insight into the safety-risk mitigation problem and the characteristics of potential solutions. The limitations of the general risk mitigation strategy are discussed and some options to overcome these limitations are provided. This work is part of an ongoing effort to enable well-founded assurance of safety-related properties of complex safety-critical computer-based aircraft systems by developing an effective capability to model and reason about the safety implications of system requirements and design.
Formal Verification Toolkit for Requirements and Early Design Stages
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Miller, Sheena Judson
2011-01-01
Efficient flight software development from natural language requirements needs an effective way to test designs earlier in the software design cycle. A method to automatically derive logical safety constraints and the design state space from natural language requirements is described. The constraints can then be checked using a logical consistency checker and also be used in a symbolic model checker to verify the early design of the system. This method was used to verify a hybrid control design for the suit ports on NASA Johnson Space Center's Space Exploration Vehicle against safety requirements.
An overview of the V&V of Flight-Critical Systems effort at NASA
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.
2011-01-01
As the US is getting ready for the Next Generation (NextGen) of Air Traffic System, there is a growing concern that the current techniques for verification and validation will not be adequate for the changes to come. The JPDO (in charge of implementing NextGen) has given NASA a mandate to address the problem and it resulted in the formulation of the V&V of Flight-Critical Systems effort. This research effort is divided into four themes: argument-based safety assurance, distributed systems, authority and autonomy, and, software intensive systems. This paper presents an overview of the technologies that will address the problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soubies, B.; Henry, J.Y.; Le Meur, M.
1300 MWe pressurised water reactors (PWRs), like the 1400 MWe reactors, operate with microprocessor-based safety systems. This is particularly the case for the Digital Integrated Protection System (SPIN), which trips the reactor in an emergency and sets in action the safeguard functions. The softwares used in these systems must therefore be highly dependable in the execution of their functions. In the case of SPIN, three players are working at different levels to achieve this goal: the protection system manufacturer, Merlin Gerin; the designer of the nuclear steam supply system, Framatome; the operator of the nuclear power plants, Electricite de Francemore » (EDF), which is also responsible for the safety of its installations. Regulatory licenses are issued by the French safety authority, the Nuclear Installations Safety Directorate (French abbreviation DSIN), subsequent to a successful examination of the technical provisions adopted by the operator. This examination is carried out by the IPSN and the standing group on nuclear reactors. This communication sets out: the methods used by the manufacturer to develop SPIN software for the 1400 MWe PWRs (N4 series); the approach adopted by the IPSN to evaluate the safety software of the protection system for the N4 series of reactors.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Chen, Liangzhe; Duan, Sisi
Abstract Critical Infrastructures (CIs) such as energy, water, and transportation are complex networks that are crucial for sustaining day-to-day commodity flows vital to national security, economic stability, and public safety. The nature of these CIs is such that failures caused by an extreme weather event or a man-made incident can trigger widespread cascading failures, sending ripple effects at regional or even national scales. To minimize such effects, it is critical for emergency responders to identify existing or potential vulnerabilities within CIs during such stressor events in a systematic and quantifiable manner and take appropriate mitigating actions. We present here amore » novel critical infrastructure monitoring and analysis system named URBAN-NET. The system includes a software stack and tools for monitoring CIs, pre-processing data, interconnecting multiple CI datasets as a heterogeneous network, identifying vulnerabilities through graph-based topological analysis, and predicting consequences based on what-if simulations along with visualization. As a proof-of-concept, we present several case studies to show the capabilities of our system. We also discuss remaining challenges and future work.« less
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Requirement Specifications for Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission... issuing a revised regulatory guide (RG), revision 1 of RG 1.172, ``Software Requirement Specifications for...
NASA Technical Reports Server (NTRS)
Bekele, Gete
2002-01-01
This document explores the use of advanced computer technologies with an emphasis on object-oriented design to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) To use current sound software engineering practices, object-orientation; 2) To improve on software development time, maintenance, execution and management; 3) To provide an alternate design choice for control, implementation, and performance.
Accelerating NASA GN&C Flight Software Development
NASA Technical Reports Server (NTRS)
Tamblyn, Scott; Henry, Joel; Rapp, John
2010-01-01
When the guidance, navigation, and control (GN&C) system for the Orion crew vehicle undergoes Critical Design Review (CDR), more than 90% of the flight software will already be developed - a first for NASA on a project of this scope and complexity. This achievement is due in large part to a new development approach using Model-Based Design.
Perspectives on NASA flight software development - Apollo, Shuttle, Space Station
NASA Technical Reports Server (NTRS)
Garman, John R.
1990-01-01
Flight data systems' software development is chronicled for the period encompassing NASA's Apollo, Space Shuttle, and (ongoing) Space Station Freedom programs, with attention to the methodologies and 'development tools' employed in each case and their mutual relationships. A dominant concern in all three programs has been the accommodation of software change; it has also been noted that any such long-term program carries the additional challenge of identifying which elements of its software-related 'institutional memory' are most critical, in order to preclude their loss through the retirement, promotion, or transfer of its 'last expert'.
Designing an architectural style for Pervasive Healthcare systems.
Rafe, Vahid; Hajvali, Masoumeh
2013-04-01
Nowadays, the Pervasive Healthcare (PH) systems are considered as an important research area. These systems have a dynamic structure and configuration. Therefore, an appropriate method for designing such systems is necessary. The Publish/Subscribe Architecture (pub/sub) is one of the convenient architectures to support such systems. PH systems are safety critical; hence, errors can bring disastrous results. To prevent such problems, a powerful analytical tool is required. So using a proper formal language like graph transformation systems for developing of these systems seems necessary. But even if software engineers use such high level methodologies, errors may occur in the system under design. Hence, it should be investigated automatically and formally that whether this model of system satisfies all their requirements or not. In this paper, a dynamic architectural style for developing PH systems is presented. Then, the behavior of these systems is modeled and evaluated using GROOVE toolset. The results of the analysis show its high reliability.
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.
2014-01-01
Control-theoretic modeling of human operator's dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for pre-dicting pilot stick input during changing aircraft dynamics and deducing changes in pilot behavior are presented This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate human adverse behaviors, hardware failures, and software anomalies with autono-my that may ameliorate these undesirable effects. However, appropriate timing of when au-tonomy should assume control is dependent on criticality of actions to safety, sensitivity of methods to accurately detect these adverse changes, and effects of changes in levels of auto-mation of the system as a whole.
NASA Technical Reports Server (NTRS)
Wieseman, Carol D.; Christhilf, David; Perry, Boyd, III
2012-01-01
An important objective of the Semi-Span Super-Sonic Transport (S4T) wind tunnel model program was the demonstration of Flutter Suppression (FS), Gust Load Alleviation (GLA), and Ride Quality Enhancement (RQE). It was critical to evaluate the stability and robustness of these control laws analytically before testing them and experimentally while testing them to ensure safety of the model and the wind tunnel. MATLAB based software was applied to evaluate the performance of closed-loop systems in terms of stability and robustness. Existing software tools were extended to use analytical representations of the S4T and the control laws to analyze and evaluate the control laws prior to testing. Lessons were learned about the complex windtunnel model and experimental testing. The open-loop flutter boundary was determined from the closed-loop systems. A MATLAB/Simulink Simulation developed under the program is available for future work to improve the CPE process. This paper is one of a series of that comprise a special session, which summarizes the S4T wind-tunnel program.
Experience report: Using formal methods for requirements analysis of critical spacecraft software
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.; Ampo, Yoko
1994-01-01
Formal specification and analysis of requirements continues to gain support as a method for producing more reliable software. However, the introduction of formal methods to a large software project is difficult, due in part to the unfamiliarity of the specification languages and the lack of graphics. This paper reports results of an investigation into the effectiveness of formal methods as an aid to the requirements analysis of critical, system-level fault-protection software on a spacecraft currently under development. Our experience indicates that formal specification and analysis can enhance the accuracy of the requirements and add assurance prior to design development in this domain. The work described here is part of a larger, NASA-funded research project whose purpose is to use formal-methods techniques to improve the quality of software in space applications. The demonstration project described here is part of the effort to evaluate experimentally the effectiveness of supplementing traditional engineering approaches to requirements specification with the more rigorous specification and analysis available with formal methods.
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty yearsmore » of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.« less
FY 2002 Report on Software Visualization Techniques for IV and V
NASA Technical Reports Server (NTRS)
Fotta, Michael E.
2002-01-01
One of the major challenges software engineers often face in performing IV&V is developing an understanding of a system created by a development team they have not been part of. As budgets shrink and software increases in complexity, this challenge will become even greater as these software engineers face increased time and resource constraints. This research will determine which current aspects of providing this understanding (e.g., code inspections, use of control graphs, use of adjacency matrices, requirements traceability) are critical to the performing IV&V and amenable to visualization techniques. We will then develop state-of-the-art software visualization techniques to facilitate the use of these aspects to understand software and perform IV&V.
NASA Software Engineering Benchmarking Study
NASA Technical Reports Server (NTRS)
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5.onsolidate, collect and, if needed, develop common processes principles and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels. 6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects. 7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project. 8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training. 9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it. 10. Assess Agency-wide licenses for commonly used software tools. 11. Fill the knowledge gap in common software engineering practices for new hires and co-ops.12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards. 13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in the current practice. 14. Continue interactions with external software engineering environment through collaborations, knowledge sharing, and benchmarking.
NASA Astrophysics Data System (ADS)
Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel
2013-09-01
Mexico is a country where the experience to build software for satellite applications is beginning. This is a delicate situation because in the near future we will need to develop software for the SATEX-II (Mexican Experimental Satellite). SATEX- II is a SOMECyTA's project (the Mexican Society of Aerospace Science and Technology). We have experienced applying software development methodologies, like TSP (Team Software Process) and SCRUM in other areas. Then, we analyzed these methodologies and we concluded: these can be applied to develop software for the SATEX-II, also, we supported these methodologies with SSP-05-0 Standard in particular with ESA PSS-05-11. Our analysis was focusing on main characteristics of each methodology and how these methodologies could be used with the ESA PSS 05-0 Standards. Our outcomes, in general, may be used by teams who need to build small satellites, but, in particular, these are going to be used when we will build the on board software applications for the SATEX-II.
Reference Avionics Architecture for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.
2010-01-01
Developing and delivering infrastructure capable of supporting long-term manned operations to the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to development and deployment lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions, largely dependent on avionics for control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first concept uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second core concept is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements. These concepts are repeated constantly as architectural patterns in a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for Lunar surface systems elements.
NASA Astrophysics Data System (ADS)
Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun
2007-04-01
Human space travel is inherently dangerous. Hazardous conditions will exist. Real time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary to provide real time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborate failures predictions and formats failure protocol.
Validation of Safety-Critical Systems for Aircraft Loss-of-Control Prevention and Recovery
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
2012-01-01
Validation of technologies developed for loss of control (LOC) prevention and recovery poses significant challenges. Aircraft LOC can result from a wide spectrum of hazards, often occurring in combination, which cannot be fully replicated during evaluation. Technologies developed for LOC prevention and recovery must therefore be effective under a wide variety of hazardous and uncertain conditions, and the validation framework must provide some measure of assurance that the new vehicle safety technologies do no harm (i.e., that they themselves do not introduce new safety risks). This paper summarizes a proposed validation framework for safety-critical systems, provides an overview of validation methods and tools developed by NASA to date within the Vehicle Systems Safety Project, and develops a preliminary set of test scenarios for the validation of technologies for LOC prevention and recovery
Putting the Power of Configuration in the Hands of the Users
NASA Technical Reports Server (NTRS)
Al-Shihabi, Mary-Jo; Brown, Mark; Rigolini, Marianne
2011-01-01
Goal was to reduce the overall cost of human space flight while maintaining the most demanding standards for safety and mission success. In support of this goal, a project team was chartered to replace 18 legacy Space Shuttle nonconformance processes and systems with one fully integrated system Problem Reporting and Corrective Action (PRACA) processes provide a closed-loop system for the identification, disposition, resolution, closure, and reporting of all Space Shuttle hardware/software problems PRACA processes are integrated throughout the Space Shuttle organizational processes and are critical to assuring a safe and successful program Primary Project Objectives Develop a fully integrated system that provides an automated workflow with electronic signatures Support multiple NASA programs and contracts with a single "system" architecture Define standard processes, implement best practices, and minimize process variations
2016-10-27
Software Engineering Institute Carnegie Mellon University Pittsburgh, PA 15213 © 2016 Carnegie Mellon University [DISTRIBUTION STATEMENT A: This... Carnegie Mellon University [DISTRIBUTION STATEMENT A: This material has been approved for public release and unlimited distribution] Copyright 2016 Carnegie ... Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center sponsored by
Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2009-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Federal Aviation Administration Plan for Research, Engineering and Development 1993
1994-02-01
pace the United States economy. With no additional with technology, and help maintain economic major airports planned in the near term, the FAA growth...provides Route Software Development, 62-20 Terminal ARTCC and TRACON controllers with automa- ATC Automation (TATCA), 62-21 Airport Sur- tion aids for...Applications, and 051-130 Airport Safety (COTS) runway incursion system software will Technology. Capital Investment Plan projects: be demonstrated
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, K; Curran, B
I. Information Security Background (Speaker = Kevin McDonald) Evolution of Medical Devices Living and Working in a Hostile Environment Attack Motivations Attack Vectors Simple Safety Strategies Medical Device Security in the News Medical Devices and Vendors Summary II. Keeping Radiation Oncology IT Systems Secure (Speaker = Bruce Curran) Hardware Security Double-lock Requirements “Foreign” computer systems Portable Device Encryption Patient Data Storage System Requirements Network Configuration Isolating Critical Devices Isolating Clinical Networks Remote Access Considerations Software Applications / Configuration Passwords / Screen Savers Restricted Services / access Software Configuration Restriction Use of DNS to restrict accesse. Patches / Upgrades Awareness Intrusionmore » Prevention Intrusion Detection Threat Risk Analysis Conclusion Learning Objectives: Understanding how Hospital IT Requirements affect Radiation Oncology IT Systems. Illustrating sample practices for hardware, network, and software security. Discussing implementation of good IT security practices in radiation oncology. Understand overall risk and threats scenario in a networked environment.« less
A new approach for instrument software at Gemini
NASA Astrophysics Data System (ADS)
Gillies, Kim; Nunez, Arturo; Dunn, Jennifer
2008-07-01
Gemini Observatory is now developing its next generation of astronomical instruments, the Aspen instruments. These new instruments are sophisticated and costly requiring large distributed, collaborative teams. Instrument software groups often include experienced team members with existing mature code. Gemini has taken its experience from the previous generation of instruments and current hardware and software technology to create an approach for developing instrument software that takes advantage of the strengths of our instrument builders and our own operations needs. This paper describes this new software approach that couples a lightweight infrastructure and software library with aspects of modern agile software development. The Gemini Planet Imager instrument project, which is currently approaching its critical design review, is used to demonstrate aspects of this approach. New facilities under development will face similar issues in the future, and the approach presented here can be applied to other projects.
A research program in empirical computer science
NASA Technical Reports Server (NTRS)
Knight, J. C.
1991-01-01
During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.
Scalable collaborative risk management technology for complex critical systems
NASA Technical Reports Server (NTRS)
Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.
2004-01-01
We describe here our project and plans to develop methods, software tools, and infrastructure tools to address challenges relating to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working over distributed geographical and organizational domains and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques where the participants are not collocated.
Virtual reality for mine safety training.
Filigenzi, M T; Orr, T J; Ruff, T M
2000-06-01
Mining has long remained one of America's most hazardous occupations. Researchers believe that by developing realistic, affordable VR training software, miners will be able to receive accurate training in hazard recognition and avoidance. In addition, the VR software will allow miners to follow mine evacuation routes and safe procedures without exposing themselves to danger. This VR software may ultimately be tailored to provide training in other industries, such as the construction, agricultural, and petroleum industries.
NASA Technical Reports Server (NTRS)
Ling, Lisa
2014-01-01
For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.
Vallejo-Gutiérrez, Paula; Bañeres-Amella, Joaquim; Sierra, Eduardo; Casal, Jesús; Agra, Yolanda
2014-01-01
To describe the development process and characteristics of a patient safety incidents reporting system to be implemented in the Spanish National Health System, based on the context and the needs of the different stakeholders. Literature review and analysis of most relevant reporting systems, identification of more than 100 stakeholder's (patients, professionals, regional governments representatives) expectations and requirements, analysis of the legal context, consensus of taxonomy, development of the software and pilot test. Patient Safety Events Reporting and Learning system (Sistema de Notificación y Aprendizajepara la Seguridad del Paciente, SiNASP) is a generic reporting system for all types of incidents related to patient safety, voluntary, confidential, non punitive, anonymous or nominative with anonimization, system oriented, with local analysis of cases and based on the WHO International Classification for Patient Safety. The electronic program has an on-line form for reporting, a software to manage the incidents and improvement plans, and a scoreboard with process indicators to monitor the system. The reporting system has been designed to respond to the needs and expectations identified by the stakeholders, taking into account the lessons learned from the previous notification systems, the characteristics of the National Health System and the existing legal context. The development process presented and the characteristics of the system provide a comprehensive framework that can be used for future deployments of similar patient safety systems. Copyright © 2013 SECA. Published by Elsevier Espana. All rights reserved.
Treatment delivery software for a new clinical grade ultrasound system for thermoradiotherapy.
Novák, Petr; Moros, Eduardo G; Straube, William L; Myerson, Robert J
2005-11-01
A detailed description of a clinical grade Scanning Ultrasound Reflector Linear Array System (SURLAS) applicator was given in a previous paper [Med. Phys. 32, 230-240 (2005)]. In this paper we concentrate on the design, development, and testing of the personal computer (PC) based treatment delivery software that runs the therapy system. The SURLAS requires the coordinated interaction between the therapy applicator and several peripheral devices for its proper and safe operation. One of the most important tasks was the coordination of the input power sequences for the elements of two parallel opposed ultrasound arrays (eight 1.5 cm x 2 cm elements/array, array 1 and 2 operate at 1.9 and 4.9 MHz, respectively) in coordination with the position of a dual-face scanning acoustic reflector. To achieve this, the treatment delivery software can divide the applicator's treatment window in up to 64 sectors (minimum size of 2 cm x 2 cm), and control the power to each sector independently by adjusting the power output levels from the channels of a 16-channel radio-frequency generator. The software coordinates the generator outputs with the position of the reflector as it scans back and forth between the arrays. Individual sector control and dual frequency operation allows the SURLAS to adjust power deposition in three dimensions to superficial targets coupled to its treatment window. The treatment delivery software also monitors and logs several parameters such as temperatures acquired using a 16-channel thermocouple thermometry unit. Safety (in particular to patients) was the paramount concern and design criterion. Failure mode and effects analysis (FMEA) was applied to the applicator as well as to the entire therapy system in order to identify safety issues and rank their relative importance. This analysis led to the implementation of several safety mechanisms and a software structure where each device communicates with the controlling PC independently of the others. In case of a malfunction in any part of the system or a violation of a user-defined safety criterion based on temperature readings, the software terminates treatment immediately and the user is notified. The software development process consisting of problem analysis, design, implementation, and testing is presented in this paper. Once the software was finished and integrated with the hardware, the therapy system was extensively tested. Results demonstrated that the software operates the SURLAS as intended with minimum risk to future patients.
Nuclear Data Activities in Support of the DOE Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Westfall, R. M.; McKnight, R. D.
2005-05-01
The DOE Nuclear Criticality Safety Program (NCSP) provides the technical infrastructure maintenance for those technologies applied in the evaluation and performance of safe fissionable-material operations in the DOE complex. These technologies include an Analytical Methods element for neutron transport as well as the development of sensitivity/uncertainty methods, the performance of Critical Experiments, evaluation and qualification of experiments as Benchmarks, and a comprehensive Nuclear Data program coordinated by the NCSP Nuclear Data Advisory Group (NDAG). The NDAG gathers and evaluates differential and integral nuclear data, identifies deficiencies, and recommends priorities on meeting DOE criticality safety needs to the NCSP Criticality Safety Support Group (CSSG). Then the NDAG identifies the required resources and unique capabilities for meeting these needs, not only for performing measurements but also for data evaluation with nuclear model codes as well as for data processing for criticality safety applications. The NDAG coordinates effort with the leadership of the National Nuclear Data Center, the Cross Section Evaluation Working Group (CSEWG), and the Working Party on International Evaluation Cooperation (WPEC) of the OECD/NEA Nuclear Science Committee. The overall objective is to expedite the issuance of new data and methods to the DOE criticality safety user. This paper describes these activities in detail, with examples based upon special studies being performed in support of criticality safety for a variety of DOE operations.
Software engineering processes for Class D missions
NASA Astrophysics Data System (ADS)
Killough, Ronnie; Rose, Debi
2013-09-01
Software engineering processes are often seen as anathemas; thoughts of CMMI key process areas and NPR 7150.2A compliance matrices can motivate a software developer to consider other career fields. However, with adequate definition, common-sense application, and an appropriate level of built-in flexibility, software engineering processes provide a critical framework in which to conduct a successful software development project. One problem is that current models seem to be built around an underlying assumption of "bigness," and assume that all elements of the process are applicable to all software projects regardless of size and tolerance for risk. This is best illustrated in NASA's NPR 7150.2A in which, aside from some special provisions for manned missions, the software processes are to be applied based solely on the criticality of the software to the mission, completely agnostic of the mission class itself. That is, the processes applicable to a Class A mission (high priority, very low risk tolerance, very high national significance) are precisely the same as those applicable to a Class D mission (low priority, high risk tolerance, low national significance). This paper will propose changes to NPR 7150.2A, taking mission class into consideration, and discuss how some of these changes are being piloted for a current Class D mission—the Cyclone Global Navigation Satellite System (CYGNSS).
Ramírez-Fernández, Cristina; Morán, Alberto L; García-Canseco, Eloísa; Gómez-Montalvo, Jorge R
2017-03-23
1) To enhance the content of an ontology for designing virtual environments (VEs) for upper limb motor rehabilitation of stroke patients according to the suggestions and comments of rehabilitation specialists and software developers, 2) to characterize the perceived importance level of the ontology, 3) to determine the perceived usefulness of the ontology, and 4) to identify the safety characteristics of the ontology for VEs design according to the rehabilitation specialists. Using two semi-structured Web questionnaires, we asked six rehabilitation specialists and six software developers to provide us with their perception regarding the level of importance and the usability of the ontology. From their responses we have identified themes related to perceived and required safety characteristics of the ontology. Significant differences in the importance level were obtained for the Stroke Disability, VE Configuration, Outcome Measures, and Safety Calibration classes, which were perceived as highly important by rehabilitation specialists. Regarding usability, the ontology was perceived by both groups with high usefulness, ease of use, learnability and intention of use. Concerning the thematic analysis of recommendations, eight topics for safety characteristics of the ontology were identified: adjustment of therapy strategies; selection and delimitation of movements; selection and proper calibration of the interaction device; proper selection of measuring instruments; gradual modification of the difficulty of the exercise; adaptability and variability of therapy exercises; feedback according to the capabilities of the patient; and real-time support for exercise training. The rehabilitation specialists and software developers confirmed the importance of the information contained in the ontology regarding motor rehabilitation of the upper limb. Their recommendations highlight the safety features and the advantages of the ontology as a guide for the effective design of VEs.
Reliability of Beam Loss Monitor Systems for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2005-06-01
The increase of beam energy and beam intensity, together with the use of super conducting magnets, opens new failure scenarios and brings new criticalities for the whole accelerator protection system. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system, and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particles losses at 7 TeV and assisted by the Fast Beam Current Decay Monitors at 450 GeV. At medium and higher loss time it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, using temperature and radiation damage experimental data as well as using standard databases. All the data has been processed by reliability software (Isograph). The analysis spaces from the components data to the system configuration.
Will They Report It? Ethical Attitude of Graduate Software Engineers in Reporting Bad News
ERIC Educational Resources Information Center
Sajeev, A. S. M.; Crnkovic, Ivica
2012-01-01
Hiding critical information has resulted in disastrous failures of some major software projects. This paper investigates, using a subset of Keil's test, how graduates (70% of them with work experience) from different cultural backgrounds who are enrolled in a postgraduate course on global software development would handle negative information that…
Software Build and Delivery Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, Robert W.
2016-07-10
This presentation deals with the hierarchy of software build and delivery systems. One of the goals is to maximize the success rate of new users and developers when first trying your software. First impressions are important. Early successes are important. This also reduces critical documentation costs. This is a presentation focused on computer science and goes into detail about code documentation.
Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay
2011-07-15
Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addressesmore » many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.« less
Report of the Panel on Computer and Information Technology
NASA Technical Reports Server (NTRS)
Lundstrom, Stephen F.; Larsen, Ronald L.
1984-01-01
Aircraft have become more and more dependent on computers (information processing) for improved performance and safety. It is clear that this activity will grow, since information processing technology has advanced by a factor of 10 every 5 years for the past 35 years and will continue to do so. Breakthroughs in device technology, from vacuum tubes through transistors to integrated circuits, contribute to this rapid pace. This progress is nearly matched by similar, though not as dramatic, advances in numerical software and algorithms. Progress has not been easy. Many technical and nontechnical challenges were surmounted. The outlook is for continued growth in capability but will require surmounting new challenges. The technology forecast presented in this report has been developed by extrapolating current trends and assessing the possibilities of several high-risk research topics. In the process, critical problem areas that require research and development emphasis have been identified. The outlook assumes a positive perspective; the projected capabilities are possible by the year 2000, and adequate resources will be made available to achieve them. Computer and information technology forecasts and the potential impacts of this technology on aeronautics are identified. Critical issues and technical challenges underlying the achievement of forecasted performance and benefits are addressed.
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.; Schumann, Johann; Guenther, Kurt; Bosworth, John
2006-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable autonomous flight control and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments [1-2]. At the present time, however, it is unknown how adaptive algorithms can be routinely verified, validated, and certified for use in safety-critical applications. Rigorous methods for adaptive software verification end validation must be developed to ensure that. the control software functions as required and is highly safe and reliable. A large gap appears to exist between the point at which control system designers feel the verification process is complete, and when FAA certification officials agree it is complete. Certification of adaptive flight control software verification is complicated by the use of learning algorithms (e.g., neural networks) and degrees of system non-determinism. Of course, analytical efforts must be made in the verification process to place guarantees on learning algorithm stability, rate of convergence, and convergence accuracy. However, to satisfy FAA certification requirements, it must be demonstrated that the adaptive flight control system is also able to fail and still allow the aircraft to be flown safely or to land, while at the same time providing a means of crew notification of the (impending) failure. It was for this purpose that the NASA Ames Confidence Tool was developed [3]. This paper presents the Confidence Tool as a means of providing in-flight software assurance monitoring of an adaptive flight control system. The paper will present the data obtained from flight testing the tool on a specially modified F-15 aircraft designed to simulate loss of flight control faces.
Implementing electronic handover: interventions to improve efficiency, safety and sustainability.
Alhamid, Sharifah Munirah; Lee, Desmond Xue-Yuan; Wong, Hei Man; Chuah, Matthew Bingfeng; Wong, Yu Jun; Narasimhalu, Kaavya; Tan, Thuan Tong; Low, Su Ying
2016-10-01
Effective handovers are critical for patient care and safety. Electronic handover tools are increasingly used today to provide an effective and standardized platform for information exchange. The implementation of an electronic handover system in tertiary hospitals can be a major challenge. Previous efforts in implementing an electronic handover tool failed due to poor compliance and buy-in from end-users. A new electronic handover tool was developed and incorporated into the existing electronic medical records (EMRs) for medical patients in Singapore General Hospital (SGH). There was poor compliance by on-call doctors in acknowledging electronic handovers, and lack of adherence to safety rules, raising concerns about the safety and efficiency of the electronic handover tool. Urgent measures were needed to ensure its safe and sustained use. A quality improvement group comprising stakeholders, including end-users, developed multi-faceted interventions using rapid PDSA (P-Plan, D-Do, S-Study, A-Act ) cycles to address these issues. Innovative solutions using media and online software provided cost-efficient measures to improve compliance. The percentage of unacknowledged handovers per day was used as the main outcome measure throughout all PDSA cycles. Doctors were also assessed for improvement in their knowledge of safety rules and their perception of the electronic handover tool. An electronic handover tool complementing daily clinical practice can be successfully implemented using solutions devised through close collaboration with end-users supported by the senior leadership. A combined 'bottom-up' and 'top-down' approach with regular process evaluations is crucial for its long-term sustainability. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
ORAM-SENTINEL{trademark} demonstration at Fitzpatrick. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, L.K.; Anderson, V.M.; Mohammadi, K.
1998-06-01
New York Power Authority, in cooperation with EPRI, installed the ORAM-SENTINEL{trademark} software at James A. Fitzpatrick (JAF) Nuclear Power Plant. This software incorporates models of safety systems and support systems that are used for defense-in-depth in the plant during outage and on-line periods. A secondary goal was to include some pre-analyzed risk results to validate the methodology for quantitative assessment of the plant risks during proposed on-line maintenance. During the past year, New York Power Authority personnel have become familiar with the formal computerized Safety Assessment process associated with on-line and outage maintenance. The report describes techniques and lessons learnedmore » during development of the ORAM-SENTINEL model at JAF. It overviews the systems important to the Safety Function Assessment Process and provides details on development of the Plant Transient Assessment process using the station emergency operating procedures. The assessment results are displayed by color (green, yellow, orange, red) to show decreasing safety conditions. The report describes use of the JAF Probabilistic Safety Assessment within the ORAM-SENTINEL code to calculate an instantaneous core damage frequency and the criteria by which this frequency is translated to a color indicator.« less
Means of escape provisions and evacuation simulation of public building in Malaysia and Singapore
NASA Astrophysics Data System (ADS)
Samad, Muna Hanim Abdul; Taib, Nooriati; Ying, Choo Siew
2017-10-01
The Uniform Building By-law 1984 of Malaysia is the legal document governing fire safety requirements in buildings. Its prescriptive nature has made the requirements out dated from the viewpoint of current performance based approach in most developed countries. The means of escape provisions is a critical requirement to safeguard occupants' safety in fire especially in public buildings. As stipulated in the UBBL 1984, the means of escape provisions includes sufficient escape routes, travel distance, protection of escape routes, etc. designated as means to allow occupants to escape within a safe period of time. This research aims at investigating the effectiveness of those provisions in public buildings during evacuation process involving massive crowd during emergencies. This research includes a scenario-based study on evacuation processes using two software i.e. PyroSim, a crowd modelling software to conduct smoke study and Pathfinder to stimulate evacuation model of building in Malaysia and Singapore as comparative study. The results show that the buildings used as case study were designed according to Malaysian UBBL 1984 and Singapore Firecode, 2013 respectively provide relative safe means of escape. The simulations of fire and smoke and coupled with simulation of evacuation have demonstrated that although there are adequate exits designated according to fire requirements, the impact of the geometry of atriums on the behavior of fire and smoke have significant effect on escape time especially for unfamiliar user of the premises.
NASA Technical Reports Server (NTRS)
Pena, Joaquin; Hinchey, Michael G.; Ruiz-Cortes, Antonio
2006-01-01
The field of Software Product Lines (SPL) emphasizes building a core architecture for a family of software products from which concrete products can be derived rapidly. This helps to reduce time-to-market, costs, etc., and can result in improved software quality and safety. Current AOSE methodologies are concerned with developing a single Multiagent System. We propose an initial approach to developing the core architecture of a Multiagent Systems Product Line (MAS-PL), exemplifying our approach with reference to a concept NASA mission based on multiagent technology.
NASA Astrophysics Data System (ADS)
Song, Young-Gi; Seol, Kyung-Tae; Jang, Ji-Ho; Kwon, Hyeok-Jung; Cho, Yong-Sub
2012-07-01
The Proton Engineering Frontier Project (PEFP) 20-MeV proton linear accelerator is currently operating at the Korea Atomic Energy Research Institute (KAERI). The ion source of the 100-MeV proton linac needs at least a 100-hour operation time. To meet the goal, we have developed a microwave ion source that uses no filament. For the ion source, a remote control system has been developed by using experimental physics and the industrial control system (EPICS) software framework. The control system consists of a versa module europa (VME) and EPICS-based embedded applications running on a VxWorks real-time operating system. The main purpose of the control system is to control and monitor the operational variables of the components remotely and to protect operators from radiation exposure and the components from critical problems during beam extraction. We successfully performed the operation test of the control system to confirm the degree of safety during the hardware performance.
UAS-Systems Integration, Validation, and Diagnostics Simulation Capability
NASA Technical Reports Server (NTRS)
Buttrill, Catherine W.; Verstynen, Harry A.
2014-01-01
As part of the Phase 1 efforts of NASA's UAS-in-the-NAS Project a task was initiated to explore the merits of developing a system simulation capability for UAS to address airworthiness certification requirements. The core of the capability would be a software representation of an unmanned vehicle, including all of the relevant avionics and flight control system components. The specific system elements could be replaced with hardware representations to provide Hardware-in-the-Loop (HWITL) test and evaluation capability. The UAS Systems Integration and Validation Laboratory (UAS-SIVL) was created to provide a UAS-systems integration, validation, and diagnostics hardware-in-the-loop simulation capability. This paper discusses how SIVL provides a robust and flexible simulation framework that permits the study of failure modes, effects, propagation paths, criticality, and mitigation strategies to help develop safety, reliability, and design data that can assist with the development of certification standards, means of compliance, and design best practices for civil UAS.
An Illumination Modeling System for Human Factors Analyses
NASA Technical Reports Server (NTRS)
Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)
2002-01-01
Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on earth, it is easily taken for granted. However, on orbit, because the sun will rise or set every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare is also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into the lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human factors oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major surfaces and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performances are measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting conditions and visibility conditions in space.
Validation of highly reliable, real-time knowledge-based systems
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1988-01-01
Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.
Virtual test: A student-centered software to measure student's critical thinking on human disease
NASA Astrophysics Data System (ADS)
Rusyati, Lilit; Firman, Harry
2016-02-01
The study "Virtual Test: A Student-Centered Software to Measure Student's Critical Thinking on Human Disease" is descriptive research. The background is importance of computer-based test that use element and sub element of critical thinking. Aim of this study is development of multiple choices to measure critical thinking that made by student-centered software. Instruments to collect data are (1) construct validity sheet by expert judge (lecturer and medical doctor) and professional judge (science teacher); and (2) test legibility sheet by science teacher and junior high school student. Participants consisted of science teacher, lecturer, and medical doctor as validator; and the students as respondent. Result of this study are describe about characteristic of virtual test that use to measure student's critical thinking on human disease, analyze result of legibility test by students and science teachers, analyze result of expert judgment by science teachers and medical doctor, and analyze result of trial test of virtual test at junior high school. Generally, result analysis shown characteristic of multiple choices to measure critical thinking was made by eight elements and 26 sub elements that developed by Inch et al.; complete by relevant information; and have validity and reliability more than "enough". Furthermore, specific characteristic of multiple choices to measure critical thinking are information in form science comic, table, figure, article, and video; correct structure of language; add source of citation; and question can guide student to critical thinking logically.
Some Challenges in the Design of Human-Automation Interaction for Safety-Critical Systems
NASA Technical Reports Server (NTRS)
Feary, Michael S.; Roth, Emilie
2014-01-01
Increasing amounts of automation are being introduced to safety-critical domains. While the introduction of automation has led to an overall increase in reliability and improved safety, it has also introduced a class of failure modes, and new challenges in risk assessment for the new systems, particularly in the assessment of rare events resulting from complex inter-related factors. Designing successful human-automation systems is challenging, and the challenges go beyond good interface development (e.g., Roth, Malin, & Schreckenghost 1997; Christoffersen & Woods, 2002). Human-automation design is particularly challenging when the underlying automation technology generates behavior that is difficult for the user to anticipate or understand. These challenges have been recognized in several safety-critical domains, and have resulted in increased efforts to develop training, procedures, regulations and guidance material (CAST, 2008, IAEA, 2001, FAA, 2013, ICAO, 2012). This paper points to the continuing need for new methods to describe and characterize the operational environment within which new automation concepts are being presented. We will describe challenges to the successful development and evaluation of human-automation systems in safety-critical domains, and describe some approaches that could be used to address these challenges. We will draw from experience with the aviation, spaceflight and nuclear power domains.
Performance of Compiler-Assisted Memory Safety Checking
2014-08-01
software developer has in mind a particular object to which the pointer should point, the intended referent. A memory access error occurs when an ac...Performance of Compiler-Assisted Memory Safety Checking David Keaton Robert C. Seacord August 2014 TECHNICAL NOTE CMU/SEI-2014-TN...based memory safety checking tool and the performance that can be achieved with two such tools whose source code is freely available. The note then
Haase, Rocco; Wunderlich, Maria; Dillenseger, Anja; Kern, Raimar; Akgün, Katja; Ziemssen, Tjalf
2018-04-01
For safety evaluation, randomized controlled trials (RCTs) are not fully able to identify rare adverse events. The richest source of safety data lies in the post-marketing phase. Real-world evidence (RWE) and observational studies are becoming increasingly popular because they reflect usefulness of drugs in real life and have the ability to discover uncommon or rare adverse drug reactions. Areas covered: Adding the documentation of psychological symptoms and other medical disciplines, the necessity for a complex documentation becomes apparent. The collection of high-quality data sets in clinical practice requires the use of special documentation software as the quality of data in RWE studies can be an issue in contrast to the data obtained from RCTs. The MSDS3D software combines documentation of patient data with patient management of patients with multiple sclerosis. Following a continuous development over several treatment-specific modules, we improved and expanded the realization of safety management in MSDS3D with regard to the characteristics of different treatments and populations. Expert opinion: eHealth-enhanced post-authorisation safety study may complete the fundamental quest of RWE for individually improved treatment decisions and balanced therapeutic risk assessment. MSDS3D is carefully designed to contribute to every single objective in this process.
Specification-based software sizing: An empirical investigation of function metrics
NASA Technical Reports Server (NTRS)
Jeffery, Ross; Stathis, John
1993-01-01
For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themself, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implication of these findings for organizations using function based metrics are explored.
ERIC Educational Resources Information Center
Acharya, Sushil; Manohar, Priyadarshan Anant; Wu, Peter; Maxim, Bruce; Hansen, Mary
2018-01-01
Active learning tools are critical in imparting real world experiences to the students within a classroom environment. This is important because graduates are expected to develop software that meets rigorous quality standards in functional and application domains with little to no training. However, there is a well-recognized need for the…
FY2017 Updates to the SAS4A/SASSYS-1 Safety Analysis Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanning, T. H.
The SAS4A/SASSYS-1 safety analysis software is used to perform deterministic analysis of anticipated events as well as design-basis and beyond-design-basis accidents for advanced fast reactors. It plays a central role in the analysis of U.S. DOE conceptual designs, proposed test and demonstration reactors, and in domestic and international collaborations. This report summarizes the code development activities that have taken place during FY2017. Extensions to the void and cladding reactivity feedback models have been implemented, and Control System capabilities have been improved through a new virtual data acquisition system for plant state variables and an additional Block Signal for a variablemore » lag compensator to represent reactivity feedback for novel shutdown devices. Current code development and maintenance needs are also summarized in three key areas: software quality assurance, modeling improvements, and maintenance of related tools. With ongoing support, SAS4A/SASSYS-1 can continue to fulfill its growing role in fast reactor safety analysis and help solidify DOE’s leadership role in fast reactor safety both domestically and in international collaborations.« less
Automated Theorem Proving in High-Quality Software Design
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
The amount and complexity of software developed during the last few years has increased tremendously. In particular, programs are being used more and more in embedded systems (from car-brakes to plant-control). Many of these applications are safety-relevant, i.e. a malfunction of hardware or software can cause severe damage or loss. Tremendous risks are typically present in the area of aviation, (nuclear) power plants or (chemical) plant control. Here, even small problems can lead to thousands of casualties and huge financial losses. Large financial risks also exist when computer systems are used in the area of telecommunication (telephone, electronic commerce) or space exploration. Computer applications in this area are not only subject to safety considerations, but also security issues are important. All these systems must be designed and developed to guarantee high quality with respect to safety and security. Even in an industrial setting which is (or at least should be) aware of the high requirements in Software Engineering, many incidents occur. For example, the Warshaw Airbus crash, was caused by an incomplete requirements specification. Uncontrolled reuse of an Ariane 4 software module was the reason for the Ariane 5 disaster. Some recent incidents in the telecommunication area, like illegal "cloning" of smart-cards of D2GSM handies, or the extraction of (secret) passwords from German T-online users show that also in this area serious flaws can happen. Due to the inherent complexity of computer systems, most authors claim that only a rigorous application of formal methods in all stages of the software life cycle can ensure high quality of the software and lead to real safe and secure systems. In this paper, we will have a look, in how far automated theorem proving can contribute to a more widespread application of formal methods and their tools, and what automated theorem provers (ATPs) must provide in order to be useful.
Managing Risk in Safety Critical Operations - Lessons Learned from Space Operations
NASA Technical Reports Server (NTRS)
Gonzalez, Steven A.
2002-01-01
The Mission Control Center (MCC) at Johnson Space Center (JSC) has a rich legacy of supporting Human Space Flight operations throughout the Apollo, Shuttle and International Space Station eras. Through the evolution of ground operations and the Mission Control Center facility, NASA has gained a wealth of experience of what it takes to manage the risk in Safety Critical Operations, especially when human life is at risk. The focus of the presentation will be on the processes (training, operational rigor, team dynamics) that enable the JSC/MCC team to be so successful. The presentation will also share the evolution of the Mission Control Center architecture and how the evolution was introduced while managing the risk to the programs supported by the team. The details of the MCC architecture (e.g., the specific software, hardware or tools used in the facility) will not be shared at the conference since it would not give any additional insight as to how risk is managed in Space Operations.
Integrated Systems Health Management (ISHM) Toolkit
NASA Technical Reports Server (NTRS)
Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim
2013-01-01
A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.
Ocloo, Josephine E; Fulop, Naomi J
2012-12-01
There has been considerable momentum within the NHS over the last 10 years to develop greater patient and public involvement (PPI). This commitment has been reflected in numerous policy initiatives. In patient safety, the drive to increase involvement has increasingly been seen as an important way of building a safety culture. Evidence suggests, however, that progress has been slow and even more variable than in health care generally. Given this context, the paper analyses some of the key underlying drivers for involvement in the wider context of health and social care and makes some suggestions on what lessons can be learned for developing the PPI agenda in patient safety. To develop PPI further, it is argued that a greater understanding is needed of the contested nature of involvement in patient safety and how this has similarities to the emergence of user involvement in other parts of the public services. This understanding has led to the development of a range of critical theories to guide involvement that also make more explicit the underlying factors that support and hinder involvement processes, often related to power inequities and control. Achieving greater PPI in patient safety is therefore seen to require a more critical framework for understanding processes of involvement that can also help guide and evaluate involvement practices. © 2011 Blackwell Publishing Ltd.
Nuclear Criticality Safety Data Book
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenbach, D. F.
The objective of this document is to support the revision of criticality safety process studies (CSPSs) for the Uranium Processing Facility (UPF) at the Y-12 National Security Complex (Y-12). This design analysis and calculation (DAC) document contains development and justification for generic inputs typically used in Nuclear Criticality Safety (NCS) DACs to model both normal and abnormal conditions of processes at UPF to support CSPSs. This will provide consistency between NCS DACs and efficiency in preparation and review of DACs, as frequently used data are provided in one reference source.
Developing open-source codes for electromagnetic geophysics using industry support
NASA Astrophysics Data System (ADS)
Key, K.
2017-12-01
Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.
Automation for System Safety Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul
2009-01-01
This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Software Issues at the User Interface
1991-05-01
successful integration of parallel computers into mainstream scientific computing. Clearly a compiler is the most important software tool available to a...Computer Science University of Colorado Boulder, CO 80309 ABSTRACT We review software issues that are critical to the successful integration of parallel...The development of an optimizing compiler of this quality, addressing communicaton instructions as well as computational instructions is a major
49 CFR Appendix C to Part 236 - Safety Assurance Criteria and Processes
Code of Federal Regulations, 2010 CFR
2010-10-01
... system (all its elements including hardware and software) must be designed to assure safe operation with... unsafe errors in the software due to human error in the software specification, design, or coding phases... (hardware or software, or both) are used in combination to ensure safety. If a common mode failure exists...
[Research progress of probe design software of oligonucleotide microarrays].
Chen, Xi; Wu, Zaoquan; Liu, Zhengchun
2014-02-01
DNA microarray has become an essential medical genetic diagnostic tool for its high-throughput, miniaturization and automation. The design and selection of oligonucleotide probes are critical for preparing gene chips with high quality. Several sets of probe design software have been developed and are available to perform this work now. Every set of the software aims to different target sequences and shows different advantages and limitations. In this article, the research and development of these sets of software are reviewed in line with three main criteria, including specificity, sensitivity and melting temperature (Tm). In addition, based on the experimental results from literatures, these sets of software are classified according to their applications. This review will be helpful for users to choose an appropriate probe-design software. It will also reduce the costs of microarrays, improve the application efficiency of microarrays, and promote both the research and development (R&D) and commercialization of high-performance probe design software.
ERIC Educational Resources Information Center
Eftekhari, Maryam; Sotoudehnama, Elaheh; Marandi, S. Susan
2016-01-01
Developing higher-order critical thinking skills as one of the central objectives of education has been recently facilitated via software packages. Whereas one such technology as computer-aided argument mapping is reported to enhance levels of critical thinking (van Gelder 2001), its application as a pedagogical tool in English as a Foreign…
An aspect-oriented approach for designing safety-critical systems
NASA Astrophysics Data System (ADS)
Petrov, Z.; Zaykov, P. G.; Cardoso, J. P.; Coutinho, J. G. F.; Diniz, P. C.; Luk, W.
The development of avionics systems is typically a tedious and cumbersome process. In addition to the required functions, developers must consider various and often conflicting non-functional requirements such as safety, performance, and energy efficiency. Certainly, an integrated approach with a seamless design flow that is capable of requirements modelling and supporting refinement down to an actual implementation in a traceable way, may lead to a significant acceleration of development cycles. This paper presents an aspect-oriented approach supported by a tool chain that deals with functional and non-functional requirements in an integrated manner. It also discusses how the approach can be applied to development of safety-critical systems and provides experimental results.
NASA Technical Reports Server (NTRS)
Orr, James K.; Peltier, Daryl
2010-01-01
Thsi slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on the quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems which executes in redundant computers. Charts given show the number of space shuttle flights vs time, PASS's development history, and other charts that point to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.
Deal, Shanley B; Stefanidis, Dimitrios; Brunt, L Michael; Alseidi, Adnan
2017-05-01
We sought to determine the feasibility of developing a multimedia educational tutorial to teach learners to assess the critical view of safety using input from expert surgeons, non-surgeons and crowd-sourcing. We intended to develop a tutorial that would teach learners how to identify the basic anatomy and physiology of the gallbladder, identify the components of the critical view of safety criteria, and understand its significance for performing a safe gallbladder removal. Using rounds of assessment with experts, laypersons and crowd-workers we developed an educational video with improving comprehension after each round of revision. We demonstrate that the development of a multimedia educational tool to educate learners of various backgrounds is feasible using an iterative review process that incorporates the input of experts and crowd sourcing. When planning the development of an educational tutorial, a step-wise approach as described herein should be considered. Copyright © 2017 Elsevier Inc. All rights reserved.
Applications and error correction for adiabatic quantum optimization
NASA Astrophysics Data System (ADS)
Pudenz, Kristen
Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify charateristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
Smith, M.; Murphy, D.; Laxmisan, A.; Sittig, D.; Reis, B.; Esquivel, A.; Singh, H.
2013-01-01
Summary Background Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider’s prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. Objectives The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. Methods We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA’s EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Results Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility’s “test” EHR system, thus demonstrating technical compatibility. Conclusion To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results. PMID:24155789
Smith, M; Murphy, D; Laxmisan, A; Sittig, D; Reis, B; Esquivel, A; Singh, H
2013-01-01
Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results.
Yu, Naichang; Xia, Ping; Mastroianni, Anthony; Kolar, Matthew D; Chao, Samuel T; Greskovich, John F; Suh, John H
Process consistency in planning and delivery of radiation therapy is essential to maintain patient safety and treatment quality and efficiency. Ensuring the timely completion of each critical clinical task is one aspect of process consistency. The purpose of this work is to report our experience in implementing a quantitative metric and automatic auditing program (QMAP) with a goal of improving the timely completion of critical clinical tasks. Based on our clinical electronic medical records system, we developed a software program to automatically capture the completion timestamp of each critical clinical task while providing frequent alerts of potential delinquency. These alerts were directed to designated triage teams within a time window that would offer an opportunity to mitigate the potential for late completion. Since July 2011, 18 metrics were introduced in our clinical workflow. We compared the delinquency rates for 4 selected metrics before the implementation of the metric with the delinquency rate of 2016. One-tailed Student t test was used for statistical analysis RESULTS: With an average of 150 daily patients on treatment at our main campus, the late treatment plan completion rate and late weekly physics check were reduced from 18.2% and 8.9% in 2011 to 4.2% and 0.1% in 2016, respectively (P < .01). The late weekly on-treatment physician visit rate was reduced from 7.2% in 2012 to <1.6% in 2016. The yearly late cone beam computed tomography review rate was reduced from 1.6% in 2011 to <0.1% in 2016. QMAP is effective in reducing late completions of critical tasks, which can positively impact treatment quality and patient safety by reducing the potential for errors resulting from distractions, interruptions, and rush in completion of critical tasks. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Critical Software for Human Spaceflight
NASA Technical Reports Server (NTRS)
Preden, Antonio; Kaschner, Jens; Rettig, Felix; Rodriggs, Michael
2017-01-01
The NASA Orion vehicle that will fly to the moon in the next years is propelled along its mission by the European Service Module (ESM), developed by ESA and its prime contractor Airbus Defense and Space. This paper describes the development of the Propulsion Drive Electronics (PDE) Software that provides the interface between the propulsion hardware of the European Service Module with the Orion flight computers, and highlights the challenges that have been faced during the development. Particularly, the specific aspects relevant to Human Spaceflight in an international cooperation are presented, as the compliance to both European and US standards and the software criticality classification to the highest category A. An innovative aspect of the PDE SW is its Time- Triggered Ethernet interface with the Orion Flight Computers, which has never been flown so far on any European spacecraft. Finally the verification aspects are presented, applying the most exigent quality requirements defined in the European Cooperation for Space Standardization (ECSS) standards such as the structural coverage analysis of the object code and the recourse to an independent software verification and validation activity carried on in parallel by a different team.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tenney, J.L.
SARS is a data acquisition system designed to gather and process radar data from aircraft flights. A database of flight trajectories has been developed for Albuquerque, NM, and Amarillo, TX. The data is used for safety analysis and risk assessment reports. To support this database effort, Sandia developed a collection of hardware and software tools to collect and post process the aircraft radar data. This document describes the data reduction tools which comprise the SARS, and maintenance procedures for the hardware and software system.
Space Station Furnace Facility Management Information System (SSFF-MIS) Development
NASA Technical Reports Server (NTRS)
Meade, Robert M.
1996-01-01
This report summarizes the chronology, results, and lessons learned from the development of the SSFF-MIS. This system has been nearly two years in development and has yielded some valuable insights into specialized MIS development. General: In December of 1994, the Camber Corporation and Science Applications International Corporation (SAIC) were contracted to design, develop, and implement a MIS for Marshall Space Flight Center's Space Station Furnace Facility Project. The system was to be accessible from both EBM-Compatible PC and Macintosh platforms. The system was required to contain data manually entered into the MIS as well as data imported from other MSFC sources. Electronic interfaces were established for each data source and retrieval was to be performed at prescribed time intervals. The SOW requirement that predominantly drove the development software selection was the dual-platform (IBM-PC and Macintosh) requirement. The requirement that the system would be maintained by Government personnel influenced the selection of Commercial Off-the-shelf software because of its inherent stability and readily available documentation and support. Microsoft FoxPro Professional 2.6 for Windows and Macintosh was selected as the development tool. This is a software development tool that has been in use for many years. It is stable and powerful. Microsoft has since released the replacement for this product, Microsoft Visual FoxPro, but at the time of this development, it was only available on the Windows platform. The initial contract included included the requirement for capabilities relating to the Work- and Organizational Breakdown Structures, cost (plan and actuals), workforce (plan and actuals), critical path scheduling, trend analysis, procurements and contracts, interface to manufacturing, Safety and Mission Assurance, risk analysis, and technical performance indicators. It also required full documentation of the system and training of users. During the course of the contract, the requirements for Safety and Mission Assurance interface, risk analysis, and technical performance indicators were deleted. Additional capabilities were added as reflected in the Contract Chronology below. Modification 4 added the requirement for Support Contractor manpower data, the ability to manually input data not imported from non-nal sources, a general 'health' indicator screen, and remote usage. Mod 6 included the ability to change the level of planning of Civil Service Manpower at any time and the ability to manually enter Op Codes in the manufacturing data where such codes were not provided by the EMPACS database. Modification 9 included a number of changes to report contents and formats. Modification 11 required the preparation of a detailed System Design Document.
Motor vehicle occupant safety survey
DOT National Transportation Integrated Search
1995-09-01
This report presents findings from the first Motor Vehicle Occupant Safety Survey. The National Highway Traffic Safety Administration (NHTSA) conducted this survey to collect critical information needed by the agency to develop and implement effectiv...
Ball, Brita; Wilcock, Anne; Aung, May
2009-06-01
Small and medium sized food businesses have been slow to adopt food safety management systems (FSMSs) such as good manufacturing practices and Hazard Analysis Critical Control Point (HACCP). This study identifies factors influencing workers in their implementation of food safety practices in small and medium meat processing establishments in Ontario, Canada. A qualitative approach was used to explore in-plant factors that influence the implementation of FSMSs. Thirteen in-depth interviews in five meat plants and two focus group interviews were conducted. These generated 219 pages of verbatim transcripts which were analysed using NVivo 7 software. Main themes identified in the data related to production systems, organisational characteristics and employee characteristics. A socio-psychological model based on the theory of planned behaviour is proposed to describe how these themes and underlying sub-themes relate to FSMS implementation. Addressing the various factors that influence production workers is expected to enhance FSMS implementation and increase food safety.
Air Data Report Improves Flight Safety
NASA Technical Reports Server (NTRS)
2007-01-01
NASA's Aviation Safety Program in the NASA Aeronautics Research Mission Directorate, which seeks to make aviation safer by developing tools for flight data analysis and interpretation and then by transferring these tools to the aviation industry, sponsored the development of Morning Report software. The software, created at Ames Research Center with the assistance of the Pacific Northwest National Laboratory, seeks to detect atypicalities without any predefined parameters-it spots deviations and highlights them. In 2004, Sagem Avionics Inc. entered a licensing agreement with NASA for the commercialization of the Morning Report software, and also licensed the NASA Aviation Data Integration System (ADIS) tool, which allows for the integration of data from disparate sources into the flight data analysis process. Sagem Avionics incorporated the Morning Report tool into its AGS product, a comprehensive flight operations monitoring system that helps users detect irregular or divergent practices, technical flaws, and problems that might develop when aircraft operate outside of normal procedures. Sagem developed AGS in collaboration with airlines, so that the system takes into account their technical evolutions and needs, and each airline is able to easily perform specific treatments and to build its own flight data analysis system. Further, the AGS is designed to support any aircraft and flight data recorders.
Lessons learned in deploying software estimation technology and tools
NASA Technical Reports Server (NTRS)
Panlilio-Yap, Nikki; Ho, Danny
1994-01-01
Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.
The FoReVer Methodology: A MBSE Framework for Formal Verification
NASA Astrophysics Data System (ADS)
Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald
2013-08-01
The need for high level of confidence and operational integrity in critical space (software) systems is well recognized in the Space industry and has been addressed so far through rigorous System and Software Development Processes and stringent Verification and Validation regimes. The Model Based Space System Engineering process (MBSSE) derived in the System and Software Functional Requirement Techniques study (SSFRT) focused on the application of model based engineering technologies to support the space system and software development processes, from mission level requirements to software implementation through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project where we aim at developing methodological, theoretical and technological support for a systematic approach to the space avionics system development, in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties, at different stages from system to software, through a step-wise refinement approach, with the support for a Software Reference Architecture.
Preliminary Structural Sizing and Alternative Material Trade Study of CEV Crew Module
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steve M.; Collier, Craig S.; Yarrington, Phillip W.
2007-01-01
This paper presents the results of a preliminary structural sizing and alternate material trade study for NASA s Crew Exploration Vehicle (CEV) Crew Module (CM). This critical CEV component will house the astronauts during ascent, docking with the International Space Station, reentry, and landing. The alternate material design study considers three materials beyond the standard metallic (aluminum alloy) design that resulted from an earlier NASA Smart Buyer Team analysis. These materials are graphite/epoxy composite laminates, discontinuously reinforced SiC/Al (DRA) composites, and a novel integrated panel material/concept known as WebCore. Using the HyperSizer (Collier Research and Development Corporation) structural sizing software and NASTRAN finite element analysis code, a comparison is made among these materials for the three composite CM concepts considered by the 2006 NASA Engineering and Safety Center Composite Crew Module project.
Generating Code Review Documentation for Auto-Generated Mission-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2009-01-01
Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the autogenerated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.
Advancing Usability Evaluation through Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probabilitymore » of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.« less
Factors associated with the patient safety climate at a teaching hospital1
Luiz, Raíssa Bianca; Simões, Ana Lúcia de Assis; Barichello, Elizabeth; Barbosa, Maria Helena
2015-01-01
Objectives: to investigate the association between the scores of the patient safety climate and socio-demographic and professional variables. Methods: an observational, sectional and quantitative study, conducted at a large public teaching hospital. The Safety Attitudes Questionnaire was used, translated and validated for Brazil. Data analysis used the software Statistical Package for Social Sciences. In the bivariate analysis, we used Student's t-test, analysis of variance and Spearman's correlation of (α=0.05). To identify predictors for the safety climate scores, multiple linear regression was used, having the safety climate domain as the main outcome (α=0.01). Results: most participants were women, nursing staff, who worked in direct care to adult patients in critical areas, without a graduate degree and without any other employment. The average and median total score of the instrument corresponded to 61.8 (SD=13.7) and 63.3, respectively. The variable professional performance was found as a factor associated with the safety environment for the domain perception of service management and hospital management (p=0.01). Conclusion: the identification of factors associated with the safety environment permits the construction of strategies for safe practices in the hospitals. PMID:26487138
Autonomous Flight Safety System
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Santuro, Steve; Simpson, James; Zoerner, Roger; Bull, Barton; Lanzi, Jim
2004-01-01
Autonomous Flight Safety System (AFSS) is an independent flight safety system designed for small to medium sized expendable launch vehicles launching from or needing range safety protection while overlying relatively remote locations. AFSS replaces the need for a man-in-the-loop to make decisions for flight termination. AFSS could also serve as the prototype for an autonomous manned flight crew escape advisory system. AFSS utilizes onboard sensors and processors to emulate the human decision-making process using rule-based software logic and can dramatically reduce safety response time during critical launch phases. The Range Safety flight path nominal trajectory, its deviation allowances, limit zones and other flight safety rules are stored in the onboard computers. Position, velocity and attitude data obtained from onboard global positioning system (GPS) and inertial navigation system (INS) sensors are compared with these rules to determine the appropriate action to ensure that people and property are not jeopardized. The final system will be fully redundant and independent with multiple processors, sensors, and dead man switches to prevent inadvertent flight termination. AFSS is currently in Phase III which includes updated algorithms, integrated GPS/INS sensors, large scale simulation testing and initial aircraft flight testing.
Security Risks: Management and Mitigation in the Software Life Cycle
NASA Technical Reports Server (NTRS)
Gilliam, David P.
2004-01-01
A formal approach to managing and mitigating security risks in the software life cycle is requisite to developing software that has a higher degree of assurance that it is free of security defects which pose risk to the computing environment and the organization. Due to its criticality, security should be integrated as a formal approach in the software life cycle. Both a software security checklist and assessment tools should be incorporated into this life cycle process and integrated with a security risk assessment and mitigation tool. The current research at JPL addresses these areas through the development of a Sotfware Security Assessment Instrument (SSAI) and integrating it with a Defect Detection and Prevention (DDP) risk management tool.
ERIC Educational Resources Information Center
Long, Ju
2009-01-01
Open Source Software (OSS) is a major force in today's Information Technology (IT) landscape. Companies are increasingly using OSS in mission-critical applications. The transparency of the OSS technology itself with openly available source codes makes it ideal for students to participate in the OSS project development. OSS can provide unique…
A Model for Assessing the Liability of Seemingly Correct Software
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.
1991-01-01
Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
Testing of Hand-Held Mine Detection Systems
2015-01-08
ITOP 04-2-5208 for guidance on software testing . Testing software is necessary to ensure that safety is designed into the software algorithm, and that...sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact on the test design . Some examples of...overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test
Software Technology Transfer and Export Control.
1981-01-01
development projects of their own. By analogy, a Soviet team might be able to repeat the learning experience of the ADEPT-50 junior staff...recommendations concerning product form and further study . The posture of this group has been to consider software technology and its transfer as a process...and views of the Software Subgroup of Technical Working Group 7 (Computers) of the Critical Technologies Project . The work reported
Crepaldi, Nathalia Yukie; de Lima, Inacia Bezerra; Vicentine, Fernanda Bergamini; Rodrigues, Lídia Maria Lourençon; Sanches, Tiago Lara Michelin; Ruffino-Netto, Antonio; Alves, Domingos; Rijo, Rui Pedro Charters Lopes
2018-05-08
Assessment of health information systems consider different aspects of the system itself. They focus or on the professional who will use the software or on its usability or on the software engineering metrics or on financial and managerial issues. The existent approaches are very resources consuming, disconnected, and not standardized. As the software becomes more critical in the health organizations and in patients, becoming used as a medical device or a medicine, there is an urgency to identify tools and methods that can be applied in the development process. The present work is one of the steps of a broader study to identify standardized protocols to evaluate the health information systems as medicines and medical devices are evaluated by clinical trials. The goal of the present work was to evaluate the effect of the introduction of an information system for monitoring tuberculosis treatment (SISTB) in a Brazilian municipality from the patients' perspective. The Patient Satisfaction Questionnaire and the Hospital Consumer Assessment of Healthcare Providers and Systems were answered by the patients before and after the SISTB introduction, for comparison. Patients from an outpatient clinic, formed the control group, that is, at this site was not implanted the SISTB. Descriptive statistics and mixed effects model were used for data analysis. Eighty-eight interviews were conducted in the study. The questionnaire's results presented better averages after the system introduction but were not considered statistically significant. Therefore, it was not possible to associate system implantation with improved patient satisfaction. The HIS evaluation need be complete, the technical and managerial evaluation, the safety, the impact on the professionals and direct and/or indirect impact on patients are important. Developing the right tools and methods that can evaluate the software in its entirety, from the beginning of the development cycle with a normalized scale, are needed.
NASA Astrophysics Data System (ADS)
Faria, J. M.; Mahomad, S.; Silva, N.
2009-05-01
The deployment of complex safety-critical applications requires rigorous techniques and powerful tools both for the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, however, along with new challenges. This paper presents the results of a research project where we tried to extend current V&V methodologies to be applied on UML/SysML models and aiming at answering the demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and the (ii) extraction of robustness test-cases from the same models. These two approaches don't overlap and when combined provide a wider reaching model/design validation ability than each one alone thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome as shown for model checking, or still appear as not fully matured as shown for robustness test case extraction. In the case of model checking, it was verified that the automatic model validation process can become fully operational and even expanded in scope once tool vendors help (inevitably) to improve the XMI standard interoperability situation. For the robustness test case extraction methodology, the early approach produced interesting results but need further systematisation and consolidation effort in order to produce results in a more predictable fashion and reduce reliance on expert's heuristics. Finally, further improvements and innovation research projects were immediately apparent for both investigated approaches, which point to either circumventing current limitations in XMI interoperability on one hand and bringing test case specification onto the same graphical level as the models themselves and then attempting to automate the generation of executable test cases from its standard UML notation.
Sorbello, Alfred; Ripple, Anna; Tonning, Joseph; Munoz, Monica; Hasan, Rashedul; Ly, Thomas; Francis, Henry; Bodenreider, Olivier
2017-03-22
We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers' capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. A prototype, open source, web-based, software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. All usability test participants cited the tool's ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool's automated literature search relative to a manual 'all fields' PubMed search, missing drugs and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidd, M.E.C.
1997-02-01
The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.
Guidance and Control Software Project Data - Volume 2: Development Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the development documents from the GCS project. Volume 2 contains three appendices: A. Guidance and Control Software Development Specification; B. Design Description for the Pluto Implementation of the Guidance and Control Software; and C. Source Code for the Pluto Implementation of the Guidance and Control Software
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA especially as the software systems become larger and more complex. As an example MSL (Mars Scientific Laboratory) developed at the Jet Propulsion Laboratory launched with over 2 million lines of code making it the largest robotic spacecraft ever flown (Based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K- nearest neighbor prediction model performance on the same data set.
Investigation of criticality safety control infraction data at a nuclear facility
Cournoyer, Michael E.; Merhege, James F.; Costa, David A.; ...
2014-10-27
Chemical and metallurgical operations involving plutonium and other nuclear materials account for most activities performed at the LANL's Plutonium Facility (PF-4). The presence of large quantities of fissile materials in numerous forms at PF-4 makes it necessary to maintain an active criticality safety program. The LANL Nuclear Criticality Safety (NCS) Program provides guidance to enable efficient operations while ensuring prevention of criticality accidents in the handling, storing, processing and transportation of fissionable material at PF-4. In order to achieve and sustain lower criticality safety control infraction (CSCI) rates, PF-4 operations are continuously improved, through the use of Lean Manufacturing andmore » Six Sigma (LSS) business practices. Employing LSS, statistically significant variations (trends) can be identified in PF-4 CSCI reports. In this study, trends have been identified in the NCS Program using the NCS Database. An output metric has been developed that measures ADPSM Management progress toward meeting its NCS objectives and goals. Using a Pareto Chart, the primary CSCI attributes have been determined in order of those requiring the most management support. Data generated from analysis of CSCI data help identify and reduce number of corresponding attributes. In-field monitoring of CSCI's contribute to an organization's scientific and technological excellence by providing information that can be used to improve criticality safety operation safety. This increases technical knowledge and augments operational safety.« less
A critical care network pressure ulcer prevention quality improvement project.
McBride, Joanna; Richardson, Annette
2015-03-30
Pressure ulcer prevention is an important safety issue, often underrated and an extremely painful event harming patients. Critically ill patients are one of the highest risk groups in hospital. The impact of pressure ulcers are wide ranging, and they can result in increased critical care and the hospital length of stay, significant interference with functional recovery and rehabilitation and increase cost. This quality improvement project had four aims: (1) to establish a critical care network pressure ulcer prevention group; (2) to establish baseline pressure ulcer prevention practices; (3) to measure, compare and monitor pressure ulcers prevalence; (4) to develop network pressure ulcer prevention standards. The approach used to improve quality included strong critical care nursing leadership to develop a cross-organisational pressure ulcer prevention group and a benchmarking exercise of current practices across a well-established critical care Network in the North of England. The National Safety Thermometer tool was used to measure pressure ulcer prevalence in 23 critical care units, and best available evidence, local consensus and another Critical Care Networks' bundle of interventions were used to develop a local pressure ulcer prevention standards document. The aims of the quality improvement project were achieved. This project was driven by successful leadership and had an agreed common goal. The National Safety Thermometer tool was an innovative approach to measure and compare pressure ulcer prevalence rates at a regional level. A limitation was the exclusion of moisture lesions. The project showed excellent engagement and collaborate working in the quest to prevent pressure ulcers from many critical care nurses with the North of England Critical Care Network. A concise set of Network standards was developed for use in conjunction with local guidelines to enhance pressure ulcer prevention. © 2015 British Association of Critical Care Nurses.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
SureTrak Probability of Impact Display
NASA Technical Reports Server (NTRS)
Elliott, John
2012-01-01
The SureTrak Probability of Impact Display software was developed for use during rocket launch operations. The software displays probability of impact information for each ship near the hazardous area during the time immediately preceding the launch of an unguided vehicle. Wallops range safety officers need to be sure that the risk to humans is below a certain threshold during each use of the Wallops Flight Facility Launch Range. Under the variable conditions that can exist at launch time, the decision to launch must be made in a timely manner to ensure a successful mission while not exceeding those risk criteria. Range safety officers need a tool that can give them the needed probability of impact information quickly, and in a format that is clearly understandable. This application is meant to fill that need. The software is a reuse of part of software developed for an earlier project: Ship Surveillance Software System (S4). The S4 project was written in C++ using Microsoft Visual Studio 6. The data structures and dialog templates from it were copied into a new application that calls the implementation of the algorithms from S4 and displays the results as needed. In the S4 software, the list of ships in the area was received from one local radar interface and from operators who entered the ship information manually. The SureTrak Probability of Impact Display application receives ship data from two local radars as well as the SureTrak system, eliminating the need for manual data entry.
Rosenthal, L E
1986-10-01
Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicone chips.
Analysis of Phoenix Anomalies and IV & V Findings Applied to the GRAIL Mission
NASA Technical Reports Server (NTRS)
Larson, Steve
2012-01-01
NASA IV&V was established in 1993 to improve safety and cost-effectiveness of mission critical software. Since its inception the tools and strategies employed by IV&V have evolved. This paper examines how lessons learned from the Phoenix project were developed and applied to the GRAIL project. Shortly after selection, the GRAIL project initiated a review of the issues documented by IV&V for Phoenix. The motivation was twofold: the learn as much as possible about the types of issues that arose from the flight software product line slated for use on GRAIL, and to identify opportunities for improving the effectiveness of IV&V on GRAIL. The IV&V Facility provided a database dump containing 893 issues. These were categorized into 16 bins, and then analyzed according to whether the project responded by changing the affected artifacts or using as-is. The results of this analysis were compared to a similar assessment of post-launch anomalies documented by the project. Results of the analysis were discussed with the IV&V team assigned to GRAIL. These discussions led to changes in the way both the project and IV&V approached the IV&V task, and improved the efficiency of the activity.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)
2001-01-01
The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principle goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks); some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC) which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for design, development, implementation, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall scheme to engine controller development is further improved and vehicle safety is further insured. The final product that this paper proposes is an approach to development of an alternative low cost engine controller that would be capable of performing in unique vision spacecraft vehicles requiring low cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
Health Monitor for Multitasking, Safety-Critical, Real-Time Software
NASA Technical Reports Server (NTRS)
Zoerner, Roger
2011-01-01
Health Manager can detect Bad Health prior to a failure occurring by periodically monitoring the application software by looking for code corruption errors, and sanity-checking each critical data value prior to use. A processor s memory can fail and corrupt the software, or the software can accidentally write to the wrong address and overwrite the executing software. This innovation will continuously calculate a checksum of the software load to detect corrupted code. This will allow a system to detect a failure before it happens. This innovation monitors each software task (thread) so that if any task reports "bad health," or does not report to the Health Manager, the system is declared bad. The Health Manager reports overall system health to the outside world by outputting a square wave signal. If the square wave stops, this indicates that system health is bad or hung and cannot report. Either way, "bad health" can be detected, whether caused by an error, corrupted data, or a hung processor. A separate Health Monitor Task is started and run periodically in a loop that starts and stops pending on a semaphore. Each monitored task registers with the Health Manager, which maintains a count for the task. The registering task must indicate if it will run more or less often than the Health Manager. If the task runs more often than the Health Manager, the monitored task calls a health function that increments the count and verifies it did not go over max-count. When the periodic Health Manager runs, it verifies that the count did not go over the max-count and zeroes it. If the task runs less often than the Health Manager, the periodic Health Manager will increment the count. The monitored task zeroes the count, and both the Health Manager and monitored task verify that the count did not go over the max-count.
A focused approach to safety guidebook.
DOT National Transportation Integrated Search
2011-08-23
"The Federal Highway Administration (FHWA) has developed the Focused Approach to Safety in order to better address the most critical safety challenges by devoting additional attention to high priority States. The purpose of the Focused Approach is to...
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the secondmore » place. 407 refs., 4 figs., 2 tabs.« less
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place.more » 407 refs., 4 figs., 2 tabs.« less
Employing Service Oriented Architecture Technologies to Bind a Thousand Ship Navy
2008-06-01
critical of the software lifecycle ( Pressman , 272). This remains true with SOA technologies. Theoretically, SOA provides a rapid development and... Pressman , R. S., “Software Engineering, A Practitioner’s Approach Fifth Edition”, McGraw-Hill, New York, 2001 4. Space and Naval Warfare Systems Center
1986-04-14
CONCIPT DIFINITION OIVILOPMINTITIST I OPERATION ANO ■ MAINTENANCE ■ TRACK MOifCTIO PROGRAMS • «VIIW CRITICAL ISSUIS . Mt PARI INPUTS TO PMO...development and beyond, evaluation criteria must Include quantitative goals (the desired value) and thresholds (the value beyond which the charac
NASA Technical Reports Server (NTRS)
Rushby, John; Crow, Judith
1990-01-01
The authors explore issues in the specification, verification, and validation of artificial intelligence (AI) based software, using a prototype fault detection, isolation and recovery (FDIR) system for the Manned Maneuvering Unit (MMU). They use this system as a vehicle for exploring issues in the semantics of C-Language Integrated Production System (CLIPS)-style rule-based languages, the verification of properties relating to safety and reliability, and the static and dynamic analysis of knowledge based systems. This analysis reveals errors and shortcomings in the MMU FDIR system and raises a number of issues concerning software engineering in CLIPs. The authors came to realize that the MMU FDIR system does not conform to conventional definitions of AI software, despite the fact that it was intended and indeed presented as an AI system. The authors discuss this apparent disparity and related questions such as the role of AI techniques in space and aircraft operations and the suitability of CLIPS for critical applications.
Service-oriented architecture for the ARGOS instrument control software
NASA Astrophysics Data System (ADS)
Borelli, J.; Barl, L.; Gässler, W.; Kulas, M.; Rabien, Sebastian
2012-09-01
The Advanced Rayleigh Guided ground layer Adaptive optic System, ARGOS, equips the Large Binocular Telescope (LBT) with a constellation of six rayleigh laser guide stars. By correcting atmospheric turbulence near the ground, the system is designed to increase the image quality of the multi-object spectrograph LUCIFER approximately by a factor of 3 over a field of 4 arc minute diameter. The control software has the critical task of orchestrating several devices, instruments, and high level services, including the already existing adaptive optic system and the telescope control software. All these components are widely distributed over the telescope, adding more complexity to the system design. The approach used by the ARGOS engineers is to write loosely coupled and distributed services under the control of different ownership systems, providing a uniform mechanism to offer, discover, interact and use these distributed capabilities. The control system counts with several finite state machines, vibration and flexure compensation loops, and safety mechanism, such as interlocks, aircraft, and satellite avoidance systems.
Software for the occupational health and safety integrated management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vătăsescu, Mihaela
2015-03-10
This paper intends to present the design and the production of a software for the Occupational Health and Safety Integrated Management System with the view to a rapid drawing up of the system documents in the field of occupational health and safety.
[Preliminary studies on critical control point of traceability system in wolfberry].
Liu, Sai; Xu, Chang-Qing; Li, Jian-Ling; Lin, Chen; Xu, Rong; Qiao, Hai-Li; Guo, Kun; Chen, Jun
2016-07-01
As a traditional Chinese medicine, wolfberry (Lycium barbarum) has a long cultivation history and a good industrial development foundation. With the development of wolfberry production, the expansion of cultivation area and the increased attention of governments and consumers on food safety, the quality and safety requirement of wolfberry is higher demanded. The quality tracing and traceability system of production entire processes is the important technology tools to protect the wolfberry safety, and to maintain sustained and healthy development of the wolfberry industry. Thus, this article analyzed the wolfberry quality management from the actual situation, the safety hazard sources were discussed according to the HACCP (hazard analysis and critical control point) and GAP (good agricultural practice for Chinese crude drugs), and to provide a reference for the traceability system of wolfberry. Copyright© by the Chinese Pharmaceutical Association.
In silico toxicology for the pharmaceutical sciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valerio, Luis G., E-mail: Luis.Valerio@fda.hhs.go
2009-12-15
The applied use of in silico technologies (a.k.a. computational toxicology, in silico toxicology, computer-assisted tox, e-tox, i-drug discovery, predictive ADME, etc.) for predicting preclinical toxicological endpoints, clinical adverse effects, and metabolism of pharmaceutical substances has become of high interest to the scientific community and the public. The increased accessibility of these technologies for scientists and recent regulations permitting their use for chemical risk assessment supports this notion. The scientific community is interested in the appropriate use of such technologies as a tool to enhance product development and safety of pharmaceuticals and other xenobiotics, while ensuring the reliability and accuracy ofmore » in silico approaches for the toxicological and pharmacological sciences. For pharmaceutical substances, this means active and impurity chemicals in the drug product may be screened using specialized software and databases designed to cover these substances through a chemical structure-based screening process and algorithm specific to a given software program. A major goal for use of these software programs is to enable industry scientists not only to enhance the discovery process but also to ensure the judicious use of in silico tools to support risk assessments of drug-induced toxicities and in safety evaluations. However, a great amount of applied research is still needed, and there are many limitations with these approaches which are described in this review. Currently, there is a wide range of endpoints available from predictive quantitative structure-activity relationship models driven by many different computational software programs and data sources, and this is only expected to grow. For example, there are models based on non-proprietary and/or proprietary information specific to assessing potential rodent carcinogenicity, in silico screens for ICH genetic toxicity assays, reproductive and developmental toxicity, theoretical prediction of human drug metabolism, mechanisms of action for pharmaceuticals, and newer models for predicting human adverse effects. How accurate are these approaches is both a statistical issue and challenge in toxicology. In this review, fundamental concepts and the current capabilities and limitations of this technology will be critically addressed.« less
A design and implementation methodology for diagnostic systems
NASA Technical Reports Server (NTRS)
Williams, Linda J. F.
1988-01-01
A methodology for design and implementation of diagnostic systems is presented. Also discussed are the advantages of embedding a diagnostic system in a host system environment. The methodology utilizes an architecture for diagnostic system development that is hierarchical and makes use of object-oriented representation techniques. Additionally, qualitative models are used to describe the host system components and their behavior. The methodology architecture includes a diagnostic engine that utilizes a combination of heuristic knowledge to control the sequence of diagnostic reasoning. The methodology provides an integrated approach to development of diagnostic system requirements that is more rigorous than standard systems engineering techniques. The advantages of using this methodology during various life cycle phases of the host systems (e.g., National Aerospace Plane (NASP)) include: the capability to analyze diagnostic instrumentation requirements during the host system design phase, a ready software architecture for implementation of diagnostics in the host system, and the opportunity to analyze instrumentation for failure coverage in safety critical host system operations.
Applying Formal Methods to NASA Projects: Transition from Research to Practice
NASA Technical Reports Server (NTRS)
Othon, Bill
2009-01-01
NASA project managers attempt to manage risk by relying on mature, well-understood process and technology when designing spacecraft. In the case of crewed systems, the margin for error is even tighter and leads to risk aversion. But as we look to future missions to the Moon and Mars, the complexity of the systems will increase as the spacecraft and crew work together with less reliance on Earth-based support. NASA will be forced to look for new ways to do business. Formal methods technologies can help NASA develop complex but cost effective spacecraft in many domains, including requirements and design, software development and inspection, and verification and validation of vehicle subsystems. To realize these gains, the technologies must be matured and field-tested so that they are proven when needed. During this discussion, current activities used to evaluate FM technologies for Orion spacecraft design will be reviewed. Also, suggestions will be made to demonstrate value to current designers, and mature the technology for eventual use in safety-critical NASA missions.
A Small Angle Scattering Sensor System for the Characterization of Combustion Generated Particulate
NASA Technical Reports Server (NTRS)
Feikema, Douglas A.; Kim, W.; Sivathanu, Yudaya
2007-01-01
One of the critical issues for the US space program is fire safety of the space station and future launch vehicles. A detailed understanding of the scattering signatures of particulate is essential for the development of a false alarm free fire detection system. This paper describes advanced optical instrumentation developed and applied for fire detection. The system is being designed to determine four important physical properties of disperse fractal aggregates and particulates including size distribution, number density, refractive indices, and fractal dimension. Combustion generated particulate are the primary detection target; however, in order to discriminate from other particulate, non-combustion generated particles should also be characterized. The angular scattering signature is measured and analyzed using two photon optical laser scattering. The Rayleigh-Debye-Gans (R-D-G) scattering theory for disperse fractal aggregates is utilized. The system consists of a pulsed laser module, detection module and data acquisition system and software to analyze the signals. The theory and applications are described.
Architecture of a framework for providing information services for public transport.
García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.
Development of guidance for states transitioning to new safety analysis tools
NASA Astrophysics Data System (ADS)
Alluri, Priyanka
With about 125 people dying on US roads each day, the US Department of Transportation heightened the awareness of critical safety issues with the passage of SAFETEA-LU (Safe Accountable Flexible Efficient Transportation Equity Act---a Legacy for Users) legislation in 2005. The legislation required each of the states to develop a Strategic Highway Safety Plan (SHSP) and incorporate data-driven approaches to prioritize and evaluate program outcomes: Failure to do so resulted in funding sanctioning. In conjunction with the legislation, research efforts have also been progressing toward the development of new safety analysis tools such as IHSDM (Interactive Highway Safety Design Model), SafetyAnalyst, and HSM (Highway Safety Manual). These software and analysis tools are comparatively more advanced in statistical theory and level of accuracy, and have a tendency to be more data intensive. A review of the 2009 five-percent reports and excerpts from the nationwide survey revealed astonishing facts about the continuing use of traditional methods including crash frequencies and rates for site selection and prioritization. The intense data requirements and statistical complexity of advanced safety tools are considered as a hindrance to their adoption. In this context, this research aims at identifying the data requirements and data availability for SafetyAnalyst and HSM by working with both the tools. This research sets the stage for working with the Empirical Bayes approach by highlighting some of the biases and issues associated with the traditional methods of selecting projects such as greater emphasis on traffic volume and regression-to-mean phenomena. Further, the not-so-obvious issue with shorter segment lengths, which effect the results independent of the methods used, is also discussed. The more reliable and statistically acceptable Empirical Bayes methodology requires safety performance functions (SPFs), regression equations predicting the relation between crashes and exposure for a subset of roadway network. These SPFs, specific to a region and the analysis period are often unavailable. Calibration of already existing default national SPFs to the state's data could be a feasible solution, but, how well the state's data is represented is a legitimate question. With this background, SPFs were generated for various classifications of segments in Georgia and compared against the national default SPFs used in SafetyAnalyst calibrated to Georgia data. Dwelling deeper into the development of SPFs, the influence of actual and estimated traffic data on the fit of the equations is also studied questioning the accuracy and reliability of traffic estimations. In addition to SafetyAnalyst, HSM aims at performing quantitative safety analysis. Applying HSM methodology to two-way two-lane rural roads, the effect of using multiple CMFs (Crash Modification Factors) is studied. Lastly, data requirements, methodology, constraints, and results are compared between SafetyAnalyst and HSM.
Visual warning system for worker safety on roadside work-zones.
DOT National Transportation Integrated Search
2016-08-01
Growing traffic on US roadways and heavy construction machinery on road construction sites pose a critical safety : threat to construction workers. This report summarizes the design and development of a worker safety system using : Dedicated Short Ra...
NASA Technical Reports Server (NTRS)
Logan, Cory; Maida, James; Goldsby, Michael; Clark, Jim; Wu, Liew; Prenger, Henk
1993-01-01
The Space Station Freedom (SSF) Data Management System (DMS) consists of distributed hardware and software which monitor and control the many onboard systems. Virtual environment and off-the-shelf computer technologies can be used at critical points in project development to aid in objectives and requirements development. Geometric models (images) coupled with off-the-shelf hardware and software technologies were used in The Space Station Mockup and Trainer Facility (SSMTF) Crew Operational Assessment Project. Rapid prototyping is shown to be a valuable tool for operational procedure and system hardware and software requirements development. The project objectives, hardware and software technologies used, data gained, current activities, future development and training objectives shall be discussed. The importance of defining prototyping objectives and staying focused while maintaining schedules are discussed along with project pitfalls.
Automatic Facial Expression Recognition and Operator Functional State
NASA Technical Reports Server (NTRS)
Blanson, Nina
2012-01-01
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions
Automatic Facial Expression Recognition and Operator Functional State
NASA Technical Reports Server (NTRS)
Blanson, Nina
2011-01-01
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
Time-Domain Terahertz Computed Axial Tomography NDE System
NASA Technical Reports Server (NTRS)
Zimdars, David
2012-01-01
NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located and the depth of damage determined. The CT motion control gan try was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320 ps, 100-Hz waveform acquisition system with an 80 ps, 1,000-Hz waveform acquisition system. The TD-THZ computed tomography system was built from pre-existing commercial off-the-shelf subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (.0.03 cubic meters). The system reduced to practice a CT-TDTHz system incorporating a COTS 80- ps/l-kHz waveform scanner. The incorporation of this scanner in the system allows acquisition of 3D slice data with better signal-to-noise using a COTS scanner rather than the gchirped h scanner. The system also reduced to practice a prototype for commercial CT systems for insulating materials where safety concerns cannot accommodate x-ray. A software script was written to automate the COTS software to collect and process TD-THz CT data.
Criticality Safety Basics for INL FMHs and CSOs
DOE Office of Scientific and Technical Information (OSTI.GOV)
V. L. Putman
2012-04-01
Nuclear power is a valuable and efficient energy alternative in our energy-intensive society. However, material that can generate nuclear power has properties that require this material be handled with caution. If improperly handled, a criticality accident could result, which could severely harm workers. This document is a modular self-study guide about Criticality Safety Principles. This guide's purpose it to help you work safely in areas where fissionable nuclear materials may be present, avoiding the severe radiological and programmatic impacts of a criticality accident. It is designed to stress the fundamental physical concepts behind criticality controls and the importance of criticalitymore » safety when handling fissionable materials outside nuclear reactors. This study guide was developed for fissionable-material-handler and criticality-safety-officer candidates to use with related web-based course 00INL189, BEA Criticality Safety Principles, and to help prepare for the course exams. These individuals must understand basic information presented here. This guide may also be useful to other Idaho National Laboratory personnel who must know criticality safety basics to perform their assignments safely or to design critically safe equipment or operations. This guide also includes additional information that will not be included in 00INL189 tests. The additional information is in appendices and paragraphs with headings that begin with 'Did you know,' or with, 'Been there Done that'. Fissionable-material-handler and criticality-safety-officer candidates may review additional information at their own discretion. This guide is revised as needed to reflect program changes, user requests, and better information. Issued in 2006, Revision 0 established the basic text and integrated various programs from former contractors. Revision 1 incorporates operation and program changes implemented since 2006. It also incorporates suggestions, clarifications, and additional information from readers and from personnel who took course 00INL189. Revision 1 also completely reorganized the training to better emphasize physical concepts behind the criticality controls that fissionable material handlers and criticality safety officers must understand. The reorganization is based on and consistent with changes made to course 00INL189 due to a review of course exam results and to discussions with personnel who conduct area-specific training.« less
Development of an Aeromedical Scientific Information System for Aviation Safety
2008-01-01
math- ematics, engineering, computer hardware, software , and networking, was assembled to glean the most knowledge from the complicated aeromedical...9, SPlus Enterprise Developer 8, and Insightful Miner version 7. Process flow charts were done with SmartDraw Suite Edition version 7. Static and
Improving software quality - The use of formal inspections at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Bush, Marilyn
1990-01-01
The introduction of software formal inspections (Fagan Inspections) at JPL for finding and fixing defects early in the software development life cycle are reviewed. It is estimated that, by the year 2000, some software efforts will rise to as much as 80 percent of the total. Software problems are especially important at NASA as critical flight software must be error-free. It is shown that formal inspections are particularly effective at finding and removing defects having to do with clarity, correctness, consistency, and completeness. A very significant discovery was that code audits were not as effective at finding defects as code inspections.