Reliability/safety analysis of a fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1980-01-01
An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.
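As an illustration of the kind of reliability algebra such a diagram encodes (not the paper's diagram technique itself), here is a minimal sketch assuming independent, identical channels and a hypothetical 2-out-of-3 voting arrangement:

```python
# Minimal sketch of reliability algebra for a redundant channel set.
# Assumes independent, identical channels and 2-out-of-3 majority voting;
# the figures are illustrative, not taken from the F-8 DFBW analysis.
from math import comb

def k_of_n_reliability(r: float, k: int, n: int) -> float:
    """Probability that at least k of n independent channels survive."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

channel_r = 0.999          # hypothetical per-channel reliability over the mission
system_r = k_of_n_reliability(channel_r, k=2, n=3)
print(f"2-of-3 system reliability: {system_r:.9f}")
```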
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
European Workshop on Industrial Computer Systems approach to design for safety
NASA Technical Reports Server (NTRS)
Zalewski, Janusz
1992-01-01
This paper presents guidelines on designing systems for safety, developed by the Technical Committee 7 on Reliability and Safety of the European Workshop on Industrial Computer Systems. The focus is on complementing the traditional development process by adding the following four steps: (1) overall safety analysis; (2) analysis of the functional specifications; (3) designing for safety; (4) validation of design. Quantitative assessment of safety is possible by means of a modular questionnaire covering various aspects of the major stages of system development.
RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications
NASA Technical Reports Server (NTRS)
1992-01-01
This conference deals with computer systems that control systems whose failure to operate correctly could produce loss of life and/or property, that is, mission and safety critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge based expert systems, and computer and telecommunication protocols.
Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety
NASA Technical Reports Server (NTRS)
Heatwole, Scott; Lanzi, Raymond J.
2010-01-01
The Autonomous Flight Safety System (AFSS) aims to replace the human element of range safety operations, as well as reduce reliance on expensive, downrange assets for launches of expendable launch vehicles (ELVs). The system consists of multiple navigation sensors and flight computers that provide a highly reliable platform. It is designed to ensure that single-event failures in a flight computer or sensor will not bring down the whole system. The flight computer uses a rules-based structure derived from range safety requirements to make decisions whether or not to destroy the rocket.
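To make the rules-based idea concrete, the following sketch shows a hypothetical corridor check combined with majority agreement across redundant sensor states; the rules, limits, and data structures are illustrative assumptions, not actual AFSS range-safety rules:

```python
# Illustrative sketch of a rules-based flight-termination check.
# The corridor limits and state fields below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class VehicleState:
    downrange_km: float
    crossrange_km: float
    altitude_km: float

def outside_corridor(state: VehicleState) -> bool:
    """Hypothetical corridor: limits on crossrange and minimum altitude vs downrange."""
    return abs(state.crossrange_km) > 15.0 or (
        state.downrange_km > 50.0 and state.altitude_km < 5.0
    )

def terminate_flight(states_from_redundant_sensors) -> bool:
    """Require agreement of a majority of sensor-derived states before acting,
    so a single failed sensor cannot trigger (or suppress) termination."""
    votes = sum(outside_corridor(s) for s in states_from_redundant_sensors)
    return votes > len(states_from_redundant_sensors) // 2

states = [VehicleState(60.0, 18.0, 12.0), VehicleState(60.1, 17.9, 12.1),
          VehicleState(60.0, 2.0, 12.0)]   # one sensor disagrees
print(terminate_flight(states))  # True: two of three report an out-of-corridor state
```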
The research of computer network security and protection strategy
NASA Astrophysics Data System (ADS)
He, Jian
2017-05-01
With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network safety are complex; maintaining network security is systematic work and a significant challenge. Addressing the safety and reliability problems of computer network systems, this paper draws on practical work experience to offer suggestions and measures covering network security threats, security technologies, and system design principles, so that ordinary users of computer networks can enhance their safety awareness and master basic network security technology.
NASA Technical Reports Server (NTRS)
Holden, D. G.
1975-01-01
Hard Over Monitoring Equipment (HOME) has been designed to complement and enhance the flight safety of a flight research helicopter. HOME is an independent, highly reliable, and fail-safe special purpose computer that monitors the flight control commands issued by the flight control computer of the helicopter. In particular, HOME detects the issuance of a hazardous hard-over command for any of the four flight control axes and transfers the control of the helicopter to the flight safety pilot. The design of HOME incorporates certain reliability and fail-safe enhancement design features, such as triple modular redundancy, majority logic voting, fail-safe dual circuits, independent status monitors, in-flight self-test, and a built-in preflight exerciser. The HOME design and operation is described with special emphasis on the reliability and fail-safe aspects of the design.
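A minimal sketch of triple-modular-redundancy majority voting with a fail-safe fallback, in the spirit of the monitoring described above; the tolerance, values, and fail-safe action shown are hypothetical:

```python
# Sketch of TMR majority voting with a fail-safe path when no majority exists.
from typing import Optional

def majority_vote(a: float, b: float, c: float, tol: float = 1e-3) -> Optional[float]:
    """Return the value agreed on by at least two channels, or None if no
    two channels agree within tolerance (a detected, uncorrectable fault)."""
    if abs(a - b) <= tol:
        return (a + b) / 2
    if abs(a - c) <= tol:
        return (a + c) / 2
    if abs(b - c) <= tol:
        return (b + c) / 2
    return None

cmd = majority_vote(0.500, 0.501, 9.999)   # one channel hard-over, outvoted
if cmd is None:
    print("no majority: transfer control to safety pilot")  # fail-safe path
else:
    print(f"voted command: {cmd:.3f}")
```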
DOT National Transportation Integrated Search
1993-05-01
The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessment a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from replicated redundant hardware, as well as the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
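The Markov-style modeling mentioned above can be illustrated with a small continuous-time model of a triplex system with imperfect fault handling (coverage); the rates and coverage value are assumptions for illustration, not parameters from the paper:

```python
# Minimal sketch of a continuous-time Markov reliability model for a triplex
# system with imperfect fault handling, assuming exponential failure times.
import numpy as np
from scipy.linalg import expm

lam = 1e-4     # per-hour failure rate of one channel (hypothetical)
c = 0.99       # probability a first failure is detected and isolated (coverage)

# States: 0 = three good channels, 1 = two good channels, 2 = system failed.
Q = np.array([
    [-3 * lam,  3 * lam * c,  3 * lam * (1 - c)],
    [0.0,       -2 * lam,     2 * lam],
    [0.0,        0.0,         0.0],
])

p0 = np.array([1.0, 0.0, 0.0])
t = 10.0                               # mission time, hours
p_t = p0 @ expm(Q * t)                 # state probabilities at time t
print(f"reliability at t={t} h: {1 - p_t[2]:.9f}")
```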
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
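A minimal sketch of the stress-strength interference calculation that a reliability-based treatment substitutes for a fixed safety factor; the normal distributions and their parameters are hypothetical:

```python
# Sketch of stress-strength interference for a reliability-based assessment.
from math import sqrt
from scipy.stats import norm

mu_strength, sd_strength = 60.0, 4.0    # hypothetical strength statistics
mu_stress, sd_stress = 40.0, 5.0        # hypothetical applied stress statistics

# For independent normal strength R and stress S, P(failure) = P(R - S < 0).
beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
p_fail = norm.cdf(-beta)
print(f"reliability index beta = {beta:.2f}, P(failure) = {p_fail:.2e}")
```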
A probability-based approach for assessment of roadway safety hardware.
DOT National Transportation Integrated Search
2017-03-14
This report presents a general probability-based approach for assessment of roadway safety hardware (RSH). It was achieved using a reliability : analysis method and computational techniques. With the development of high-fidelity finite element (FE) m...
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes of the building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses the quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.
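A minimal sketch of one common entropy-weighting scheme for turning an index matrix into a synthesis score and a ranking; the scheme scores are hypothetical and all indexes are assumed to be benefit-type (already converted so that larger is better), so this illustrates the general approach rather than the paper's exact method:

```python
# Sketch of an entropy-weighted synthesis score over an index matrix.
import numpy as np

# rows: candidate construction schemes, columns: cost, progress, quality, safety
scores = np.array([
    [0.70, 0.80, 0.90, 0.85],
    [0.90, 0.60, 0.80, 0.90],
    [0.60, 0.90, 0.70, 0.80],
])

p = scores / scores.sum(axis=0)                      # column-normalised proportions
n = scores.shape[0]
entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)   # entropy per index
weights = (1 - entropy) / (1 - entropy).sum()        # entropy weights
synthesis = scores @ weights                         # synthesis score per scheme

for i, s in enumerate(synthesis, start=1):
    print(f"scheme {i}: synthesis score {s:.4f}")
print("preferred scheme:", int(np.argmax(synthesis)) + 1)
```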
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
Towards An Engineering Discipline of Computational Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mili, Ali; Sheldon, Frederick T; Jilani, Lamia Labed
2007-01-01
George Boole ushered in the era of modern logic by arguing that logical reasoning does not fall in the realm of philosophy, as it was considered up to his time, but in the realm of mathematics. As such, logical propositions and logical arguments are modeled using algebraic structures. Likewise, we submit that security attributes must be modeled as formal mathematical propositions that are subject to mathematical analysis. In this paper, we approach this problem by attempting to model security attributes in a refinement-like framework that has traditionally been used to represent reliability and safety claims. Keywords: Computable security attributes, survivability, integrity, dependability, reliability, safety, security, verification, testing, fault tolerance.
Are Handheld Computers Dependable? A New Data Collection System for Classroom-Based Observations
ERIC Educational Resources Information Center
Adiguzel, Tufan; Vannest, Kimberly J.; Parker, Richard I.
2009-01-01
Very little research exists on the dependability of handheld computers used in public school classrooms. This study addresses four dependability criteria--reliability, maintainability, availability, and safety--to evaluate a data collection tool on a handheld computer. Data were collected from five sources: (1) time-use estimations by 19 special…
Recent advances in computational structural reliability analysis methods
NASA Astrophysics Data System (ADS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
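A minimal sketch of a Monte Carlo estimate of system failure probability when several failure modes interact (a series system fails if any mode fails); the limit states and distributions are hypothetical stand-ins:

```python
# Sketch of Monte Carlo system failure probability with two interacting modes.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

load = rng.normal(10.0, 2.0, n)          # hypothetical applied load
strength = rng.normal(20.0, 2.5, n)      # strength against yielding
stiffness = rng.normal(5.0, 0.5, n)      # governs deflection

g_strength = strength - load             # mode 1: yielding if < 0
g_deflection = 3.0 - load / stiffness    # mode 2: deflection limit exceeded if < 0

system_failure = (g_strength < 0) | (g_deflection < 0)
print(f"estimated P(failure) = {system_failure.mean():.4f}")
```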
Work-a-day world of NPRDS: what makes it tick
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Nuclear Plant Reliability Data System (NPRDS) is a computer-based data bank of reliability information on safety-related nuclear-power-plant systems and components. Until January 1982, the system was administered by the American Nuclear Society 58.20 Subcommittee. The data base was maintained by Southwest Research Institute in San Antonio, Texas. In October 1982, it was decided that the Institute of Nuclear Power Operations (INPO) would maintain the data base on its own computer. The transition is currently in progress.
Software Reliability Issues Concerning Large and Safety Critical Software Systems
NASA Technical Reports Server (NTRS)
Kamel, Khaled; Brown, Barbara
1996-01-01
This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970's using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
Probabilistic design of fibre concrete structures
NASA Astrophysics Data System (ADS)
Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.
2017-09-01
Advanced computer simulation has recently become a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing, or shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, without or with prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as shape of tensile softening branch, high toughness and ductility are described in the paper. Since the variability of the fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and evaluation of structural performance, reliability and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by failure probability or by reliability index respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the utilized approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty or randomness of the material properties, obtained from material tests, is accounted for in the random distributions. Furthermore, degradation of the reinforced concrete materials, such as carbonation of concrete, corrosion of reinforcement, etc., can be accounted for in order to analyze life-cycle structural performance and to enable prediction of the structural reliability and safety in time development. The results can serve as a rational basis for design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated on results from two probabilistic studies with different types of concrete structures related to practical applications and made from various materials (with the parameters obtained from real material tests).
The 12th International Conference on Computer Safety, Reliability and Security
1993-10-29
The paper presents a methodology which can be used for the design of safety-critical systems; its adequacy is shown through the design and validation of a simple control system, a train set example, satisfying the safety condition.
Zuck, T F; Cumming, P D; Wallace, E L
2001-12-01
The safety of blood for transfusion depends, in part, on the reliability of the health history given by volunteer blood donors. To improve reliability, a pilot study evaluated the use of an interactive computer-based audiovisual donor interviewing system at a typical midwestern blood center in the United States. An interactive video screening system was tested in a community donor center environment on 395 volunteer blood donors. Of the donors using the system, 277 completed surveys regarding their acceptance of and opinions about the system. The study showed that an interactive computer-based audiovisual donor screening system was an effective means of conducting the donor health history. The majority of donors found the system understandable and favored the system over a face-to-face interview. Further, most donors indicated that they would be more likely to return if they were to be screened by such a system. Interactive computer-based audiovisual blood donor screening is useful and well accepted by donors; it may prevent a majority of errors and accidents that are reportable to the FDA; and it may contribute to increased safety and availability of the blood supply.
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
According to the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on analysis of the test results, a preliminary conclusion is obtained: the cloud computing platform can be applied to computation-intensive aerospace experiment workloads. For I/O-intensive workloads, traditional physical machines are recommended.
Software development for safety-critical medical applications
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
There are many computer-based medical applications in which safety and not reliability is the overriding concern. Reduced, altered, or no functionality of such systems is acceptable as long as no harm is done. A precise, formal definition of what software safety means is essential, however, before any attempt can be made to achieve it. Without this definition, it is not possible to determine whether a specific software entity is safe. A set of definitions pertaining to software safety will be presented and a case study involving an experimental medical device will be described. Some new techniques aimed at improving software safety will also be discussed.
NASA Technical Reports Server (NTRS)
Reveley, Mary S.
2003-01-01
The goal of the NASA Aviation Safety Program (AvSP) is to develop and demonstrate technologies that contribute to a reduction in the aviation fatal accident rate by a factor of 5 by the year 2007 and by a factor of 10 by the year 2022. Integrated safety analysis of day-to-day operations and risks within those operations will provide an understanding of the Aviation Safety Program portfolio. Safety benefits analyses are currently being conducted. Preliminary results for the Synthetic Vision Systems (SVS) and Weather Accident Prevention (WxAP) projects of the AvSP have been completed by the Logistics Management Institute under a contract with the NASA Glenn Research Center. These analyses include both a reliability analysis and a computer simulation model. The integrated safety analysis method comprises two principal components: a reliability model and a simulation model. In the reliability model, the results indicate how different technologies and systems will perform in normal, degraded, and failed modes of operation. In the simulation, an operational scenario is modeled. The primary purpose of the SVS project is to improve safety by providing visual-flightlike situation awareness during instrument conditions. The current analyses are an estimate of the benefits of SVS in avoiding controlled flight into terrain. The scenario modeled has an aircraft flying directly toward a terrain feature. When the flight crew determines that the aircraft is headed toward an obstruction, the aircraft executes a level turn at speed. The simulation is ended when the aircraft completes the turn.
Hierarchical specification of the SIFT fault tolerant flight control system
NASA Technical Reports Server (NTRS)
Melliar-Smith, P. M.; Schwartz, R. L.
1981-01-01
The specification and mechanical verification of the Software Implemented Fault Tolerance (SIFT) flight control system is described. The methodology employed in the verification effort is discussed, and a description of the hierarchical models of the SIFT system is given. To meet the objective of NASA for the reliability of safety critical flight control systems, the SIFT computer must achieve a reliability well beyond the levels at which reliability can actually be measured. The methodology employed to demonstrate rigorously that the SIFT computer meets its reliability requirements is described. The hierarchy of design specifications from very abstract descriptions of system function down to the actual implementation is explained. The most abstract design specifications can be used to verify that the system functions correctly and with the desired reliability, since almost all details of the realization were abstracted out. A succession of lower level models refine these specifications to the level of the actual implementation, and can be used to demonstrate that the implementation has the properties claimed of the abstract design specifications.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.
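A minimal sketch of the alpha-percentile performance (inverse reliability) feasibility check used in performance-measure-approach RBDO, here estimated with plain Monte Carlo rather than the paper's imperialist competitive algorithm; the limit state and distributions are hypothetical:

```python
# Sketch of an alpha-percentile performance check for a probabilistic constraint.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
target_reliability = 0.999           # required probability that g >= 0
alpha = 1 - target_reliability       # allowed failure fraction

def g(x1, x2):
    """Hypothetical performance function: positive means the constraint is met."""
    return 1.5 * x1 - x2

x1 = rng.normal(10.0, 0.5, n)        # e.g., member capacity parameter
x2 = rng.normal(10.0, 1.0, n)        # e.g., load effect

g_alpha = np.quantile(g(x1, x2), alpha)   # alpha-percentile performance
print(f"alpha-percentile performance: {g_alpha:.3f}")
print("probabilistic constraint satisfied" if g_alpha >= 0 else "constraint violated")
```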
DOT National Transportation Integrated Search
1976-07-01
The Federal Railroad Administration (FRA) is sponsoring research, development, and demonstration programs to provide improved safety, performance, speed, reliability, and maintainability of rail transportation systems at reduced life-cycle costs. A m...
High-reliability computing for the smarter planet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Graham, Paul; Manuzzato, Andrea
2010-01-01
The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows. Already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
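A minimal sketch of an object-oriented fault-tree evaluation with AND/OR gate objects, assuming statistically independent basic events; it does not reproduce the repeated-event optimization that the abstract highlights:

```python
# Sketch of an object-oriented fault-tree evaluation (independent basic events).
class Event:
    def probability(self) -> float:
        raise NotImplementedError

class BasicEvent(Event):
    def __init__(self, name: str, p: float):
        self.name, self.p = name, p
    def probability(self) -> float:
        return self.p

class AndGate(Event):
    def __init__(self, *inputs: Event):
        self.inputs = inputs
    def probability(self) -> float:       # all inputs must occur
        prob = 1.0
        for e in self.inputs:
            prob *= e.probability()
        return prob

class OrGate(Event):
    def __init__(self, *inputs: Event):
        self.inputs = inputs
    def probability(self) -> float:       # at least one input occurs
        prob_none = 1.0
        for e in self.inputs:
            prob_none *= 1.0 - e.probability()
        return 1.0 - prob_none

# Hypothetical top event: both redundant pumps fail, or the controller fails.
pump_a, pump_b = BasicEvent("pump A", 1e-3), BasicEvent("pump B", 1e-3)
controller = BasicEvent("controller", 1e-5)
top = OrGate(AndGate(pump_a, pump_b), controller)
print(f"top event probability: {top.probability():.2e}")
```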
Autonomous safety and reliability features of the K-1 avionics system
NASA Astrophysics Data System (ADS)
Mueller, George E.; Kohrs, Dick; Bailey, Richard; Lai, Gary
2004-03-01
Kistler Aerospace Corporation is developing the K-1, a fully reusable, two-stage-to-orbit launch vehicle. Both stages return to the launch site using parachutes and airbags. Initial flight operations will occur from Woomera, Australia. K-1 guidance is performed autonomously. Each stage of the K-1 employs a triplex, fault tolerant avionics architecture, including three fault tolerant computers and three radiation hardened Embedded GPS/INS units with a hardware voter. The K-1 has an Integrated Vehicle Health Management (IVHM) system on each stage residing in the three vehicle computers based on similar systems in commercial aircraft. During first-stage ascent, the IVHM system performs an Instantaneous Impact Prediction (IIP) calculation 25 times per second, initiating an abort in the event the vehicle is outside a predetermined safety corridor for at least 3 consecutive calculations. In this event, commands are issued to terminate thrust, separate the stages, dump all propellant in the first-stage, and initiate a normal landing sequence. The second-stage flight computer calculates its ability to reach orbit along its state vector, initiating an abort sequence similar to the first stage if it cannot. On a nominal mission, following separation, the second-stage also performs calculations to assure its impact point is within a safety corridor. The K-1's guidance and control design is being tested through simulation with hardware-in-the-loop at Draper Laboratory. Kistler's verification strategy assures reliable and safe operation of the K-1.
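A minimal sketch of the "three consecutive out-of-corridor predictions" abort logic described above, evaluated at a 25 Hz cycle; the corridor test itself is a hypothetical placeholder:

```python
# Sketch of abort logic requiring three consecutive out-of-corridor IIP results.
from typing import Tuple

ABORT_THRESHOLD = 3            # consecutive out-of-corridor IIP results

def iip_in_corridor(impact_point: Tuple[float, float]) -> bool:
    """Hypothetical corridor check on the instantaneous impact prediction."""
    downrange, crossrange = impact_point
    return 0.0 <= downrange <= 400.0 and abs(crossrange) <= 30.0

def monitor(impact_predictions) -> int:
    """Return the 25 Hz cycle index at which an abort is commanded, or -1."""
    consecutive_violations = 0
    for cycle, iip in enumerate(impact_predictions):
        if iip_in_corridor(iip):
            consecutive_violations = 0
        else:
            consecutive_violations += 1
            if consecutive_violations >= ABORT_THRESHOLD:
                return cycle   # terminate thrust, separate stages, dump propellant
    return -1

trace = [(100.0, 5.0), (120.0, 35.0), (125.0, 36.0), (130.0, 37.0), (135.0, 38.0)]
print("abort at cycle:", monitor(trace))   # cycle 3: third consecutive violation
```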
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
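A minimal sketch of cut-set ranking and a Fussell-Vesely-style importance measure under the rare-event approximation with assumed independent basic events; the probabilities are hypothetical, and this is not the IMPORTANCE code:

```python
# Sketch of cut-set ranking and Fussell-Vesely importance (rare-event approx.).
from math import prod

basic_event_p = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4, "power": 1e-5}
minimal_cut_sets = [("pump_a", "pump_b"), ("valve", "pump_a"), ("power",)]

cut_set_p = {cs: prod(basic_event_p[e] for e in cs) for cs in minimal_cut_sets}
top_p = sum(cut_set_p.values())          # rare-event approximation of top event

print("cut sets ranked by probability:")
for cs, p in sorted(cut_set_p.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {cs}: {p:.2e}")

print("Fussell-Vesely importance of basic events:")
for event in basic_event_p:
    contribution = sum(p for cs, p in cut_set_p.items() if event in cs)
    print(f"  {event}: {contribution / top_p:.3f}")
```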
NDE: A key to engine rotor life prediction
NASA Technical Reports Server (NTRS)
Doherty, J. E.
1977-01-01
A key ingredient in the establishment of safe life times for critical components is the means of reliably detecting flaws which may potentially exist. Currently used nondestructive evaluation procedures are successful in detecting life limiting defects; however, the development of automated and computer aided NDE technology permits even greater assurance of flight safety.
Probabilistic assessment of dynamic system performance. Part 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belhadj, Mohamed
1993-01-01
Accurate prediction of dynamic system failure behavior can be important for the reliability and risk analyses of nuclear power plants, as well as for their backfitting to satisfy given constraints on overall system reliability, or optimization of system performance. Global analysis of dynamic systems through investigating the variations in the structure of the attractors of the system and the domains of attraction of these attractors as a function of the system parameters is also important for nuclear technology in order to understand the fault-tolerance as well as the safety margins of the system under consideration and to insure a safe operation of nuclear reactors. Such a global analysis would be particularly relevant to future reactors with inherent or passive safety features that are expected to rely on natural phenomena rather than active components to achieve and maintain safe shutdown. Conventionally, failure and global analysis of dynamic systems necessitate the utilization of different methodologies which have computational limitations on the system size that can be handled. Using a Chapman-Kolmogorov interpretation of system dynamics, a theoretical basis is developed that unifies these methodologies as special cases and which can be used for a comprehensive safety and reliability analysis of dynamic systems.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations including the higher-order reliability methods (HORM) for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter I discusses the fundamental definitions of the probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definition of safety index and most probable point of failure are introduced. Efficient ways of computing the safety index with a fewer number of iterations is emphasized. In chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
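A minimal sketch of a FORM safety-index calculation using the Hasofer-Lind / Rackwitz-Fiessler iteration in standard normal space; the nonlinear limit state below is a hypothetical example, not one from the report:

```python
# Sketch of FORM: find the most probable failure point (MPP) and safety index.
import numpy as np
from scipy.stats import norm

def g(u):
    """Hypothetical nonlinear limit state in standard normal variables (g < 0 = failure)."""
    return 3.0 + 0.5 * u[0] ** 2 - u[0] - 2.0 * u[1]

def grad(f, u, h=1e-6):
    """Central-difference gradient."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    for i in range(u.size):
        step = np.zeros_like(u)
        step[i] = h
        out[i] = (f(u + step) - f(u - step)) / (2 * h)
    return out

u = np.zeros(2)                      # start at the mean point
for _ in range(50):                  # HL-RF iteration toward the MPP
    grad_g = grad(g, u)
    u_new = (grad_g @ u - g(u)) / (grad_g @ grad_g) * grad_g
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)             # safety (reliability) index
print(f"MPP = {u}, beta = {beta:.3f}, Pf (FORM) = {norm.cdf(-beta):.2e}")
```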
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Computer assisted surgery with 3D robot models and visualisation of the telesurgical action.
Rovetta, A
2000-01-01
This paper deals with the support that virtual reality and computer modelling provide in surgical robotics procedures. Computer support gives a direct representation of the surgical theatre. Modelling the procedure in progress and in development fosters confidence in safety and reliability. Robots similar to the ones used by the manufacturing industry can be used with little modification as very effective surgical tools. They have high precision and repeatability and are versatile in integrating with the medical instrumentation. Integrated surgical rooms, with computer- and robot-assisted intervention, are now operating. The computer serves as a decision-making aid, and the robot works as a very effective tool.
Addressing Uniqueness and Unison of Reliability and Safety for a Better Integration
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Safie, Fayssal
2016-01-01
Over time, it has been observed that Safety and Reliability have not been clearly differentiated, which leads to confusion, inefficiency, and, sometimes, counter-productive practices in executing each of these two disciplines. It is imperative to address this situation to help Reliability and Safety disciplines improve their effectiveness and efficiency. The paper poses an important question to address, "Safety and Reliability - Are they unique or unisonous?" To answer the question, the paper reviewed several most commonly used analyses from each of the disciplines, namely, FMEA, reliability allocation and prediction, reliability design involvement, system safety hazard analysis, Fault Tree Analysis, and Probabilistic Risk Assessment. The paper pointed out uniqueness and unison of Safety and Reliability in their respective roles, requirements, approaches, and tools, and presented some suggestions for enhancing and improving the individual disciplines, as well as promoting the integration of the two. The paper concludes that Safety and Reliability are unique, but compensating each other in many aspects, and need to be integrated. Particularly, the individual roles of Safety and Reliability need to be differentiated, that is, Safety is to ensure and assure the product meets safety requirements, goals, or desires, and Reliability is to ensure and assure maximum achievability of intended design functions. With the integration of Safety and Reliability, personnel can be shared, tools and analyses have to be integrated, and skill sets can be possessed by the same person with the purpose of providing the best value to a product development.
NASA Technical Reports Server (NTRS)
Atwell, William; Koontz, Steve; Normand, Eugene
2012-01-01
In this paper we review the discovery of cosmic ray effects on the performance and reliability of microelectronic systems as well as on human health and safety, as well as the development of the engineering and health science tools used to evaluate and mitigate cosmic ray effects in earth surface, atmospheric flight, and space flight environments. Three twentieth century technological developments, 1) high altitude commercial and military aircraft; 2) manned and unmanned spacecraft; and 3) increasingly complex and sensitive solid state micro-electronics systems, have driven an ongoing evolution of basic cosmic ray science into a set of practical engineering tools (e.g. ground based test methods as well as high energy particle transport and reaction codes) needed to design, test, and verify the safety and reliability of modern complex electronic systems as well as effects on human health and safety. The effects of primary cosmic ray particles, and secondary particle showers produced by nuclear reactions with spacecraft materials, can determine the design and verification processes (as well as the total dollar cost) for manned and unmanned spacecraft avionics systems. Similar considerations apply to commercial and military aircraft operating at high latitudes and altitudes near the atmospheric Pfotzer maximum. Even ground based computational and controls systems can be negatively affected by secondary particle showers at the Earth's surface, especially if the net target area of the sensitive electronic system components is large. Accumulation of both primary cosmic ray and secondary cosmic ray induced particle shower radiation dose is an important health and safety consideration for commercial or military air crews operating at high altitude/latitude and is also one of the most important factors presently limiting manned space flight operations beyond low-Earth orbit (LEO).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)
2001-01-01
The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principle goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks); some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC) which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for design, development, implementation, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall scheme to engine controller development is further improved and vehicle safety is further insured. The final product that this paper proposes is an approach to development of an alternative low cost engine controller that would be capable of performing in unique vision spacecraft vehicles requiring low cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.
2007 Beyond SBIR Phase II: Bringing Technology Edge to the Warfighter
2007-08-23
Topics addressed include: systems trade-off analysis and optimization; verification and validation; on-board diagnostics and self-healing; security and anti-tampering; safety and reliability analysis of flight and mission-critical systems; model-based monitoring and self-healing; autonomic computing; network intrusion detection and prevention; and anti-tampering and trust.
Inter-Vehicle Communication System Utilizing Autonomous Distributed Transmit Power Control
NASA Astrophysics Data System (ADS)
Hamada, Yuji; Sawa, Yoshitsugu; Goto, Yukio; Kumazawa, Hiroyuki
In ad-hoc networks such as inter-vehicle communication (IVC) systems, safety applications in which vehicles periodically broadcast information such as velocity and position are considered. In these applications, if many vehicles broadcast data within a communication area, congestion becomes a problem that decreases communication reliability. We propose an autonomous distributed transmit power control method to keep communication reliability high. In this method, each vehicle controls its transmit power using feedback control. Furthermore, we design a communication protocol to realize the proposed method, and we evaluate its effectiveness using computer simulation.
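A minimal sketch of an autonomous, per-vehicle feedback loop that adjusts broadcast transmit power toward a target channel load; the gain, limits, and load measure are assumptions for illustration and not the protocol defined in the paper:

```python
# Sketch of per-vehicle proportional feedback control of broadcast transmit power.
def update_tx_power(current_power_dbm: float,
                    measured_busy_ratio: float,
                    target_busy_ratio: float = 0.4,
                    gain_db: float = 10.0,
                    p_min_dbm: float = 5.0,
                    p_max_dbm: float = 23.0) -> float:
    """Reduce power when the channel is busier than the target (too many
    neighbours heard), raise it when the channel is idle."""
    correction = gain_db * (target_busy_ratio - measured_busy_ratio)
    return min(p_max_dbm, max(p_min_dbm, current_power_dbm + correction))

power = 20.0
for busy in (0.7, 0.6, 0.5, 0.45, 0.4):    # congested channel gradually relaxing
    power = update_tx_power(power, busy)
    print(f"busy={busy:.2f} -> tx power {power:.1f} dBm")
```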
Light aircraft crash safety program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.
1974-01-01
NASA is embarked upon research and development tasks aimed at providing the general aviation industry with a reliable crashworthy airframe design technology. The goals of the NASA program are: reliable analytical techniques for predicting the nonlinear behavior of structures; significant design improvements of airframes; and simulated full-scale crash test data. The analytical tools will include both simplified procedures for estimating energy absorption characteristics and more complex computer programs for analysis of general airframe structures under crash loading conditions. The analytical techniques being developed both in-house and under contract are described, and a comparison of some analytical predictions with experimental results is shown.
NASA Technical Reports Server (NTRS)
Bekele, Gete
2002-01-01
This document explores the use of advanced computer technologies with an emphasis on object-oriented design to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) To use current sound software engineering practices, object-orientation; 2) To improve on software development time, maintenance, execution and management; 3) To provide an alternate design choice for control, implementation, and performance.
Micro Computer Tomography for medical device and pharmaceutical packaging analysis.
Hindelang, Florine; Zurbach, Raphael; Roggo, Yves
2015-04-10
Biomedical device and medicinal product manufacturing are long processes facing global competition. As technology evolves, the expected levels of quality, safety and reliability increase simultaneously. Micro Computer Tomography (Micro CT) is a tool allowing deep investigation of products: it can contribute to quality improvement. This article presents the numerous applications of Micro CT for medical device and pharmaceutical packaging analysis. The samples investigated confirmed the suitability of CT for verification of integrity, measurements and defect detection in a non-destructive manner. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurata, Masaki; Devanathan, Ramaswami
2015-10-13
Free energy and heat capacity of actinide elements and compounds are important properties for evaluating the safe and reliable performance of nuclear fuel. They are essential inputs for models that describe the complex phenomena governing the behaviour of actinide compounds during nuclear fuel fabrication and irradiation. This chapter introduces various experimental methods to measure free energy and heat capacity to serve as inputs for models and to validate computer simulations. This is followed by a discussion of computer simulation of these properties, and recent simulations of thermophysical properties of nuclear fuel are briefly reviewed.
NASA/Army Rotorcraft Transmission Research, a Review of Recent Significant Accomplishments
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
1994-01-01
A joint helicopter transmission research program between NASA Lewis Research Center and the U.S. Army Research Lab has existed since 1970. Research goals are to reduce weight and noise while increasing life, reliability, and safety. These research goals are achieved by the NASA/Army Mechanical Systems Technology Branch through both in-house research and cooperative research projects with university and industry partners. Some recent significant technical accomplishments produced by this cooperative research are reviewed. The following research projects are reviewed: oil-off survivability of tapered roller bearings, design and evaluation of high contact ratio gearing, finite element analysis of spiral bevel gears, computer numerical control grinding of spiral bevel gears, gear dynamics code validation, computer program for life and reliability of helicopter transmissions, planetary gear train efficiency study, and the Advanced Rotorcraft Transmission (ART) program.
Vasak, Christoph; Watzak, Georg; Gahleitner, André; Strbac, Georg; Schemper, Michael; Zechner, Werner
2011-10-01
This prospective study was intended to evaluate the overall deviation in a clinical treatment setting in order to quantify the potential impairment of treatment safety and reliability with computer-assisted, template-guided transgingival implantation. The patient population enrolled (male/female = 10/8) presented with partially dentate and edentulous maxillae and mandibles. Overall, 86 implants were placed by two experienced dental surgeons strictly following the NobelGuide™ protocol for template-guided implantation. All patients had a postoperative computed tomography (CT) scan with settings identical to the preoperative examination. Using the triple scan technique, pre- and postoperative CT data were merged in the Procera planning software, a newly developed procedure (initially presented in 2007) allowing measurement of the deviations at the implant shoulder and apex. The deviations measured averaged 0.43 mm (bucco-lingual), 0.46 mm (mesio-distal) and 0.53 mm (depth) at the level of the implant shoulder and were slightly higher at the implant apex, averaging 0.7 mm (bucco-lingual), 0.63 mm (mesio-distal) and 0.52 mm (depth). The maximum deviation of 2.02 mm was encountered in the corono-apical direction. Significantly lower deviations were seen for implants in the anterior region vs. the posterior region (P<0.01, 0.31 vs. 0.5 mm), and deviations were also significantly lower in the mandible than in the maxilla (P=0.04, 0.36 vs. 0.45 mm) in the mesio-distal direction. Moreover, a significant correlation between deviation and mucosal thickness was seen, and a learning effect was found over the period during which the surgical procedures were performed. Template-guided implantation will ensure reliable transfer of preoperative computer-assisted planning into surgical practice. With regard to the required verification of treatment reliability of an implantation system with flapless access, all maximum deviations measured in this clinical study were within the safety margins recommended by the planning software. © 2011 John Wiley & Sons A/S.
Addressing Unison and Uniqueness of Reliability and Safety for Better Integration
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Safie, Fayssal
2015-01-01
For a long time, both in theory and in practice, safety and reliability have not been clearly differentiated, which leads to confusion, inefficiency, and sometimes counterproductive practices in executing each of these two disciplines. It is imperative to address the uniqueness and the unison of these two disciplines to help both become more effective and to promote a better integration of the two for enhancing safety and reliability in our products as an overall objective. There are two purposes of this paper. First, it investigates the uniqueness and unison of each discipline and discusses the interrelationship between the two for awareness and clarification. Second, after clearly understanding the unique roles and interrelationship between the two in a product design and development life cycle, we offer suggestions to enhance the disciplines with distinguished and focused roles, to better integrate the two, and to improve the unique sets of skills and tools of the reliability and safety processes. From the uniqueness aspect, the paper identifies and discusses the respective uniqueness of reliability and safety in terms of their roles, accountability, nature of requirements, technical scopes, detailed technical approaches, and analysis boundaries. It is misleading to equate unreliable with unsafe, since a safety hazard may or may not be related to the component, sub-system, or system functions, which are primarily what reliability addresses. Similarly, failing to function may or may not lead to hazard events. Examples are given in the paper from aerospace, defense, and consumer products to illustrate the uniqueness and differences between reliability and safety. From the unison aspect, the paper discusses the commonalities between reliability and safety, and how the two disciplines are linked, integrated, and supplemented with each other to accomplish the customer requirements and product goals. In addition to understanding the uniqueness in reliability and safety, a better understanding of unison and commonalities will further help in understanding the interaction between reliability and safety. The paper presents suggestions for better integration of the two disciplines in terms of technical approaches, tools, techniques, and skills to enhance the role of reliability and safety in supporting a product design and development life cycle. It also discusses eliminating redundant effort and minimizing the overlap of reliability and safety analyses for an efficient implementation of the two disciplines.
Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An
2016-01-01
With consumer concerns over food quality and safety increasing, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, has the capability to estimate the characteristics of food products with the advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review presents the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discusses future trends in combination with spectroscopy.
Pilots of the future - Human or computer?
NASA Technical Reports Server (NTRS)
Chambers, A. B.; Nagel, D. C.
1985-01-01
In connection with the occurrence of aircraft accidents and the evolution of the air-travel system, questions arise regarding the computer's potential for making fundamental contributions to improving the safety and reliability of air travel. An important result of an analysis of the causes of aircraft accidents is the conclusion that humans - 'pilots and other personnel' - are implicated in well over half of the accidents which occur. Over 70 percent of the incident reports contain evidence of human error. In addition, almost 75 percent show evidence of an 'information-transfer' problem. Thus, the question arises whether improvements in air safety could be achieved by removing humans from control situations. In an attempt to answer this question, it is important to take into account also certain advantages which humans have in comparison to computers. Attention is given to human error and the effects of technology, the motivation to automate, aircraft automation at the crossroads, the evolution of cockpit automation, and pilot factors.
An Automated Safe-to-Mate (ASTM) Tester
NASA Technical Reports Server (NTRS)
Nguyen, Phuc; Scott, Michelle; Leung, Alan; Lin, Michael; Johnson, Thomas
2013-01-01
Safe-to-mate testing is a common hardware safety practice in which impedance measurements are made on unpowered hardware to verify isolation, continuity, or impedance between pins of an interface connector. A computer-based instrumentation solution has been developed to automate this process. The ASTM is connected to the circuit under test and can then quickly, safely, and reliably safe-to-mate the entire connector, or even multiple connectors, at the same time.
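A hedged sketch of the core check an automated safe-to-mate tester performs (the pin names, limits, and measurement callback below are hypothetical; the real instrument drives a switching matrix and a meter rather than a stub):

```python
def safe_to_mate(measure_ohms, pin_pairs, limits):
    """Check every connector pin pair against its isolation/continuity limit.
    measure_ohms(pin_a, pin_b) is assumed to return resistance in ohms;
    limits maps a pin pair to ('min', ohms) for isolation checks or
    ('max', ohms) for continuity checks."""
    failures = []
    for pair in pin_pairs:
        value = measure_ohms(*pair)
        kind, limit = limits[pair]
        ok = value >= limit if kind == 'min' else value <= limit
        if not ok:
            failures.append((pair, value, kind, limit))
    return failures   # an empty list means the connector is safe to mate

# Example with a stubbed measurement: P1-P2 must be isolated, P3-P4 continuous
readings = {('P1', 'P2'): 2.0e6, ('P3', 'P4'): 150.0}
limits = {('P1', 'P2'): ('min', 1.0e6), ('P3', 'P4'): ('max', 10.0)}
print(safe_to_mate(lambda a, b: readings[(a, b)], list(limits), limits))
```

Running the whole table in one pass is what lets the automated tester cover an entire connector, or several connectors, at once.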
NASA Technical Reports Server (NTRS)
Askew, John C.
1994-01-01
An alternative to the immersion process for the electrodeposition of chromium from aqueous solutions on the inside diameter (ID) of long tubes is described. The Vessel Plating Process eliminates the need for deep processing tanks, large volumes of solutions, and associated safety and environmental concerns. Vessel Plating allows the process to be monitored and controlled by computer thus increasing reliability, flexibility and quality. Elimination of the trivalent chromium accumulation normally associated with ID plating is intrinsic to the Vessel Plating Process. The construction and operation of a prototype Vessel Plating Facility with emphasis on materials of construction, engineered and operational safety and a unique system for rinse water recovery are described.
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware takes the form of a triple-redundant hot-standby configuration comprising three identical operator stations and three real-time control computers connected to each other through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting, and fault detection is presented in detail. A hierarchical software architecture was planned during development, consisting of an application layer, a real-time layer, and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled as a Markov model to analyze its reliability, and the effects of parameter variations on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
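A minimal sketch of the majority-voting step described for the triple-redundant control computers (the tolerance and the representation of a channel command are assumptions for illustration, not the DP-3 implementation):

```python
def majority_vote(ch_a, ch_b, ch_c, tol=1e-3):
    """2-out-of-3 vote on a numeric command from three redundant channels.
    Returns (voted_value, suspect_channels): channels disagreeing with the
    majority by more than `tol` are flagged for the redundancy manager."""
    values = [ch_a, ch_b, ch_c]
    agree = lambda x, y: abs(x - y) <= tol
    for i in range(3):
        peers = [values[j] for j in range(3) if j != i and agree(values[i], values[j])]
        if peers:  # at least two channels agree: vote their mid-value
            voted = sorted([values[i]] + peers)[len(peers) // 2]
            suspects = [j for j in range(3) if not agree(values[j], voted)]
            return voted, suspects
    return None, [0, 1, 2]   # no two channels agree: flag all for fault handling

print(majority_vote(10.00, 10.00, 13.7))   # -> (10.0, [2]) : channel 2 is suspect
```

The flagged channel list is what a redundancy manager would use for fault detection and reconfiguration, as the abstract describes.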
NASA Technical Reports Server (NTRS)
Prosser, Bill
2016-01-01
Advanced nondestructive measurement techniques are critical for ensuring the reliability and safety of NASA spacecraft. Techniques such as infrared thermography, THz imaging, X-ray computed tomography and backscatter X-ray are used to detect indications of damage in spacecraft components and structures. Additionally, sensor and measurement systems are integrated into spacecraft to provide structural health monitoring to detect damaging events that occur during flight, such as debris impacts during launch and ascent or from micrometeoroids and orbital debris, or excessive loading due to anomalous flight conditions. A number of examples are provided of how these nondestructive measurement techniques have been applied to resolve safety-critical inspection concerns for the Space Shuttle, International Space Station (ISS), and a variety of launch vehicles and unmanned spacecraft.
Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution
NASA Astrophysics Data System (ADS)
Hwang, Inseok
The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing their tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft in noisy environments and of their identities. Secondly, we propose a hybrid probabilistic conflict detection algorithm for multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety-critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
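The conjugate Gamma-Poisson update is one common form of the Bayesian failure-rate updating this kind of tool performs; the sketch below is illustrative only, and the prior parameters are assumed rather than taken from DATMAN.

```python
def update_failure_rate(prior_alpha, prior_beta, failures, exposure_hours):
    """Bayesian update of a constant failure rate lambda with a conjugate
    Gamma(alpha, beta) prior and Poisson-distributed failure counts:
    the posterior is Gamma(alpha + failures, beta + exposure_hours)."""
    post_alpha = prior_alpha + failures
    post_beta = prior_beta + exposure_hours
    mean_rate = post_alpha / post_beta          # posterior mean of lambda (per hour)
    return post_alpha, post_beta, mean_rate

# Example: weakly informative prior, then 2 failures observed in 10,000 hours
a, b, lam = update_failure_rate(prior_alpha=0.5, prior_beta=1000.0,
                                failures=2, exposure_hours=10000.0)
print(f"posterior Gamma({a}, {b}), mean failure rate = {lam:.2e} per hour")
```

As more operating time and failures accumulate, the posterior narrows, which is exactly the data-updating behavior the PM/RCM workflow relies on.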
NASA Technical Reports Server (NTRS)
Moser, Louise; Melliar-Smith, Michael; Schwartz, Richard
1987-01-01
A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety-critical flight control applications through processor replication and voting, was constructed by SRI and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT produced by a Markov reliability model, SRI constructed a formal specification defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, do indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof and describes the various specifications and proofs performed.
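As a sketch of the kind of Markov reliability model mentioned above (the failure rate, state set, and mission time are invented for illustration and do not reproduce the SIFT model, which also accounts for fault detection coverage and reconfiguration):

```python
import numpy as np

def triplex_reliability(lam=1e-4, mission_hours=10.0, dt=0.01):
    """Numerically integrate a simple Markov model of a triplex voting
    computer: states = {3 good, 2 good (one failure masked), failed}.
    lam is the per-processor failure rate per hour; a second failure in
    the duplex state is assumed to defeat the voter."""
    p = np.array([1.0, 0.0, 0.0])               # start with all three healthy
    steps = int(mission_hours / dt)
    for _ in range(steps):
        dp = np.array([-3 * lam * p[0],
                        3 * lam * p[0] - 2 * lam * p[1],
                        2 * lam * p[1]])
        p = p + dp * dt                          # forward-Euler step
    return 1.0 - p[2]                            # reliability = P(not failed)

print(f"{triplex_reliability():.9f}")            # ~1 - 3*(lam*t)^2 for small lam*t
```

The formal-proof work described in the abstract is precisely about showing that the design assumptions feeding such a model actually hold for the implemented system.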
Aerospace Safety Advisory Panel
NASA Technical Reports Server (NTRS)
1992-01-01
The results of the Panel's activities are presented in a set of findings and recommendations. Highlighted here are both improvements in NASA's safety and reliability activities and specific areas where additional gains might be realized. One area of particular concern involves the curtailment or elimination of Space Shuttle safety and reliability enhancements. Several findings and recommendations address this area of concern, reflecting the opinion that safety and reliability enhancements are essential to the continued successful operation of the Space Shuttle. It is recommended that a comprehensive and continuing program of safety and reliability improvements in all areas of Space Shuttle hardware/software be considered an inherent component of ongoing Space Shuttle operations.
Interrelation Between Safety Factors and Reliability
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)
2001-01-01
An evaluation was performed to establish the relationship between safety factors and reliability. Results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed in terms of the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several modes of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1: Random Actual Stress and Deterministic Yield Stress; Part 2: Deterministic Actual Stress and Random Yield Stress; Part 3: Both Actual Stress and Yield Stress Are Random.
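The Part 1 case (random actual stress, deterministic yield stress) admits a simple closed-form link between the central safety factor and reliability. The sketch below illustrates that relation for a normally distributed stress; the coefficient of variation is chosen arbitrarily and is not a value from the report.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def reliability_from_safety_factor(safety_factor, cov_stress):
    """Reliability when the yield stress is deterministic and the actual
    stress is normally distributed: with central safety factor
    sf = yield / mean stress and cov = std(stress) / mean(stress),
    the reliability index is beta = (sf - 1) / cov and R = Phi(beta)."""
    beta = (safety_factor - 1.0) / cov_stress
    return phi(beta)

# Example: a safety factor of 1.4 with 10% scatter in the applied stress
print(f"R = {reliability_from_safety_factor(1.4, 0.10):.6f}")   # Phi(4.0) ~ 0.99997
```

Inverting the same relation is what allows a safety factor to be specified from a required reliability level, which is the avenue the report points to.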
Safety, reliability, maintainability and quality provisions for the Space Shuttle program
NASA Technical Reports Server (NTRS)
1990-01-01
This publication establishes common safety, reliability, maintainability and quality provisions for the Space Shuttle Program. NASA Centers shall use this publication both as the basis for negotiating safety, reliability, maintainability and quality requirements with Shuttle Program contractors and as the guideline for conduct of program safety, reliability, maintainability and quality activities at the Centers. Centers shall assure that applicable provisions of the publication are imposed in lower tier contracts. Centers shall give due regard to other Space Shuttle Program planning in order to provide an integrated total Space Shuttle Program activity. In the implementation of safety, reliability, maintainability and quality activities, consideration shall be given to hardware complexity, supplier experience, state of hardware development, unit cost, and hardware use. The approach and methods for contractor implementation shall be described in the contractors safety, reliability, maintainability and quality plans. This publication incorporates provisions of NASA documents: NHB 1700.1 'NASA Safety Manual, Vol. 1'; NHB 5300.4(IA), 'Reliability Program Provisions for Aeronautical and Space System Contractors'; and NHB 5300.4(1B), 'Quality Program Provisions for Aeronautical and Space System Contractors'. It has been tailored from the above documents based on experience in other programs. It is intended that this publication be reviewed and revised, as appropriate, to reflect new experience and to assure continuing viability.
Data systems and computer science: Software Engineering Program
NASA Technical Reports Server (NTRS)
Zygielbaum, Arthur I.
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.
Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...
NASA Technical Reports Server (NTRS)
Bryant, W. H.; Morrell, F. R.
1981-01-01
An experimental redundant strapdown inertial measurement unit (RSDIMU) is developed as a link to satisfy safety and reliability considerations in the integrated avionics concept. The unit includes four two-degree-of-freedom tuned-rotor gyros and four accelerometers in a skewed and separable semi-octahedral array. These sensors are coupled to four microprocessors which compensate for sensor errors. The microprocessors are interfaced with two flight computers which process failure detection, isolation, redundancy management, and general flight control/navigation algorithms. Since the RSDIMU is a developmental unit, it is imperative that the flight computers provide special visibility and facility in algorithm modification.
Enhancing point of care vigilance using computers.
St Jacques, Paul; Rothman, Brian
2011-09-01
Information technology has the potential to provide a tremendous step forward in perioperative patient safety. Through automated delivery of information through fixed and portable computer resources, clinicians may achieve improved situational awareness of the overall operation of the operating room suite and the state of individual patients in various stages of surgical care. Coupling the raw, but integrated, information with decision support and alerting algorithms enables clinicians to achieve high reliability in documentation compliance and response to care protocols. Future studies and outcomes analysis are needed to quantify the degree of benefit of these new components of perioperative information systems. Copyright © 2011 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lala, J.H.; Nagle, G.A.; Harper, R.E.
1993-05-01
The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev control computer system has been designed using a design-for-validation methodology developed earlier under NASA and SDIO sponsorship for real-time aerospace applications. The present study starts by defining the maglev mission scenario and ends with the definition of a maglev control computer architecture. Key intermediate steps included definitions of functional and dependability requirements, synthesis of two candidate architectures, development of qualitative and quantitative evaluation criteria, and analytical modeling of the dependability characteristics of the two architectures. Finally, the applicability of the design-for-validation methodology was also illustrated by applying it to the German Transrapid TR07 maglev control system.
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I
1995-01-01
Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and have used it effectively for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by processing in real time the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesia chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.
NASA Technical Reports Server (NTRS)
2001-01-01
This is a listing of recent unclassified RTO technical publications processed by the NASA Center for AeroSpace Information from January 1, 2001 through March 31, 2001 available on the NASA Aeronautics and Space Database. Contents include 1) Cognitive Task Analysis; 2) RTO Educational Notes; 3) The Capability of Virtual Reality to Meet Military Requirements; 4) Aging Engines, Avionics, Subsystems and Helicopters; 5) RTO Meeting Proceedings; 6) RTO Technical Reports; 7) Low Grazing Angle Clutter...; 8) Verification and Validation Data for Computational Unsteady Aerodynamics; 9) Space Observation Technology; 10) The Human Factor in System Reliability...; 11) Flight Control Design...; 12) Commercial Off-the-Shelf Products in Defense Applications.
Improving patient safety: patient-focused, high-reliability team training.
McKeon, Leslie M; Cunningham, Patricia D; Oswaks, Jill S Detty
2009-01-01
Healthcare systems are recognizing "human factor" flaws that result in adverse outcomes. Nurses work around system failures, although increasing healthcare complexity makes this harder to do without risk of error. Aviation and military organizations achieve ultrasafe outcomes through high-reliability practice. We describe how reliability principles were used to teach nurses to improve patient safety at the front line of care. Outcomes include safety-oriented teamwork communication competency; reflections on safety culture and clinical leadership are discussed.
Koch, Michael S; DeSesso, John M; Williams, Amy Lavin; Michalek, Suzanne; Hammond, Bruce
2016-01-01
To determine the reliability of food safety studies carried out in rodents with genetically modified (GM) crops, a Food Safety Study Reliability Tool (FSSRTool) was adapted from the European Centre for the Validation of Alternative Methods' (ECVAM) ToxRTool. Reliability was defined as the inherent quality of the study with regard to use of standardized testing methodology, full documentation of experimental procedures and results, and the plausibility of the findings. Codex guidelines for GM crop safety evaluations indicate toxicology studies are not needed when comparability of the GM crop to its conventional counterpart has been demonstrated. This guidance notwithstanding, animal feeding studies have routinely been conducted with GM crops, but their conclusions on safety are not always consistent. To accurately evaluate potential risks from GM crops, risk assessors need clearly interpretable results from reliable studies. The development of the FSSRTool, which provides the user with a means of assessing the reliability of a toxicology study to inform risk assessment, is discussed. Its application to the body of literature on GM crop food safety studies demonstrates that reliable studies report no toxicologically relevant differences between rodents fed GM crops or their non-GM comparators.
Computer Aided Method for System Safety and Reliability Assessments
2008-09-01
...program between 1998 and 2003. This tool was not marketed in the public domain after the CRV program ended. The other tool is called eXpress, and it... support Government reviewed and approved analysis methodologies which can then be shared with other government agencies and industry partners... Documented for B&R, UP&L, EPRI (30 DEC 80); GO IBM version enhanced at UCC, Dallas: descriptors, facility to alter array sizes, explanation of use (1 SEP 82)
2002-07-01
Knowledge From Data... HIGH-CONFIDENCE SOFTWARE AND SYSTEMS: Reliability, Security, and Safety for... NOAA's Cessna Citation flew over the 16-acre World Trade Center site, scanning with an Optech ALSM unit. The system recorded data points from 33,000... provide the data storage and compute power for intelligence analysis, high-performance national defense systems, and critical scientific research... Large...
Combining System Safety and Reliability to Ensure NASA CoNNeCT's Success
NASA Technical Reports Server (NTRS)
Havenhill, Maria; Fernandez, Rene; Zampino, Edward
2012-01-01
Hazard Analysis, Failure Modes and Effects Analysis (FMEA), the Limited-Life Items List (LLIL), and the Single Point Failure (SPF) List were applied by System Safety and Reliability engineers on NASA's Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT) Project. The integrated approach, involving cross reviews of these reports by System Safety, Reliability, and Design engineers, resulted in the mitigation of all identified hazards. The outcome was that the system met all of its safety requirements.
Universal first-order reliability concept applied to semistatic structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.
NASA Technical Reports Server (NTRS)
Karns, James
1993-01-01
The objective of this study was to establish the initial quantitative reliability bounds for nuclear electric propulsion systems in a manned Mars mission required to ensure crew safety and mission success. Finding the reliability bounds involves balancing top-down (mission driven) requirements and bottom-up (technology driven) capabilities. In seeking this balance we hope to accomplish the following: (1) provide design insights into the achievability of the baseline design in terms of reliability requirements, given the existing technology base; (2) suggest alternative design approaches which might enhance reliability and crew safety; and (3) indicate what technology areas require significant research and development to achieve the reliability objectives.
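One simple way to turn a top-down mission reliability requirement into bottom-up subsystem bounds, as the study describes, is equal apportionment across subsystems in series; the numbers below are purely illustrative and are not the study's values.

```python
def equal_apportionment(system_reliability_target, n_subsystems):
    """Allocate a series-system reliability target equally: each of the n
    subsystems must achieve the n-th root of the system-level target."""
    return system_reliability_target ** (1.0 / n_subsystems)

# Example: a 0.99 mission reliability target spread over 5 series subsystems
r_i = equal_apportionment(0.99, 5)
print(f"each subsystem needs R >= {r_i:.5f}")    # ~0.99800
print(f"check: {r_i ** 5:.5f}")                  # recovers ~0.99
```

In practice the allocation would be weighted by technology maturity and criticality, which is the balancing of requirements against capabilities the abstract refers to.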
Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M.; Archer, B.; Hendrickson, B.
2015-08-27
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.
Software life cycle methodologies and environments
NASA Technical Reports Server (NTRS)
Fridge, Ernest
1991-01-01
Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology in two areas. Environments include the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, a framework programmable platform for defining process and software development workflow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada languages for developing expert systems. Methodologies include a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert system problems when only approximate truths are known.
Long-term real-time structural health monitoring using wireless smart sensor
NASA Astrophysics Data System (ADS)
Jang, Shinae; Mensah-Bonsu, Priscilla O.; Li, Jingcheng; Dahal, Sushil
2013-04-01
Improving the safety and security of civil infrastructure has been a critical issue for decades, since infrastructure plays a central role in the economics and politics of a modern society. Structural health monitoring of civil infrastructure using wireless smart sensor networks has recently emerged as a promising solution to increase structural reliability, enhance inspection quality, and reduce maintenance costs. Though hardware and software frameworks are well established for wireless smart sensors, a long-term real-time health monitoring strategy has not been available due to the lack of a systematic interface. In this paper, the Imote2 smart sensor platform is employed, and a graphical user interface for long-term real-time structural health monitoring has been developed in Matlab for the Imote2 platform. This computer-aided engineering platform enables control and visualization of measured data as well as a safety alarm feature based on fluctuation of modal properties. A new decision-making strategy for safety checking is also developed and integrated into this software. Laboratory validation of the computer-aided engineering platform for the Imote2 on a truss bridge and a building structure has shown the potential of the interface for long-term real-time structural health monitoring.
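A hedged sketch of the alarm idea (the baseline frequencies and threshold below are invented and do not reproduce the paper's decision rule): flag the structure when an identified natural frequency drifts from its healthy baseline by more than a set fraction.

```python
def modal_safety_alarm(baseline_hz, measured_hz, threshold=0.05):
    """Compare identified natural frequencies against a healthy baseline and
    raise an alarm for any mode whose fractional shift exceeds `threshold`.
    A drop in frequency is a common indicator of stiffness loss or damage."""
    alarms = []
    for mode, (f0, f) in enumerate(zip(baseline_hz, measured_hz), start=1):
        shift = abs(f - f0) / f0
        if shift > threshold:
            alarms.append((mode, f0, f, round(shift, 3)))
    return alarms

baseline = [2.15, 6.42, 11.80]          # Hz, identified on the healthy structure
measured = [2.13, 6.05, 11.78]          # Hz, from the latest monitoring window
print(modal_safety_alarm(baseline, measured))   # mode 2 shifted ~5.8% -> alarm
```

In a real deployment the threshold would be tuned against environmental variation (temperature, traffic) so that the alarm does not fire on benign fluctuations.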
The assessment of exploitation process of power for access control system
NASA Astrophysics Data System (ADS)
Wiśnios, Michał; Paś, Jacek
2017-10-01
The safety of public utility facilities is a function not only of the effectiveness of the electronic safety systems used for the protection of property and persons, but also of the proper functioning of their power supply systems. The authors analysed the power supply systems used in buildings for an access control system integrated with closed-circuit TV. The access control system is a set of electronic, electromechanical and electrical devices, together with the computer software controlling their operation, aimed at identifying people and vehicles allowed to cross the boundary of the reserved area, preventing unauthorised crossing of the reserved area, and generating an alarm signal on any attempted crossing by an unauthorised entity. Industrial electricity with appropriate technical parameters is the basis of proper functioning of safety systems, but supplying electricity to the systems alone does not guarantee continuity of operation. In practice, redundant power supply systems are used. In the reliability analysis of the power supply system carried out here, the various power circuits of the system were taken into account, and the reliability and operational requirements for this type of system were also included.
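The reliability gain from the redundant supply described above can be sketched with the standard parallel-redundancy formula; the unavailability figures are assumed for illustration and assume independent failures of the two feeds.

```python
def parallel_redundancy(unavailability_main, unavailability_backup):
    """Unavailability of a redundant power supply in which either the mains
    feed or the backup feed (e.g., UPS/battery) can keep the access control
    system running; both must fail for the load to lose power."""
    return unavailability_main * unavailability_backup

q_main, q_backup = 1e-3, 5e-3            # assumed per-demand unavailabilities
print(f"single feed:     {q_main:.1e}")
print(f"redundant feeds: {parallel_redundancy(q_main, q_backup):.1e}")  # 5.0e-06
```

Common-cause failures (a shared distribution board, for example) would break the independence assumption and must be treated separately in a full analysis.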
ASME V&V challenge problem: Surrogate-based V&V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beghini, Lauren L.; Hough, Patricia D.
2015-12-18
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
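A minimal illustration of the surrogate idea: fit a cheap response surface to a handful of expensive simulation runs, then do uncertainty propagation on the surrogate. The "expensive model", input distribution, and sample counts below are placeholders, not the challenge-problem quantities, and a plain least-squares polynomial stands in for the polynomial chaos and Gaussian process surrogates named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for a long-running simulation of one uncertain input x."""
    return np.sin(3.0 * x) + 0.5 * x ** 2

# 1) Run the expensive model at a handful of training points
x_train = np.linspace(-1.0, 1.0, 9)
y_train = expensive_model(x_train)

# 2) Fit a cheap polynomial surrogate (degree-4 least squares)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

# 3) Propagate input uncertainty through the surrogate by Monte Carlo
x_samples = rng.normal(loc=0.0, scale=0.3, size=100_000)
y_samples = surrogate(x_samples)
print(f"mean = {y_samples.mean():.4f}, std = {y_samples.std():.4f}")
print(f"P(y > 1.0) = {(y_samples > 1.0).mean():.4f}")   # example reliability query
```

The 100,000 surrogate evaluations cost almost nothing compared with 100,000 runs of the true model, which is exactly how the tradeoff balance is tipped toward accuracy.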
Control System Upgrade for a Mass Property Measurement Facility
NASA Technical Reports Server (NTRS)
Chambers, William; Hinkle, R. Kenneth (Technical Monitor)
2002-01-01
The Mass Property Measurement Facility (MPMF) at the Goddard Space Flight Center has undergone modifications to ensure the safety of flight payloads and the measurement facility. The MPMF has been technically updated to improve reliability and increase the accuracy of the measurements. Modifications include the replacement of outdated electronics with a computer-based software control system, the addition of a secondary gas supply in case of a catastrophic failure of the primary gas supply, and a motor-controlled emergency stopping feature instead of a hard stop.
The evolution of automated launch processing
NASA Technical Reports Server (NTRS)
Tomayko, James E.
1988-01-01
The NASA Launch Processing System (LPS) discussed here has arrived at satisfactory solutions for the distributed-computing, user-interface, dissimilar-hardware-interface, and automation problems that emerge in the specific arena of spacecraft launch preparations. An aggressive effort was made to apply the lessons learned in the 1960s, during the first attempts at automatic launch vehicle checkout, to the LPS. As the Space Shuttle system continues to evolve, the primary contributor to safety and reliability will be the LPS.
NASA Astrophysics Data System (ADS)
Bouchpan-Lerust-Juéry, L.
2007-08-01
Current and next-generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy...) which must meet high standards of reliability and predictability as well as safety. Meeting these requirements demands a considerable amount of effort and cost from the space software industry. This paper first presents a free open-source integrated solution for developing RTAI applications, from analysis, design and simulation through direct implementation using code generation based on open-source tools; it then summarises the suggested approach, its results, and the conclusions for further work.
Assurance of reliability and safety in liquid hydrocarbons marine transportation and storing
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Polyakov, S. L.; Shunmin, Li
2017-10-01
The problems of assuring safety and reliability in the marine transportation and storage of liquid hydrocarbons are described. The requirements of standard IEC 61511 have to be fulfilled for the tanker loading/unloading system under dynamic loads on the pipeline system. Safety zones for "fireball"-type fires and for spillage have to be determined for liquid hydrocarbon storage. An example of the necessary safety level achieved by the duplicated loading system, the conditions for reliable pipeline operation under dynamic loads, and the principles of the method for determining storage safety zones under possible accident conditions are presented.
Reliability of digital reactor protection system based on extenics.
Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng
2016-01-01
After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) became a widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation carry uncertainties and cannot reflect the reliability status of the RPS dynamically or support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital, safety-critical RPS, by which the relationship between the reliability and response time of the RPS is constructed. The reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method as an example. The results show that the proposed method is capable of estimating the RPS reliability effectively and provides support for maintenance and troubleshooting of the digital RPS.
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
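As a sketch of the probability-of-non-compliance idea (the limit state is available minus demanded sight distance), the snippet below estimates P(nc) by Monte Carlo; the distributions and parameter values are invented for illustration, and the paper itself uses FORM rather than plain sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Assumed distributions for a horizontal curve (illustrative only)
available_ssd = rng.normal(loc=180.0, scale=12.0, size=n)        # m, supply
speed = rng.normal(loc=27.0, scale=2.5, size=n)                  # m/s operating speed
reaction_t = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=n)  # s, perception-reaction time
decel = rng.normal(loc=3.4, scale=0.4, size=n)                   # m/s^2, braking deceleration

demand_ssd = speed * reaction_t + speed ** 2 / (2.0 * decel)     # stopping sight distance demand
g = available_ssd - demand_ssd                                   # limit state: g < 0 => non-compliance
p_nc = (g < 0.0).mean()
print(f"P(nc) = {p_nc:.4f}")   # the risk measure that enters the SPF as a covariate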
Lin, Shu-Yuan; Tseng, Wei Ting; Hsu, Miao-Ju; Chiang, Hui-Ying; Tseng, Hui-Chen
2017-12-01
To test the psychometric properties of the Chinese version of the Nursing Home Survey on Patient Safety Culture scale among staff in long-term care facilities. The Nursing Home Survey on Patient Safety Culture scale is a standard tool for safety culture assessment in nursing homes; extending its application to different types of long-term care facilities and varied ethnic populations is worth pursuing. A national random survey was conducted. A total of 306 managers and staff completed the Chinese version of the Nursing Home Survey on Patient Safety Culture scale in 30 long-term care facilities in Taiwan. Content validity and construct validity were tested by the content validity index (CVI) and principal axis factor analysis (PAF) with Promax rotation. Concurrent validity was tested through correlations between the scale and two overall rating items. Reliability was computed by the intraclass correlation coefficient and Cronbach's α coefficients. Descriptive statistics, Pearson's and Spearman's rho correlations, and PAF analyses were completed. Scale-level and item-level CVIs (0.91-0.98) of the Chinese version of the scale were satisfactory. The four-factor structure and merged item composition differed from the original Nursing Home Survey on Patient Safety Culture scale and accounted for 53% of the variance. Concurrent validity was evidenced by positive correlations between the scale and the two overall ratings of resident safety. Cronbach's α coefficients of the subscales and the Chinese version of the scale ranged from .76 to .94. The Chinese version of the Nursing Home Survey on Patient Safety Culture scale identified essential dimensions reflecting the important features of a patient safety culture in long-term care facilities. The researchers introduced the Chinese version of the scale for safety culture assessment in long-term care facilities, but further testing of its reliability in a large Chinese sample and in different long-term care facilities is recommended. The Chinese version of the scale was developed to increase users' intention towards safety culture assessment; it can identify areas for improvement, track safety culture changes over time, and evaluate the effectiveness of interventions. © 2017 John Wiley & Sons Ltd.
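For reference, Cronbach's alpha, one of the reliability coefficients reported above, can be computed directly from the item-response matrix. The tiny data set below is fabricated purely to show the formula and has no connection to the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Fabricated 5-respondent, 4-item example (Likert-type scores)
scores = [[4, 5, 4, 4],
          [3, 3, 3, 4],
          [5, 5, 4, 5],
          [2, 2, 3, 2],
          [4, 4, 4, 4]]
print(f"alpha = {cronbach_alpha(scores):.3f}")   # ~0.94 for this toy data
```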
Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.
Adaptations of advanced safety and reliability techniques to petroleum and other industries
NASA Technical Reports Server (NTRS)
Purser, P. E.
1974-01-01
The underlying philosophy of the general approach to failure reduction and control is presented. Safety and reliability management techniques developed in the industries which have participated in the U.S. space and defense programs are described along with adaptations to nonaerospace activities. The examples given illustrate the scope of applicability of these techniques. It is indicated that any activity treated as a 'system' is a potential user of aerospace safety and reliability management techniques.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... including cybersecurity best practices, media security and reliability best practices, transition to Next... Cybersecurity and Communications Reliability Public Safety and Homeland Security Bureau, Federal Communications... Chief for Cybersecurity and Communications Reliability. [FR Doc. 2011-4211 Filed 2-23-11; 8:45 am...
Magrabi, Farah; Ong, Mei-Sing; Runciman, William; Coiera, Enrico
2010-01-01
To analyze patient safety incidents associated with computer use and to develop the basis for a classification of problems reported by health professionals. Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved, and a subset (25%) was analyzed to identify 'natural categories' for classification. Two coders independently classified the remaining incidents into one or more categories. Free-text descriptions were analyzed to identify contributing factors. Where available, medical specialty, time of day, and consequences were examined. Descriptive statistics and inter-rater reliability were used for analysis. A search of 42,616 incidents from 2003 to 2005 yielded 123 computer-related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine related and 45% were attributed to human-computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine-related problems (70%), whereas rework was a major consequence of human-computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Only 0.2% of all incidents reported were computer related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence-based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions.
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
Implementing Software Safety in the NASA Environment
NASA Technical Reports Server (NTRS)
Wetherholt, Martha S.; Radley, Charles F.
1994-01-01
Until recently, NASA did not consider allowing computers total control of flight systems. Human operators, via hardware, have constituted the ultimate safety control. In an attempt to reduce costs, NASA has come to rely more and more heavily on computers and software to control space missions. (For example, software is now planned to control most of the operational functions of the International Space Station.) Thus the need for systematic software safety programs has become crucial for mission success. Concurrent engineering principles dictate that safety should be designed into software up front, not tested into the software after the fact. 'Cost of Quality' studies have statistics and metrics to prove the value of building quality and safety into the development cycle. Unfortunately, most software engineers are not familiar with designing for safety, and most safety engineers are not software experts. Software written to specifications which have not been safety analyzed is a major source of computer related accidents. Safer software is achieved step by step throughout the system and software life cycle. It is a process that includes requirements definition, hazard analyses, formal software inspections, safety analyses, testing, and maintenance. The greatest emphasis is placed on clearly and completely defining system and software requirements, including safety and reliability requirements. Unfortunately, development and review of requirements are the weakest link in the process. While some of the more academic methods, e.g. mathematical models, may help bring about safer software, this paper proposes the use of currently approved software methodologies, and sound software and assurance practices to show how, to a large degree, safety can be designed into software from the start. NASA's approach today is to first conduct a preliminary system hazard analysis (PHA) during the concept and planning phase of a project. This determines the overall hazard potential of the system to be built. Shortly thereafter, as the system requirements are being defined, the second iteration of hazard analyses takes place, the systems hazard analysis (SHA). During the systems requirements phase, decisions are made as to what functions of the system will be the responsibility of software. This is the most critical time to affect the safety of the software. From this point, software safety analyses as well as software engineering practices are the main focus for assuring safe software. While many of the steps proposed in this paper seem like just sound engineering practices, they are the best technical and most cost effective means to assure safe software within a safe system.
NASA Technical Reports Server (NTRS)
Miller, James; Leggett, Jay; Kramer-White, Julie
2008-01-01
A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best-practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations and does not duplicate or update any existing references. Neither does it intend to replace existing standards and policy.
Computer calculation of device, circuit, equipment, and system reliability.
NASA Technical Reports Server (NTRS)
Crosby, D. R.
1972-01-01
A grouping into four classes is proposed for all reliability computations that are related to electronic equipment. Examples are presented of reliability computations in three of these four classes. Each of the three specific reliability tasks described was originally undertaken to satisfy an engineering need for reliability data. The form and interpretation of the print-out of the specific reliability computations are presented. The justification for the costs of these computations is indicated. The skills of the personnel used to conduct the analysis, the interfaces between the personnel, and the timing of the projects are discussed.
Patient safety in anesthesia: learning from the culture of high-reliability organizations.
Wright, Suzanne M
2015-03-01
There has been an increased awareness of and interest in patient safety and improved outcomes, as well as a growing body of evidence substantiating medical error as a leading cause of death and injury in the United States. According to The Joint Commission, US hospitals demonstrate improvements in health care quality and patient safety. Although this progress is encouraging, much room for improvement remains. High-reliability organizations, industries that deliver reliable performances in the face of complex working environments, can serve as models of safety for our health care system until plausible explanations for patient harm are better understood. Copyright © 2015 Elsevier Inc. All rights reserved.
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as shown in the REBST case study. In the end, it is concluded that an approach combining the two theories works best to reduce safety risk.
Theories of risk and safety: what is their relevance to nursing?
Cooke, Hannah
2009-03-01
The aim of this paper is to review key theories of risk and safety and their implications for nursing. The concept of patient safety has only recently risen to prominence as an organising principle in healthcare. The paper considers the wider social context in which contemporary concepts of risk and safety have developed. In particular, it looks at sociological debates about the rise of risk culture and the risk society and their influence on the patient safety movement. The paper discusses three bodies of theory which have attempted to explain the management of risk and safety in organisations: normal accident theory, high reliability theory, and grid-group cultural theory. It examines debates between these theories and their implications for healthcare. It discusses reasons for the dominance of high reliability theory in healthcare and its strengths and limitations. The paper suggests that high reliability theory has particular difficulties in explaining some aspects of organisational culture. It also suggests that the implementation of high reliability theory in healthcare has involved over-reliance on numerical indicators. It suggests that patient safety could be improved by openness to a wider range of theoretical perspectives.
Structural Deterministic Safety Factors Selection Criteria and Verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
A study of software standards used in the avionics industry
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1994-01-01
Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.
Tomita, Machiko R; Saharan, Sumandeep; Rajendran, Sheela; Nochajski, Susan M; Schweitzer, Jo A
2014-01-01
OBJECTIVE. To identify psychometric properties of the Home Safety Self-Assessment Tool (HSSAT) to prevent falls in community-dwelling older adults. METHOD. We tested content validity, test-retest reliability, interrater reliability, construct validity, convergent and discriminant validity, and responsiveness to change. RESULTS. The content validity index was .98, the intraclass correlation coefficient for test-retest reliability was .97, and the interrater reliability was .89. The difference on identified risk factors between the use and nonuse of the HSSAT was significant (p = .005). Convergent validity with the Centers for Disease Control and Prevention Home Safety Checklist was high (r = .65), and discriminant validity with fear of falling was very low (r = .10). The responsiveness to change was moderate (standardized response mean = 0.57). CONCLUSION. The HSSAT is a reliable and valid instrument to identify fall risks in a home environment, and the HSSAT booklet is effective as educational material leading to improvement in home safety. Copyright © 2014 by the American Occupational Therapy Association, Inc.
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifing (life-prediction) methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
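To give a feel for the kind of probability-of-failure computation such a code performs, the following is a minimal sketch (not NESSUS itself): a hypothetical load/resistance limit state is evaluated by plain Monte Carlo and by simple importance sampling centered near an assumed failure region. The distributions, limit state, and sampling center are illustrative assumptions only.

```python
# Sketch of probability-of-failure estimation for an assumed limit state g(X) <= 0.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical random inputs: load S ~ N(100, 15), resistance R ~ N(150, 10)
mu = np.array([100.0, 150.0])
sigma = np.array([15.0, 10.0])

def g(x):
    """Limit state: failure when resistance minus load is negative."""
    s, r = x[..., 0], x[..., 1]
    return r - s

# Plain Monte Carlo
n = 200_000
x = rng.normal(mu, sigma, size=(n, 2))
pf_mc = np.mean(g(x) <= 0.0)

# Simple importance sampling: sample around an assumed design point where
# load and resistance are both near 125 (roughly where g = 0).
mu_is = np.array([125.0, 125.0])
x_is = rng.normal(mu_is, sigma, size=(n, 2))
# Likelihood ratio of the true density to the sampling density (independent normals)
log_w = (-0.5 * ((x_is - mu) / sigma) ** 2 + 0.5 * ((x_is - mu_is) / sigma) ** 2).sum(axis=1)
w = np.exp(log_w)
pf_is = np.mean(w * (g(x_is) <= 0.0))

print(f"plain MC            Pf ~ {pf_mc:.2e}")
print(f"importance sampling Pf ~ {pf_is:.2e}")
```

For rare failures, the importance-sampling estimate converges with far fewer limit-state evaluations than plain sampling, which is the motivation for the adaptive importance sampling option mentioned in the abstract.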
NASA Technical Reports Server (NTRS)
Vesely, William E.; Colon, Alfredo E.
2010-01-01
Design Safety/Reliability is associated with the probability of no failure-causing faults existing in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failure. Reliability-Growth testing requirements are based on initial assurance and fault detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-Growth testing requirements are based on reliability principles and factors and should be used.
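The "binomial tables" comparison above refers to the classical zero-failure (success-run) demonstration calculation. A minimal sketch of that calculation, with illustrative reliability and confidence targets, is shown below; it does not reproduce the reliability-growth method the abstract favors.

```python
# Classical zero-failure test-count calculation: smallest n such that
# reliability**n <= 1 - confidence, i.e., n >= ln(1 - C) / ln(R).
import math

def zero_failure_tests(reliability: float, confidence: float) -> int:
    """Number of failure-free tests needed to demonstrate the stated
    reliability at the stated confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for r in (0.90, 0.99, 0.999):
    for c in (0.50, 0.90, 0.95):
        print(f"R={r:.3f}, C={c:.2f} -> {zero_failure_tests(r, c)} failure-free tests")
```

For example, demonstrating R = 0.99 at 90% confidence requires 230 failure-free tests, which illustrates why the abstract finds the binomial approach demands far more testing than reliability-growth-based requirements.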
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
A new computational method for reacting hypersonic flows
NASA Astrophysics Data System (ADS)
Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.
2017-07-01
Hypersonic gas dynamics computations are challenging due to the difficulty of having reliable and robust chemistry models, which are usually added to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations together with the chemistry model equations because these partial differential equations have different characteristic time scales. For these reasons, almost all known finite volume methods quickly fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is very closely linked to endothermic chemical reactions. A better prediction of wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate prediction of hypersonic heating in order to support Earth reentry capsule design.
Oubaid, V; Anheuser, P
2014-05-01
Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantees a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate its possible transferability to medicine and urology.
ASC FY17 Implementation Plan, Rev. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, P. G.
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.
Impacts: NIST Building and Fire Research Laboratory (technical and societal)
NASA Astrophysics Data System (ADS)
Raufaste, N. J.
1993-08-01
The Building and Fire Research Laboratory (BFRL) of the National Institute of Standards and Technology (NIST) is dedicated to the life cycle quality of constructed facilities. The report describes major effects of BFRL's program on building and fire research. Contents of the document include: structural reliability; nondestructive testing of concrete; structural failure investigations; seismic design and construction standards; rehabilitation codes and standards; alternative refrigerants research; HVAC simulation models; thermal insulation; residential equipment energy efficiency; residential plumbing standards; computer image evaluation of building materials; corrosion-protection for reinforcing steel; prediction of the service lives of building materials; quality of construction materials laboratory testing; roofing standards; simulating fires with computers; fire safety evaluation system; fire investigations; soot formation and evolution; cone calorimeter development; smoke detector standards; standard for the flammability of children's sleepwear; smoldering insulation fires; wood heating safety research; in-place testing of concrete; communication protocols for building automation and control systems; computer simulation of the properties of concrete and other porous materials; cigarette-induced furniture fires; carbon monoxide formation in enclosure fires; halon alternative fire extinguishing agents; turbulent mixing research; materials fire research; furniture flammability testing; standard for the cigarette ignition resistance of mattresses; support of navy firefighter trainer program; and using fire to clean up oil spills.
Zhu, Junya; Li, Liping; Zhao, Hailei; Han, Guangshu; Wu, Albert W; Weingart, Saul N
2014-10-01
Existing patient safety climate instruments, most of which have been developed in the USA, may not accurately reflect the conditions in the healthcare systems of other countries. To develop and evaluate a patient safety climate instrument for healthcare workers in Chinese hospitals. Based on a review of existing instruments, expert panel review, focus groups and cognitive interviews, we developed items relevant to patient safety climate in Chinese hospitals. The draft instrument was distributed to 1700 hospital workers from 54 units in six hospitals in five Chinese cities between July and October 2011, and 1464 completed surveys were received. We performed exploratory and confirmatory factor analyses and estimated internal consistency reliability, within-unit agreement, between-unit variation, unit-mean reliability, correlation between multi-item composites, and association between the composites and two single items of perceived safety. The final instrument included 34 items organised into nine composites: institutional commitment to safety, unit management support for safety, organisational learning, safety system, adequacy of safety arrangements, error reporting, communication and peer support, teamwork and staffing. All composites had acceptable unit-mean reliabilities (≥0.74) and within-unit agreement (Rwg ≥0.71), and exhibited significant between-unit variation with intraclass correlation coefficients ranging from 9% to 21%. Internal consistency reliabilities ranged from 0.59 to 0.88 and were ≥0.70 for eight of the nine composites. Correlations between composites ranged from 0.27 to 0.73. All composites were positively and significantly associated with the two perceived safety items. The Chinese Hospital Survey on Patient Safety Climate demonstrates adequate dimensionality, reliability and validity. The integration of qualitative and quantitative methods is essential to produce an instrument that is culturally appropriate for Chinese hospitals. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
2011-05-01
As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
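The screening thresholds stated above are simple enough to encode directly. The sketch below is a literal transcription of those rules; the function and argument names are invented, and the example numbers are illustrative.

```python
# Screening rules from the abstract: decide whether observed common degradation
# warrants detailed common-cause evaluation.
def degradation_of_concern(n_susceptible: int, population: int,
                           required_reliability: float, level: str) -> bool:
    """Return True if the observed common degradation is of concern at the given level."""
    fraction = n_susceptible / population
    if level == "reliability":
        # Of concern if more than 100*(1 - x)% of items are susceptible,
        # where x is the required reliability expressed as a fraction.
        return fraction > (1.0 - required_reliability)
    if level == "safety_subsystem":
        # Of concern if more than 0.1% of the population is susceptible.
        return fraction > 0.001
    if level == "safety_component_or_system":
        # Of concern if two or more items are susceptible.
        return n_susceptible >= 2
    raise ValueError(f"unknown level: {level}")

# Example: 12 of 1000 items susceptible, required reliability 0.995
print(degradation_of_concern(12, 1000, 0.995, "reliability"))       # True (1.2% > 0.5%)
print(degradation_of_concern(12, 1000, 0.995, "safety_subsystem"))  # True (1.2% > 0.1%)
```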
NASA Technical Reports Server (NTRS)
1982-01-01
Shuttle's propellant measurement system is produced by Simmonds Precision. Company has extensive experience in fuel management systems and other equipment for military and commercial aircraft. A separate corporate entity, Industrial Controls Division was formed due to a number of non-aerospace spinoffs. One example is a "custody transfer" system for measuring and monitoring liquefied natural gas (LNG). LNG is transported aboard large tankers at minus 260 degrees Fahrenheit. Value of a single shipload may reach $15 million. Precision's LNG measurement and monitoring system aids accurate financial accounting and enhances crew safety. Custody transfer systems have been provided for 10 LNG tankers, built by Owing Shipbuilding. Simmonds also provided measurement systems for several liquefied petroleum gas (LPG) production and storage installations. Another spinoff developed by Simmonds Precision is an advanced ignition system for industrial boilers that offers savings of millions of gallons of fuel, and a computer based monitoring and control system for improving safety and reliability in electrical utility applications. Simmonds produces a line of safety systems for nuclear and non-nuclear electrical power plants.
Design for Reliability and Safety Approach for the New NASA Launch Vehicle
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Weldon, Danny M.
2007-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended to send crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for NASA's new launch vehicles, the ARES I and ARES V. Specifically, the paper addresses the use of an integrated probabilistic functional analysis to support the design analysis cycle and a probabilistic risk assessment (PRA) to support the preliminary design and beyond.
Advancing Usability Evaluation through Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
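A hedged sketch of the kind of SPAR-H-style arithmetic the abstract alludes to is shown below: a nominal error probability is scaled by multipliers assigned to violated heuristics, treated as performance shaping factors. The multiplier values and nominal probability are invented for illustration and are not taken from the paper.

```python
# SPAR-H-style usability error probability: nominal HEP scaled by PSF multipliers,
# with the standard SPAR-H adjustment so the result stays a valid probability.
def usability_error_probability(nominal_hep: float, psf_multipliers: list[float]) -> float:
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    hep = nominal_hep * composite
    if composite > 1.0:
        # Adjustment used in SPAR-H when the composite multiplier is large.
        hep = (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
    return min(hep, 1.0)

# Example: nominal HEP of 0.001, with two violated heuristics rated as x10 and x2
print(usability_error_probability(0.001, [10.0, 2.0]))  # ~0.0196
```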
A System for Integrated Reliability and Safety Analyses
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles
1999-01-01
We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadwallader, L.C.
1997-03-01
This report presents safety information about powered industrial trucks. The basic lift truck, the counterbalanced sit down rider truck, is the primary focus of the report. Lift truck engineering is briefly described, then a hazard analysis is performed on the lift truck. Case histories and accident statistics are also given. Rules and regulations about lift trucks, such as the US Occupational Safety and Health Administration laws and the Underwriters Laboratories standards, are discussed. Safety issues with lift trucks are reviewed, and lift truck safety and reliability are discussed. Some quantitative reliability values are given.
Infusing Reliability Techniques into Software Safety Analysis
NASA Technical Reports Server (NTRS)
Shi, Ying
2015-01-01
Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.
High-Reliability Health Care: Getting There from Here
Chassin, Mark R; Loeb, Jerod M
2013-01-01
Context Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. Methods We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals’ readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. Findings We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Conclusions Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research and practical experience will be necessary to determine the validity and effectiveness of this framework for high-reliability health care. PMID:24028696
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values are identified. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
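The following is an illustrative sketch of the stress-strength computation and its sensitivity to the assumed distribution. The means and standard deviations are invented; the point, consistent with the abstract, is that two distributions matched to the same moments can produce visibly different values in the high-reliability region.

```python
# Stress-strength reliability R = P(strength > stress), assuming numpy/scipy are available.
import numpy as np
from scipy import stats

mu_s, sd_s = 300.0, 30.0   # stress (assumed units)
mu_r, sd_r = 450.0, 30.0   # strength

# Closed form when both are normal
beta = (mu_r - mu_s) / np.hypot(sd_r, sd_s)
r_normal = stats.norm.cdf(beta)

# Same first two moments, but lognormal stress and strength, estimated by Monte Carlo
def lognorm_params(mu, sd):
    sigma2 = np.log(1.0 + (sd / mu) ** 2)
    return np.sqrt(sigma2), np.log(mu) - 0.5 * sigma2  # (shape, log-scale)

rng = np.random.default_rng(0)
n = 2_000_000
s_shape, s_logmu = lognorm_params(mu_s, sd_s)
r_shape, r_logmu = lognorm_params(mu_r, sd_r)
stress = rng.lognormal(s_logmu, s_shape, n)
strength = rng.lognormal(r_logmu, r_shape, n)
r_lognormal = np.mean(strength > stress)

print(f"normal/normal       R = {r_normal:.6f}")
print(f"lognormal/lognormal R ~ {r_lognormal:.6f}")
```

Even though both cases share the same means and standard deviations, the computed reliabilities differ in the tail, which is exactly the sensitivity the paper warns about.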
Software Design Improvements. Part 1; Software Benefits and Limitations
NASA Technical Reports Server (NTRS)
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
1997-01-01
Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines, and the number and type of applications are growing at an exponential rate. The size of individual programs has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The question is: how can this hardware and software be made more reliable? Also, how can software quality be improved? What methodology needs to be applied to large and small software products to improve the design, and how can software be verified?
Using supercomputers for the time history analysis of old gravity dams
NASA Astrophysics Data System (ADS)
Rouve, G.; Peters, A.
Some of the old masonry dams that were built in Germany at the beginning of this century are a matter of concern today. In the course of time, certain deterioration caused or amplified by aging has appeared and raised questions about the safety of these old dams. The Finite Element Method, which in the past two decades has found widespread application, offers a suitable tool to re-evaluate the safety of these old gravity dams. The reliability of the results, however, strongly depends on the knowledge of the material parameters. Using historical records and observations, a numerical back-analysis model has been developed to simulate the behaviour of these old masonry structures and to estimate their material properties by calibration. Only an implementation on a fourth generation vector computer made the application of this large model possible in practice.
Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio
2016-01-01
In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080
NASA Astrophysics Data System (ADS)
Schoitsch, Erwin
1988-07-01
Our society is depending more and more on the reliability of embedded (real-time) computer systems, even in everyday life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their potential for quality and safety.
Reliability and Maintainability Engineering - A Major Driver for Safety and Affordability
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
2011-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of an effort to design and build a safe and affordable heavy lift vehicle to go to the moon and beyond. To achieve that, NASA is seeking more innovative and efficient approaches to reduce cost while maintaining an acceptable level of safety and mission success. One area that has the potential to contribute significantly to achieving NASA safety and affordability goals is Reliability and Maintainability (R&M) engineering. Inadequate reliability or failure of critical safety items may directly jeopardize the safety of the user(s) and result in a loss of life. Inadequate reliability of equipment may directly jeopardize mission success. Systems designed to be more reliable (fewer failures) and maintainable (fewer resources needed) can lower the total life cycle cost. The Department of Defense (DOD) and industry experience has shown that optimized and adequate levels of R&M are critical for achieving a high level of safety and mission success, and low sustainment cost. Also, lessons learned from the Space Shuttle program clearly demonstrated the importance of R&M engineering in designing and operating safe and affordable launch systems. The Challenger and Columbia accidents are examples of the severe impact of design unreliability and process induced failures on system safety and mission success. These accidents demonstrated the criticality of reliability engineering in understanding component failure mechanisms and integrated system failures across the system elements interfaces. Experience from the shuttle program also shows that insufficient Reliability, Maintainability, and Supportability (RMS) engineering analyses upfront in the design phase can significantly increase the sustainment cost and, thereby, the total life cycle cost. Emphasis on RMS during the design phase is critical for identifying the design features and characteristics needed for time efficient processing, improved operational availability, and optimized maintenance and logistic support infrastructure. This paper discusses the role of R&M in a program acquisition phase and the potential impact of R&M on safety, mission success, operational availability, and affordability. This includes discussion of the R&M elements that need to be addressed and the R&M analyses that need to be performed in order to support a safe and affordable system design. The paper also provides some lessons learned from the Space Shuttle program on the impact of R&M on safety and affordability.
On Some Methods in Safety Evaluation in Geotechnics
NASA Astrophysics Data System (ADS)
Puła, Wojciech; Zaskórski, Łukasz
2015-06-01
The paper demonstrates how reliability methods can be utilised to evaluate safety in geotechnics. Special attention is paid to so-called reliability-based design, which can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated. The first one is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements. The second one analyses a rigid pile subjected to lateral load and is oriented towards the working stress design method. In the second part, applications of random fields to safety evaluations in geotechnics are addressed. After a short review of the theory, a Random Finite Element algorithm for reliability-based design of a shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.
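For readers unfamiliar with the first-order reliability method mentioned above, the sketch below shows a compact Hasofer-Lind/Rackwitz-Fiessler iteration for independent normal variables. The limit state g = R - S*L is invented for illustration and is not the bearing-capacity or pile model analysed in the paper.

```python
# First-order reliability method (FORM) via the HL-RF iteration, assuming independent normals.
import numpy as np
from scipy import stats

mu = np.array([500.0, 10.0, 20.0])    # R (resistance), S (unit load), L (length) - illustrative
sigma = np.array([50.0, 1.5, 2.0])

def g(x):
    r, s, l = x
    return r - s * l

def grad_g(x):
    r, s, l = x
    return np.array([1.0, -l, -s])

u = np.zeros(3)                        # start at the mean in standard normal space
for _ in range(50):
    x = mu + sigma * u                 # map back to physical space
    grad_u = grad_g(x) * sigma         # chain rule: dg/du = dg/dx * sigma
    u_new = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)               # Hasofer-Lind reliability index
print(f"beta ~ {beta:.3f}, Pf ~ {stats.norm.cdf(-beta):.2e}")
```

In a code-compliance context such as the one discussed, the computed beta would be compared against a target reliability index rather than against partial safety factors.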
A forward view on reliable computers for flight control
NASA Technical Reports Server (NTRS)
Goldberg, J.; Wensley, J. H.
1976-01-01
The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.
Columbus safety and reliability
NASA Astrophysics Data System (ADS)
Longhurst, F.; Wessels, H.
1988-10-01
Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This consecutive study was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work as both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework based on HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of errors categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
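Since the paper's reliability testing rests on inter-rater agreement, a minimal sketch of the Cohen's kappa statistic it reports is given below. The two coders' category labels are invented for illustration; they are not the audit data.

```python
# Cohen's kappa for two raters assigning categorical labels to the same items.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    expected = sum(count_a[c] * count_b[c] for c in set(labels_a) | set(labels_b)) / n ** 2
    return (observed - expected) / (1.0 - expected)

coder1 = ["skill", "rule", "skill", "knowledge", "rule", "skill", "rule", "knowledge"]
coder2 = ["skill", "rule", "rule", "knowledge", "rule", "skill", "skill", "knowledge"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")  # ~0.62, inside the 0.5-0.8 band cited
```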
Evaluating the safety risk of roadside features for rural two-lane roads using reliability analysis.
Jalayer, Mohammad; Zhou, Huaguo
2016-08-01
The severity of roadway departure crashes mainly depends on the roadside features, including the sideslope, fixed-object density, offset from fixed objects, and shoulder width. Common engineering countermeasures to improve roadside safety include: cross section improvements, hazard removal or modification, and delineation. It is not always feasible to maintain an object-free and smooth roadside clear zone as recommended in design guidelines. Currently, clear zone width and sideslope are used to determine roadside hazard ratings (RHRs) to quantify the roadside safety of rural two-lane roadways on a seven-point pictorial scale. Since these two variables are continuous and can be treated as random, probabilistic analysis can be applied as an alternative method to address existing uncertainties. Specifically, using reliability analysis, it is possible to quantify roadside safety levels by treating the clear zone width and sideslope as two continuous, rather than discrete, variables. The objective of this manuscript is to present a new approach for defining the reliability index for measuring roadside safety on rural two-lane roads. To evaluate the proposed approach, we gathered five years (2009-2013) of Illinois run-off-road (ROR) crash data and identified the roadside features (i.e., clear zone widths and sideslopes) of 4,500 300-ft roadway segments. Based on the obtained results, we confirm that reliability indices can serve as indicators to gauge safety levels, such that the greater the reliability index value, the lower the ROR crash rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
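A heavily hedged sketch of the kind of reliability-index calculation described is shown below: clear zone width and sideslope are treated as continuous random variables, an assumed limit state compares the available clear zone with a recovery distance that grows with sideslope, and the index is back-calculated from the failure probability. The distributions and the recovery-distance model are illustrative assumptions, not the paper's calibration.

```python
# Monte Carlo estimate of a roadside reliability index for an assumed limit state.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500_000

clear_zone = rng.normal(30.0, 8.0, n)              # available clear zone width (ft), assumed
sideslope = rng.uniform(0.10, 0.25, n)             # sideslope as rise/run, assumed range
# Assumed recovery-distance model: steeper slopes require more lateral distance.
required = 10.0 + 50.0 * sideslope + rng.normal(0.0, 2.0, n)

pf = np.mean(clear_zone < required)                # probability the clear zone is insufficient
beta = -stats.norm.ppf(pf)                         # generalized reliability index
print(f"Pf ~ {pf:.3f}, beta ~ {beta:.2f}")
```

Segments with higher beta would then be expected to show lower run-off-road crash rates, which is the relationship the paper reports empirically.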
A Bayesian approach to reliability and confidence
NASA Technical Reports Server (NTRS)
Barnes, Ron
1989-01-01
The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
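As a minimal sketch of the Bayesian reliability estimate with a uniform prior, the "worst case" prior mentioned above, the example below updates a Beta(1, 1) prior on component reliability with pass/fail test data and reports a lower credible bound. The test counts are invented for illustration.

```python
# Beta-Binomial posterior for component reliability with a uniform (worst-case) prior.
from scipy import stats

prior_a, prior_b = 1.0, 1.0          # uniform prior on reliability
successes, failures = 48, 2          # hypothetical test results

post = stats.beta(prior_a + successes, prior_b + failures)
print(f"posterior mean reliability ~ {post.mean():.3f}")
print(f"90% lower credible bound   ~ {post.ppf(0.10):.3f}")
# Folding expert opinion into a more informative Beta prior tightens these
# probability statements, as the abstract notes.
```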
Innovative safety valve selection techniques and data.
Miller, Curt; Bredemyer, Lindsey
2007-04-11
The new valve data resources and modeling tools that are available today are instrumental in verifying that safety levels are being met in both current installations and project designs. If the new ISA 84 functional safety practices are followed closely, good industry validated data used, and a user's maintenance integrity program strictly enforced, plants should feel confident that their design has been quantitatively reinforced. After 2 years of exhaustive reliability studies, there are now techniques and data available to address this safety system component deficiency. Everyone who has gone through the process of safety integrity level (SIL) verification (i.e. reliability math) will appreciate the progress made in this area. The benefits of these advancements are improved safety with lower lifecycle costs such as lower capital investment and/or longer testing intervals. This discussion will start with a review of the different valve, actuator, and solenoid/positioner combinations that can be used and their associated application restraints. Failure rate reliability studies (i.e. FMEDA) and data associated with the final combinations will then be discussed. Finally, the impact of the selections on each safety system's SIL verification will be reviewed.
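A hedged sketch of the "reliability math" behind SIL verification for a single (1oo1) final element is shown below: the average probability of failure on demand is approximated as PFDavg ~ lambda_DU * TI / 2 and compared against the IEC 61508 low-demand SIL bands. The dangerous undetected failure rate and proof-test intervals are invented, not FMEDA results from the paper.

```python
# Simplified 1oo1 PFDavg calculation and SIL banding (low-demand mode).
HOURS_PER_YEAR = 8760.0

def pfd_avg_1oo1(lambda_du_per_hour: float, proof_test_interval_years: float) -> float:
    return lambda_du_per_hour * proof_test_interval_years * HOURS_PER_YEAR / 2.0

def sil_band(pfd: float) -> str:
    if pfd < 1e-4: return "SIL 4"
    if pfd < 1e-3: return "SIL 3"
    if pfd < 1e-2: return "SIL 2"
    if pfd < 1e-1: return "SIL 1"
    return "below SIL 1"

lambda_du = 2.0e-6          # dangerous undetected failures per hour (assumed value)
for ti in (1.0, 2.0, 5.0):  # proof-test interval in years
    pfd = pfd_avg_1oo1(lambda_du, ti)
    print(f"TI = {ti:.0f} yr -> PFDavg = {pfd:.2e} ({sil_band(pfd)})")
```

The example makes the trade-off in the abstract concrete: better failure-rate data or shorter proof-test intervals lower PFDavg, while extending the test interval moves the loop toward a lower SIL.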
Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control
NASA Technical Reports Server (NTRS)
Schumann, Johann; Mbaya, Timmy; Menghoel, Ole
2011-01-01
Modern aircraft, both piloted fly-by-wire commercial aircraft as well as UAVs, more and more depend on highly complex safety critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We will focus on the approach to develop reliable and robust health models for the combined software and sensor systems.
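To make the diagnostic reasoning concrete, here is a deliberately tiny Bayes-rule sketch of the kind of inference a health-monitoring Bayesian network performs: the posterior probability that a sensor is faulty given that a residual check has fired. The probabilities are invented; the paper's on-board models involve many sensors, software health nodes, and full network inference.

```python
# Two-node diagnostic inference by direct enumeration (illustrative numbers).
p_fault = 0.01                     # prior probability the sensor is faulty
p_alarm_given_fault = 0.95         # residual check fires when the sensor is faulty
p_alarm_given_ok = 0.02            # false-alarm rate when the sensor is healthy

p_alarm = p_alarm_given_fault * p_fault + p_alarm_given_ok * (1.0 - p_fault)
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
print(f"P(sensor faulty | alarm) = {p_fault_given_alarm:.2f}")   # ~0.32
```

Even a single alarm only raises the fault probability to about a third here, which is why robust health models combine evidence from several sensors and software monitors before acting.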
Hutchinson, A; Cooper, K L; Dean, J E; McIntosh, A; Patterson, M; Stride, C B; Laurence, B E; Smith, C M
2006-10-01
To explore the factor structure, reliability, and potential usefulness of a patient safety climate questionnaire in UK health care. Four acute hospital trusts and nine primary care trusts in England. The questionnaire used was the 27-item Teamwork and Safety Climate Survey. Thirty-three healthcare staff commented on the wording and relevance. The questionnaire was then sent to 3650 staff within the 13 NHS trusts, seeking to achieve at least 600 responses as the basis for the factor analysis. 1307 questionnaires were returned (36% response). Factor analyses and reliability analyses were carried out on 897 responses from staff involved in direct patient care, to explore how consistently the questions measured the underlying constructs of safety climate and teamwork. Some questionnaire items related to multiple factors or did not relate strongly to any factor. Five items were discarded. Two teamwork factors were derived from the remaining 11 teamwork items and three safety climate factors were derived from the remaining 11 safety items. Internal consistency reliabilities were satisfactory to good (Cronbach's alpha ≥ 0.69 for all five factors). This is one of the few studies to undertake a detailed evaluation of a patient safety climate questionnaire in UK health care and possibly the first to do so in primary as well as secondary care. The results indicate that a 22-item version of this safety climate questionnaire is usable as a research instrument in both settings, but also demonstrates a more general need for thorough validation of safety climate questionnaires before widespread usage.
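The internal-consistency statistic reported above (Cronbach's alpha) is straightforward to compute from an item-response matrix. The sketch below uses invented 5-point responses, not the survey data.

```python
# Cronbach's alpha for a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
    [2, 3, 2, 3], [4, 4, 5, 4], [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high for these contrived, highly correlated items
```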
Evaluation of power system security and development of transmission pricing method
NASA Astrophysics Data System (ADS)
Kim, Hyungchul
The electric power utility industry is presently undergoing a change towards the deregulated environment. This has resulted in unbundling of generation, transmission and distribution services. The introduction of competition into unbundled electricity services may lead system operation closer to its security boundaries, resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. Under this deregulated environment, security assessment and pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed: (1) the description of probabilistic methods for power system security assessment; (2) the computation time of simulation methods; and (3) on-line security assessment for operation. A probabilistic method using Monte-Carlo simulation is proposed for power system security assessment. This method takes into account dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte-Carlo simulation can reduce computation time in comparison with straight Monte-Carlo simulation. A technique for security assessment employing a Bayes classifier is also proposed. This method can be useful for system operators to make security decisions during on-line power system operation. This dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission line capacity-use and allocation by reliability benefits is computed using the probability of system failure.
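The following Python sketch illustrates the basic Monte-Carlo loop underlying such probabilistic security assessment: sample random component outages and estimate the probability of landing in an insecure state. The outage rates and the security rule are hypothetical placeholders, not the dissertation's system models.

    # Hedged sketch of a Monte-Carlo security assessment loop.  The outage
    # probabilities and the is_secure() rule are illustrative stand-ins.
    import random

    OUTAGE_PROB = {"line_A": 0.02, "line_B": 0.01, "gen_1": 0.005}

    def is_secure(outages):
        # Placeholder rule: losing any two components at once is treated
        # as an insecure operating state.
        return sum(outages.values()) < 2

    def monte_carlo_insecurity(n_samples=100_000, seed=1):
        rng = random.Random(seed)
        insecure = 0
        for _ in range(n_samples):
            outages = {c: rng.random() < p for c, p in OUTAGE_PROB.items()}
            if not is_secure(outages):
                insecure += 1
        return insecure / n_samples

    print(monte_carlo_insecurity())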
Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent
Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program, for the development of OLM algorithms to use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: • Signal validation • Virtual sensing • Sensor response-time assessment These algorithms incorporate, at their base, a Gaussian Process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses, and as a result, may have the potential for compensating for sensor drift in real-time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data). Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
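A minimal sketch of the signal-validation idea, assuming scikit-learn's Gaussian process regressor as a generic stand-in for the program's uncertainty-quantified models: predict one sensor from two redundant sensors using a healthy training window, then flag samples whose residual exceeds the predicted confidence band. The data are synthetic and the three-sigma threshold is an illustrative choice.

    # Hedged GP-based signal-validation sketch with synthetic data.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)
    truth = np.sin(t)
    s1 = truth + 0.02 * rng.standard_normal(200)
    s2 = truth + 0.02 * rng.standard_normal(200)
    s3 = truth + 0.02 * rng.standard_normal(200)
    s3[150:] += 0.3          # inject a calibration drift in the monitored sensor

    X, y = np.column_stack([s1, s2]), s3
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[:100], y[:100])                 # train on the healthy window only

    pred, std = gp.predict(X, return_std=True)
    flags = np.abs(y - pred) > 3 * std       # 3-sigma residual check
    print("samples flagged after drift onset:", int(flags[150:].sum()))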
Licheri, Luca; Erriu, Matteo; Bryant, Vincenzo; Piras, Vincenzo
2016-01-01
To evaluate the current level of patient safety under the care of an escort following intravenous sedation, to review post-sedation arrangements, and to identify potential risk levels. Information and post-sedation arrangements are important to patients' safety following surgery, but although there is a general consensus over what is recommended for patients and their escorts, there is little, if any, literature on the escorts' awareness of sedation and adherence to post-sedation arrangements and recommendations. Escorts of 113 consecutive patients treated in oral surgery under sedation (midazolam) completed a questionnaire composed of 27 questions divided into seven sections, including demographics, awareness of sedation, source of information and post-operative arrangements. From the data collected, two scores were calculated, representative of the escorts' Safety and Reliability. Data were then analysed by ANOVA. Safety scores were statistically correlated with instruction source, while Reliability correlated with a wider variety of parameters, including gender and age as well as information source. Provision of clear written information to escorts is recommended as likely to improve patients' safety. Assessment of escorts' Safety and Reliability could provide a means for improving the quality and safety of a sedation service.
Study on safety level of RC beam bridges under earthquake
NASA Astrophysics Data System (ADS)
Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia
2017-08-01
Based on reliability theory, this study considers uncertainties in material strengths and in modeling, which have important effects on structural resistance. After analyzing the failure mechanism of an RC bridge, structural functions and the corresponding reliability were formulated, and the safety level of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters under earthquake loading was analyzed. The response surface method was used to calculate the failure probabilities of the bridge piers under a high-level earthquake, and their seismic reliability for different damage states within the design reference period was calculated using a two-stage design, which describes to some extent the seismic safety level of bridges as built.
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
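CASRE itself wraps the SMERFS model library; purely as an illustration of the kind of reliability-growth fit such tools perform, the sketch below fits the Goel-Okumoto mean value function mu(t) = a(1 - exp(-b t)) to made-up cumulative failure counts using SciPy. This is not CASRE/SMERFS code.

    # Illustrative fit of one classic software reliability growth model.
    import numpy as np
    from scipy.optimize import curve_fit

    def mu(t, a, b):
        # Goel-Okumoto mean value function: expected cumulative failures by time t.
        return a * (1.0 - np.exp(-b * t))

    # Cumulative failures observed at the end of each test week (hypothetical).
    weeks = np.arange(1, 11, dtype=float)
    cum_failures = np.array([8, 14, 19, 23, 26, 28, 30, 31, 32, 33], dtype=float)

    (a_hat, b_hat), _ = curve_fit(mu, weeks, cum_failures, p0=(40.0, 0.2))
    remaining = a_hat - cum_failures[-1]
    print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.2f}")
    print(f"expected faults remaining after week 10: {remaining:.1f}")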
Gold, Michael R; Kanal, Emanuel; Schwitter, Juerg; Sommer, Torsten; Yoon, Hyun; Ellingson, Michael; Landborg, Lynn; Bratten, Tara
2015-03-01
Many patients with an implantable cardioverter-defibrillator (ICD) have indications for magnetic resonance imaging (MRI). However, MRI is generally contraindicated in ICD patients because of potential risks from hazardous interactions between the MRI and ICD system. The purpose of this study was to use preclinical computer modeling, animal studies, and bench and scanner testing to demonstrate the safety of an ICD system developed for 1.5-T whole-body MRI. MRI hazards were assessed and mitigated using multiple approaches: design decisions to increase safety and reliability, modeling and simulation to quantify clinical MRI exposure levels, animal studies to quantify the physiologic effects of MRI exposure, and bench testing to evaluate safety margin. Modeling estimated the incidence of a chronic change in pacing capture threshold >0.5 V and 1.0 V to be less than 1 in 160,000 and less than 1 in 1,000,000 cases, respectively. Modeling also estimated the incidence of unintended cardiac stimulation to occur in less than 1 in 1,000,000 cases. Animal studies demonstrated no delay in ventricular fibrillation detection and no reduction in ventricular fibrillation amplitude at clinical MRI exposure levels, even with multiple exposures. Bench and scanner testing demonstrated performance and safety against all other MRI-induced hazards. A preclinical strategy that includes comprehensive computer modeling, animal studies, and bench and scanner testing predicts that an ICD system developed for the magnetic resonance environment is safe and poses very low risks when exposed to 1.5-T normal operating mode whole-body MRI. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Study of structural reliability of existing concrete structures
NASA Astrophysics Data System (ADS)
Druķis, P.; Gaile, L.; Valtere, K.; Pakrastiņš, L.; Goremikins, V.
2017-10-01
Structural reliability of buildings has become an important issue after the collapse of a shopping center in Riga on 21.11.2013, which caused the death of 54 people. The reliability of a building is the practice of designing, constructing, operating, maintaining and removing buildings in ways that maintain health and ward off injuries or death due to use of the building. Evaluation and improvement of existing buildings is becoming more and more important. For a large part of existing buildings, the design life has been reached or will be reached in the near future. The structures of these buildings need to be reassessed in order to find out whether the safety requirements are met. The safety requirements provided by the Eurocodes are a starting point for the assessment of safety. However, it would be uneconomical to require all existing buildings and structures to comply fully with these new codes and corresponding safety levels; therefore, the assessment of existing buildings differs with each design situation. This case study describes a simple and practical procedure for determining the minimal reliability index β of existing concrete structures designed to codes other than the Eurocodes, and allows the actual reliability level of different structural elements of existing buildings under design load to be reassessed.
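The reliability index β referred to above is related to the probability of failure by β = -Φ⁻¹(Pf); the small Python sketch below converts between the two using SciPy. The target values shown are illustrative, not the case-study results.

    # Conversion between reliability index beta and probability of failure Pf.
    from scipy.stats import norm

    def beta_from_pf(pf):
        return -norm.ppf(pf)

    def pf_from_beta(beta):
        return norm.cdf(-beta)

    print(round(beta_from_pf(1e-4), 2))   # ~3.72
    print(f"{pf_from_beta(3.8):.2e}")     # beta = 3.8, an often-cited Eurocode-type target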
Oyeyemi, Adewale L; Kasoma, Sandra S; Onywera, Vincent O; Assah, Felix; Adedoyin, Rufus A; Conway, Terry L; Moss, Sarah J; Ocansey, Reginald; Kolbe-Alexander, Tracy L; Akinroye, Kingsley K; Prista, Antonio; Larouche, Richard; Gavand, Kavita A; Cain, Kelli L; Lambert, Estelle V; Aryeetey, Richmond; Bartels, Clare; Tremblay, Mark S; Sallis, James F
2016-03-08
Built environment and policy interventions are effective strategies for controlling the growing worldwide deaths from physical inactivity-related non-communicable diseases. To improve built environment research and develop Africa-specific evidence, it is important to first tailor built environment measures to African contexts and assess their psychometric properties across African countries. This study reports on the adaptation and test-retest reliability of the Neighborhood Environment Walkability Scale in seven sub-Saharan African countries (NEWS-Africa). The original NEWS, comprising 8 subscales measuring reported physical and social attributes of neighborhood environments, was systematically adapted for Africa through extensive input from physical activity and public health researchers, built environment professionals, and residents in seven African countries: Cameroon, Ghana, Kenya, Mozambique, Nigeria, South Africa and Uganda. Cognitive testing of NEWS-Africa was conducted among diverse residents (N = 109, 50 youth [12 - 17 years] and 59 adults [22 - 67 years], 69 % from low socioeconomic status [SES] neighborhoods). NEWS-Africa was translated into local languages and evaluated for 2-week test-retest reliability in adult participants (N = 301; female = 50.2 %; age = 32.3 ± 12.9 years) purposively recruited from neighborhoods varying in walkability (high and low walkable) and SES (high and low income) and from villages in six of the seven participating countries. The original 67 NEWS items were expanded to 89 scores (76 individual NEWS items and 13 computed scales). Several modifications were made to individual items, and some new items were added to capture important attributes of the African environment. A new scale on personal safety was created, and the aesthetics scale was enlarged to reflect Africa-specific characteristics. Over 95 % of all NEWS-Africa scores (items plus computed scales) demonstrated evidence of "excellent" (ICCs > 0.75) or "good" (ICCs = 0.60 to 0.74) reliability. Seven (53.8 %) of the 13 computed NEWS scales demonstrated "excellent" agreement and the other six had "good" agreement. No items or scales demonstrated "poor" reliability (ICCs < 0.40). The systematic adaptation and initial psychometric evaluation of NEWS-Africa indicates the instrument is feasible and reliable for use with adults of diverse demographic characteristics in Africa. The measure is likely to be useful for research, surveillance of built environment conditions for planning purposes, and evaluation of physical activity and policy interventions in Africa.
Design for Reliability and Safety Approach for the NASA New Launch Vehicle
NASA Technical Reports Server (NTRS)
Safie, Fayssal, M.; Weldon, Danny M.
2007-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended for sending crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the new NASA crew launch vehicle called ARES I. The ARES I is being developed by NASA Marshall Space Flight Center (MSFC) in support of the Constellation program. The ARES I consists of three major elements: a solid First Stage (FS), an Upper Stage (US), and a liquid Upper Stage Engine (USE). Stacked on top of the ARES I is the Crew Exploration Vehicle (CEV). The CEV consists of a Launch Abort System (LAS), Crew Module (CM), Service Module (SM), and a Spacecraft Adapter (SA). The CEV development is being led by NASA Johnson Space Center (JSC). Designing for high reliability and safety requires a good integrated working environment and a sound technical design approach. The "Design for Reliability and Safety" approach addressed in this paper covers both the environment and the technical process put in place to support the ARES I design. To address the integrated working environment, the ARES I project office has established a risk-based design group called the "Operability Design and Analysis" (OD&A) group. This integrated group is intended to bring the engineering, design, and safety organizations together to optimize the system design for safety, reliability, and cost. On the technical side, the ARES I project has, through the OD&A environment, implemented a probabilistic approach to analyze and evaluate design uncertainties and understand their impact on safety, reliability, and cost. This paper focuses on the various probabilistic approaches that have been pursued by the ARES I project. Specifically, the paper discusses an integrated functional probabilistic analysis approach that addresses upfront some key areas to support the ARES I Design Analysis Cycle (DAC) pre-Preliminary Design (PD) phase. This functional approach is a probabilistic, physics-based approach that combines failure probabilities with system dynamics and engineering failure impact models to identify key system risk drivers and potential system design requirements. The paper also discusses other probabilistic risk assessment approaches planned by the ARES I project to support the PD phase and beyond.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Reliable computation from contextual correlations
NASA Astrophysics Data System (ADS)
Oestereich, André L.; Galvão, Ernesto F.
2017-12-01
An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.
Evaluation of Human Reliability in Selected Activities in the Railway Industry
NASA Astrophysics Data System (ADS)
Sujová, Erika; Čierna, Helena; Molenda, Michał
2016-09-01
The article focuses on the evaluation of human reliability in the human-machine system in the railway industry. Based on a survey of a train dispatcher and of selected activities, we identified risk factors affecting the dispatcher's work and evaluated the level of their influence on the reliability and safety of the performed activities. The research took place at the authors' workplace between 2012 and 2013. A survey method was used. With its help, the authors were able to identify, for selected work activities of the train dispatcher, risk factors that affect his or her work, and to evaluate the seriousness of their influence on the reliability and safety of the performed activities. Among the most important findings are unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.
Ausserhofer, Dietmar; Anderson, Ruth A; Colón-Emeric, Cathleen; Schwendimann, René
2013-08-01
The Safety Organizing Scale is a valid and reliable measure of safety behaviors and practices in hospitals. This study aimed to explore the psychometric properties of the Safety Organizing Scale-Nursing Home version (SOS-NH). In a cross-sectional analysis of staff survey data, we examined the validity and reliability of the 9-item SOS-NH using American Educational Research Association guidelines. This substudy of a larger trial used baseline survey data collected from staff members (n = 627) in a variety of work roles in 13 nursing homes (NHs) in North Carolina and Virginia. Psychometric evaluation of the SOS-NH revealed good response patterns with a low average of missing values across all items (3.05%). Analyses of the SOS-NH's internal structure (eg, comparative fit indices = 0.929, standardized root mean square error of approximation = 0.045) and consistency (composite reliability = 0.94) suggested its 1-dimensionality. Significant between-facility variability, intraclass correlations, within-group agreement, and design effect confirmed the appropriateness of the SOS-NH for measurement at the NH level, justifying data aggregation. The SOS-NH showed discriminant validity from one related concept: communication openness. Initial evidence regarding the validity and reliability of the SOS-NH supports its utility in measuring safety behaviors and practices among a wide range of NH staff members, including those with low literacy. Further psychometric evaluation should focus on testing concurrent and criterion validity, using resident outcome measures (eg, patient fall rates). Copyright © 2013 American Medical Directors Association, Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.
2015-01-01
NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.
Care 3 model overview and user's guide, first revision
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Petersen, P. L.
1985-01-01
A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.
Validity of instruments to assess students' travel and pedestrian safety.
Mendoza, Jason A; Watson, Kathy; Baranowski, Tom; Nicklas, Theresa A; Uscanga, Doris K; Hanfling, Marcus J
2010-05-18
Safe Routes to School (SRTS) programs are designed to make walking and bicycling to school safe and accessible for children. Despite their growing popularity, few validated measures exist for assessing important outcomes such as type of student transport or pedestrian safety behaviors. This research validated the SRTS school travel survey and a pedestrian safety behavior checklist. Fourth grade students completed a brief written survey on how they got to school that day with set responses. Test-retest reliability was obtained 3-4 hours apart. Convergent validity of the SRTS travel survey was assessed by comparison to parents' report. For the measure of pedestrian safety behavior, 10 research assistants observed 29 students at a school intersection for completion of 8 selected pedestrian safety behaviors. Reliability was determined in two ways: correlations between the research assistants' ratings and those of the Principal Investigator (PI), and intraclass correlations (ICC) across research assistant ratings. The SRTS travel survey had high test-retest reliability (kappa = 0.97, n = 96, p < 0.001) and convergent validity (kappa = 0.87, n = 81, p < 0.001). The pedestrian safety behavior checklist had moderate reliability across research assistants' ratings (ICC = 0.48) and moderate correlation with the PI (r = 0.55, p < 0.01). When two raters simultaneously used the instrument, the ICC increased to 0.65. Overall percent agreement (91%), sensitivity (85%) and specificity (83%) were acceptable. These validated instruments can be used to assess SRTS programs. The pedestrian safety behavior checklist may benefit from further formative work.
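As a reminder of how the agreement statistic reported above is computed, the following Python sketch evaluates Cohen's kappa for two categorical ratings on toy travel-mode labels; the data are invented, not the study's survey responses.

    # Cohen's kappa for two categorical ratings (e.g., test vs retest of the
    # travel-mode question): kappa = (p_o - p_e) / (1 - p_e).  Toy labels only.
    from collections import Counter

    def cohens_kappa(r1, r2):
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    test   = ["walk", "car", "bus", "walk", "car", "walk", "bike", "car"]
    retest = ["walk", "car", "bus", "walk", "bus", "walk", "bike", "car"]
    print(round(cohens_kappa(test, retest), 2))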
Design of high reliability organizations in health care.
Carroll, J S; Rudolph, J W
2006-12-01
To improve safety performance, many healthcare organizations have sought to emulate high reliability organizations from industries such as nuclear power, chemical processing, and military operations. We outline high reliability design principles for healthcare organizations including both the formal structures and the informal practices that complement those structures. A stage model of organizational structures and practices, moving from local autonomy to formal controls to open inquiry to deep self-understanding, is used to illustrate typical challenges and design possibilities at each stage. We suggest how organizations can use the concepts and examples presented to increase their capacity to self-design for safety and reliability.
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability instantly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, by using the developed automated tool, the paper explores a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
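To illustrate the gate-by-gate propagation that a Probabilistic Gate Model performs, the Python sketch below computes the output reliability of a small two-level NAND circuit whose gates flip their outputs with probability eps. The circuit, inputs, and error rates are illustrative choices, not the paper's benchmark circuits.

    # Minimal PGM-style propagation for out = NAND(NAND(a,b), NAND(c,d)).
    def nand_pgm(p_a, p_b, eps):
        """P(output = 1) of a NAND gate whose inputs are 1 with independent
        probabilities p_a, p_b and whose output flips with probability eps."""
        ideal_one = 1.0 - p_a * p_b
        return (1.0 - eps) * ideal_one + eps * (1.0 - ideal_one)

    def two_level_nand_reliability(eps):
        """All primary inputs held at logic 1; the fault-free output is 1,
        so circuit reliability equals P(out = 1)."""
        mid1 = nand_pgm(1.0, 1.0, eps)   # error-free intermediate value is 0
        mid2 = nand_pgm(1.0, 1.0, eps)
        return nand_pgm(mid1, mid2, eps)

    for eps in (0.001, 0.01, 0.05):
        print(eps, round(two_level_nand_reliability(eps), 5))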
Piro, M.H.A; Wassermann, F.; Grundmann, S.; ...
2017-05-23
The current work presents experimental and computational investigations of fluid flow through a 37-element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution; these measurements are non-intrusive and do not require tracer particles or optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the hydra-th code using implicit large eddy simulation, and were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained of the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the local flow field, and various turbulent effects, such as recirculation, swirl and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure- or accident-causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
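A minimal quantification sketch in Python, assuming independent basic events, shows the AND/OR arithmetic that such computerized fault tree analyses perform; the example tree and probabilities are invented, not the Idaho Nuclear Corporation models described in the report.

    # Fault-tree quantification under the usual independence assumption:
    # AND gate -> P = prod(p_i);  OR gate -> P = 1 - prod(1 - p_i).
    from math import prod

    def p_and(probs):
        return prod(probs)

    def p_or(probs):
        return 1.0 - prod(1.0 - p for p in probs)

    # Hypothetical top event: "reactor fails to shut down" =
    #   (sensor channel A fails AND sensor channel B fails) OR logic unit fails
    p_sensor_a, p_sensor_b, p_logic = 1e-3, 1e-3, 5e-5
    p_top = p_or([p_and([p_sensor_a, p_sensor_b]), p_logic])
    print(f"top event probability ~ {p_top:.2e}")   # ~5.1e-05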
Code of Federal Regulations, 2010 CFR
2010-10-01
... reasons of safety, reliability and generally applicable engineering purposes. (b) Requests for access to a... and information relate to a denial of access for reasons of lack of capacity, safety, reliability or engineering standards. (c) A utility shall provide a cable television system operator or telecommunications...
ERIC Educational Resources Information Center
Ramalhoto, M. F.
1999-01-01
Introduces a special theme journal issue on research and education in quality control, maintenance, reliability, risk analysis, and safety. Discusses each of these theme concepts and their applications to naval architecture, marine engineering, and industrial engineering. Considers the effects of the rapid transfer of research results through…
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Daniel, Charles; Kalia, Prince; Smith, Charles A. (Technical Monitor)
2002-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a 10-year Second Generation Reusable Launch Vehicle (RLV) program to improve its space transportation capabilities for both cargo and crewed missions. The objectives of the program are to significantly increase safety and reliability, reduce the cost of accessing low-earth orbit, attempt to leverage commercial launch capabilities, and provide a growth path for manned space exploration. The safety, reliability and life cycle cost of the next generation vehicles are major concerns, and NASA aims to achieve orders of magnitude improvement in these areas. Achieving these significant improvements requires a rigorous process that addresses Reliability, Maintainability and Supportability (RMS) and safety through all the phases of the life cycle of the program. This paper discusses the RMS process being implemented for the Second Generation RLV program.
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
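For reference, the classical first-order safety index for independent, normally distributed resistance and stress, which is the starting point the method builds on (the paper's uncertainty-error factors are not reproduced here), can be written as:

    % Classical first-order (mean-value) reliability index
    \[
      \beta \;=\; \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^{2} + \sigma_S^{2}}},
      \qquad
      P_f \;=\; \Phi(-\beta),
    \]
    % where \mu and \sigma are the means and standard deviations of the
    % resistance R and stress S, and \Phi is the standard normal CDF.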
Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.
Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein
2015-12-01
Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient, small, intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted attention for medical applications. These products are used less in critical medical applications because of their resource constraints and failure sensitivity: without safety considerations, small integrated hardware can endanger patients' lives. Therefore, some principles are required for constructing wearable systems in healthcare so that the existing concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its high potential fault tolerance due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.
Computational Exposure Science: An Emerging Discipline to ...
Background: Computational exposure science represents a frontier of environmental science that is emerging and quickly evolving. Objectives: In this commentary, we define this burgeoning discipline, describe a framework for implementation, and review some key ongoing research elements that are advancing the science with respect to exposure to chemicals in consumer products. Discussion: The fundamental elements of computational exposure science include the development of reliable, computationally efficient predictive exposure models; the identification, acquisition, and application of data to support and evaluate these models; and generation of improved methods for extrapolating across chemicals. We describe our efforts in each of these areas and provide examples that demonstrate both progress and potential. Conclusions: Computational exposure science, linked with comparable efforts in toxicology, is ushering in a new era of risk assessment that greatly expands our ability to evaluate chemical safety and sustainability and to protect public health. The National Exposure Research Laboratory’s (NERL’s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA’s mission to protect human health and the environment. HEASD’s research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA’s strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source
Etchegaray, Jason M; Thomas, Eric J
2012-06-01
To examine the reliability and predictive validity of two patient safety culture surveys, the Safety Attitudes Questionnaire (SAQ) and the Hospital Survey on Patient Safety Culture (HSOPS), when administered to the same participants, and to determine the ability to convert HSOPS scores to SAQ scores. Employees working in intensive care units in 12 hospitals within a large hospital system in the southern United States were invited to anonymously complete both safety culture surveys electronically. All safety culture dimensions from both surveys (with the exception of HSOPS's Staffing) had adequate levels of reliability. Three of HSOPS's outcomes (frequency of event reporting, overall perceptions of patient safety, and overall patient safety grade) were significantly correlated with SAQ and HSOPS dimensions of culture at the individual level, with correlations ranging from r = 0.41 to 0.65 for the SAQ dimensions and from r = 0.22 to 0.72 for the HSOPS dimensions. Neither the SAQ dimensions nor the HSOPS dimensions predicted the fourth HSOPS outcome, the number of events reported within the last 12 months. Regression analyses indicated that HSOPS safety culture dimensions were the best predictors of frequency of event reporting and overall perceptions of patient safety, while SAQ and HSOPS dimensions both predicted patient safety grade. Unit-level analyses were not conducted because indices did not indicate that aggregation was appropriate. Scores were converted between the surveys, although much variance remained unexplained. Given that the SAQ and HSOPS had similar reliability and predictive validity, investigators and quality and safety leaders should consider survey length, content, sensitivity to change, and the ability to benchmark when selecting a patient safety culture survey.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but these models are restricted to particular types of methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and careful attention must be paid to the parameters considered while estimating reliability. The reliability of a system may increase or decrease depending on the parameters selected, so there is a need to identify the factors that most heavily affect the reliability of the system. Reusability is now widely used across many areas of research and is the basis of Component-Based Systems (CBS); cost, time, and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness, and several possibilities exist for applying soft computing techniques to medicine-related problems: clinical medicine tends to use fuzzy logic and neural network methodologies, while basic medical science most frequently and preferably uses neural networks combined with genetic algorithms. There is considerable interest among medical scientists in applying soft computing methodologies in genetics, physiology, radiology, cardiology, and neurology. CBSE encourages users to reuse past and existing software when making new products, providing quality while saving time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these soft computing techniques and assesses their use in predicting reliability, and it also discusses the parameters considered in the estimation and prediction of reliability. This study can be used in the estimation and prediction of the reliability of instruments used in medical systems, software engineering, computer engineering, and mechanical engineering; the concepts can be applied to both software and hardware to predict reliability using CBSE.
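As one concrete instance of the surveyed techniques, the Python sketch below uses a from-scratch particle swarm optimizer to tune the two parameters of a simple reliability-growth curve against made-up failure counts; it is illustrative only and not the implementation assessed in the paper.

    # Minimal PSO tuning (a, b) of mu(t) = a*(1 - exp(-b t)) to toy failure data.
    import math, random

    weeks = list(range(1, 11))
    observed = [5, 11, 16, 20, 23, 25, 27, 28, 29, 30]   # hypothetical cumulative failures

    def sse(params):
        a, b = params
        return sum((a * (1 - math.exp(-b * t)) - y) ** 2 for t, y in zip(weeks, observed))

    def pso(n_particles=30, iters=200, seed=2):
        rng = random.Random(seed)
        pos = [[rng.uniform(1, 100), rng.uniform(0.01, 1.0)] for _ in range(n_particles)]
        vel = [[0.0, 0.0] for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=sse)[:]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(2):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                # keep parameters in a sensible box to avoid numerical overflow
                pos[i][0] = min(max(pos[i][0], 1.0), 500.0)
                pos[i][1] = min(max(pos[i][1], 1e-3), 2.0)
                if sse(pos[i]) < sse(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=sse)[:]
        return gbest

    a_hat, b_hat = pso()
    print(f"a ~ {a_hat:.1f}, b ~ {b_hat:.2f}, SSE = {sse((a_hat, b_hat)):.2f}")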
A Framework for Reliability and Safety Analysis of Complex Space Missions
NASA Technical Reports Server (NTRS)
Evans, John W.; Groen, Frank; Wang, Lui; Austin, Rebekah; Witulski, Art; Mahadevan, Nagabhushan; Cornford, Steven L.; Feather, Martin S.; Lindsey, Nancy
2017-01-01
Long duration and complex mission scenarios are characteristics of NASA's human exploration of Mars, and will provide unprecedented challenges. Systems reliability and safety will become increasingly demanding and management of uncertainty will be increasingly important. NASA's current pioneering strategy recognizes and relies upon assurance of crew and asset safety. In this regard, flexibility to develop and innovate in the emergence of new design environments and methodologies, encompassing modeling of complex systems, is essential to meet the challenges.
Life prediction and reliability assessment of lithium secondary batteries
NASA Astrophysics Data System (ADS)
Eom, Seung-Wook; Kim, Min-Kyu; Kim, Ick-Jun; Moon, Seong-In; Sun, Yang-Kook; Kim, Hyun-Soo
Reliability assessment of lithium secondary batteries was mainly considered. The shape parameter (β) and scale parameter (η) were calculated from experimental data based on a cycle life test. We also examined the safety characteristics of lithium secondary batteries. As proposed by IEC 62133 (2002), we performed all of the safety/abuse tests, such as mechanical, environmental, and electrical abuse tests. This paper describes the cycle life of lithium secondary batteries, an FMEA (failure modes and effects analysis), and the safety/abuse tests we performed.
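For context, the two Weibull parameters named above define the reliability function R(t) = exp(-(t/η)^β); the Python sketch below evaluates it and the corresponding mean cycle life for hypothetical parameter values (not the study's estimates).

    # Weibull reliability with shape beta and scale eta; illustrative values only.
    import math

    def weibull_reliability(t, beta, eta):
        """R(t) = exp(-(t/eta)^beta): probability of surviving beyond cycle t."""
        return math.exp(-((t / eta) ** beta))

    def weibull_mean_life(beta, eta):
        """Mean cycle life = eta * Gamma(1 + 1/beta)."""
        return eta * math.gamma(1.0 + 1.0 / beta)

    beta, eta = 2.5, 800.0          # hypothetical shape and scale (cycles)
    print(round(weibull_reliability(500, beta, eta), 3))
    print(round(weibull_mean_life(beta, eta), 1))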
Reliability of Computer Systems ODRA 1305 and R-32,
1983-03-25
RELIABILITY OF COMPUTER SYSTEMS ODRA 1305 AND R-32 By: Wit Drewniak English pages: 12 Source: Informatyka, Vol. 14, Nr. 7, 1979, pp. 5-8 Country of...JS EMC computers installed in ZETO, Katowice", Informatyka, No. 7-8/78, deals with various reliability classes within the family of the machines of
Baker, Nancy A; Cook, James R; Redfern, Mark S
2009-01-01
This paper describes the inter-rater and intra-rater reliability, and the concurrent validity, of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficient (ICC) = 0.90; intra-rater: ICC = 0.92]. Most individual items on the K-PeCS had good to excellent reliability, although six items fell below ICC = 0.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.
Radiocardiography in clinical cardiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierson, R.N. Jr.; Alam, S.; Kemp, H.G.
1977-01-01
Quantitative radiocardiography provides a variety of noninvasive measurements of value in cardiology. A gamma camera and computer processing are required for most of these measurements. The advantages of ease, economy, and safety of these procedures are, in part, offset by the complexity of as yet unstandardized methods and incomplete validation of results. The expansion of these techniques will inevitably be rapid. Their careful performance requires, for the moment, a major and perhaps dedicated effort by at least one member of the professional team, if the pitfalls that lead to unrecognized error are to be avoided. We may anticipate more automated and reliable results with increased experience and validation.
Design of high reliability organizations in health care
Carroll, J S; Rudolph, J W
2006-01-01
To improve safety performance, many healthcare organizations have sought to emulate high reliability organizations from industries such as nuclear power, chemical processing, and military operations. We outline high reliability design principles for healthcare organizations including both the formal structures and the informal practices that complement those structures. A stage model of organizational structures and practices, moving from local autonomy to formal controls to open inquiry to deep self‐understanding, is used to illustrate typical challenges and design possibilities at each stage. We suggest how organizations can use the concepts and examples presented to increase their capacity to self‐design for safety and reliability. PMID:17142607
Validity of instruments to assess students' travel and pedestrian safety
2010-01-01
Background Safe Routes to School (SRTS) programs are designed to make walking and bicycling to school safe and accessible for children. Despite their growing popularity, few validated measures exist for assessing important outcomes such as type of student transport or pedestrian safety behaviors. This research validated the SRTS school travel survey and a pedestrian safety behavior checklist. Methods Fourth grade students completed a brief written survey on how they got to school that day with set responses. Test-retest reliability was obtained 3-4 hours apart. Convergent validity of the SRTS travel survey was assessed by comparison to parents' report. For the measure of pedestrian safety behavior, 10 research assistants observed 29 students at a school intersection for completion of 8 selected pedestrian safety behaviors. Reliability was determined in two ways: correlations between the research assistants' ratings and those of the Principal Investigator (PI), and intraclass correlations (ICC) across research assistant ratings. Results The SRTS travel survey had high test-retest reliability (κ = 0.97, n = 96, p < 0.001) and convergent validity (κ = 0.87, n = 81, p < 0.001). The pedestrian safety behavior checklist had moderate reliability across research assistants' ratings (ICC = 0.48) and moderate correlation with the PI (r = 0.55, p < 0.01). When two raters simultaneously used the instrument, the ICC increased to 0.65. Overall percent agreement (91%), sensitivity (85%) and specificity (83%) were acceptable. Conclusions These validated instruments can be used to assess SRTS programs. The pedestrian safety behavior checklist may benefit from further formative work. PMID:20482778
A measurement tool to assess culture change regarding patient safety in hospital obstetrical units.
Kenneth Milne, J; Bendaly, Nicole; Bendaly, Leslie; Worsley, Jill; FitzGerald, John; Nisker, Jeff
2010-06-01
Clinical error in acute care hospitals can only be addressed by developing a culture of safety. We sought to develop a cultural assessment survey (CAS) to assess patient safety culture change in obstetrical units. Interview prompts and a preliminary questionnaire were developed through a literature review of patient safety and "high reliability organizations," followed by interviews with members of the Managing Obstetrical Risk Efficiently (MOREOB) Program of the Society of Obstetricians and Gynaecologists of Canada. Three hundred preliminary questionnaires were mailed, and 21 interviews and 9 focus groups were conducted with the staff of 11 hospital sites participating in the program. To pilot test the CAS, 350 surveys were mailed to staff in participating hospitals, and interviews were conducted with seven nurses and five physicians who had completed the survey. Reliability analysis was conducted on four units that completed the CAS prior to and following the implementation of the first MOREOB module. Nineteen values and 105 behaviours, practices, and perceptions relating to patient safety were identified and included in the preliminary questionnaire; 143 of the 300 mailed questionnaires (47.4%) were returned. Among the 220 cultural assessment surveys returned (62.9%), six cultural scales emerged: (1) patient safety as everyone's priority; (2) teamwork; (3) valuing individuals; (4) open communication; (5) learning; and (6) empowering individuals. The reliability analysis found all six scales to have internal reliability (Cronbach alpha) ranging from 0.72 (open communication) to 0.84 (valuing individuals). The CAS developed for this study may enable obstetrical units to assess change in patient safety culture.
Multi-hop routing mechanism for reliable sensor computing.
Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min
2009-01-01
Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for a large number of sensors, and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the best reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and save energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.
Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.
2010-01-01
The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
Safety Learning, Organizational Contradictions and the Dynamics of Safety Practice
ERIC Educational Resources Information Center
Ripamonti, Silvio Carlo; Scaratti, Giuseppe
2015-01-01
Purpose: The purpose of this paper is to explore the enactment of safety routines in a transshipment port. Research on work safety and reliability has largely neglected the role of the workers' knowledge in practice in the enactment of organisational safety. The workers' lack of compliance with safety regulations represents an enduring problem…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-12
... maps? What are the public safety and homeland security implications of public disclosure of key network... 13-33] Improving 9-1-1 Reliability; Reliability and Continuity of Communications Networks, Including... improve the reliability and resiliency of the Nation's 9-1-1 networks. The Notice of Proposed Rulemaking...
NASA Technical Reports Server (NTRS)
Atwell, William; Koontz, Steve; Normand, Eugene
2012-01-01
Three twentieth century technological developments, 1) high altitude commercial and military aircraft; 2) manned and unmanned spacecraft; and 3) increasingly complex and sensitive solid state micro-electronics systems, have driven an ongoing evolution of basic cosmic ray science into a set of practical engineering tools needed to design, test, and verify the safety and reliability of modern complex technological systems. The effects of primary cosmic ray particles and secondary particle showers produced by nuclear reactions with the atmosphere can determine the design and verification processes (as well as the total dollar cost) for manned and unmanned spacecraft avionics systems. Similar considerations apply to commercial and military aircraft operating at high latitudes and altitudes near the atmospheric Pfotzer maximum. Even ground based computational and controls systems can be negatively affected by secondary particle showers at the Earth's surface, especially if the net target area of the sensitive electronic system components is large. Finally, accumulation of both primary cosmic ray and secondary cosmic ray induced particle shower radiation dose is an important health and safety consideration for commercial or military air crews operating at high altitude/latitude and is also one of the most important factors presently limiting manned space flight operations beyond low-Earth orbit (LEO). In this paper we review the discovery of cosmic ray effects on the performance and reliability of microelectronic systems, as well as on human health, and the development of the engineering and health science tools used to evaluate and mitigate cosmic ray effects in ground-based, atmospheric flight, and space flight environments. Ground test methods applied to microelectronic components and systems are used in combination with radiation transport and reaction codes to predict the performance of microelectronic systems in their operating environments. Similar radiation transport codes are used to evaluate possible human health effects of cosmic ray exposure; however, the health effects are based on worst-case analysis and extrapolation of a very limited human exposure database combined with some limited experimental animal data. Finally, the limitations on human space operations beyond low-Earth orbit imposed by long term exposure to galactic cosmic rays are discussed.
Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms
NASA Technical Reports Server (NTRS)
Knudson, Matthew D.; Colby, Mitchell; Tumer, Kagan
2014-01-01
Dynamic flight environments in which objectives and environmental features change with respect to time pose a difficult problem with regard to planning optimal flight paths. Path planning methods are typically computationally expensive and are often difficult to implement in real time if system objectives are changed. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially. In this work, we use cooperative coevolutionary algorithms to develop policies that control agent motion in a dynamic multiagent unmanned aerial system environment in which goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy that can map the new environmental features to a trajectory for the agent while ensuring safe and reliable operation and providing 92% of the theoretically optimal performance.
Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples
NASA Astrophysics Data System (ADS)
Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.
2012-12-01
The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers on, i.e. to quantify, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems that can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system slightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.
Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms
NASA Technical Reports Server (NTRS)
Colby, Mitchell; Knudson, Matthew D.; Tumer, Kagan
2014-01-01
Dynamic environments in which objectives and environmental features change with respect to time pose a difficult problem with regard to planning optimal paths through these environments. Path planning methods are typically computationally expensive and are often difficult to implement in real time if system objectives are changed. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially with the number of agents in the system. In this work, we use cooperative coevolutionary algorithms to develop policies that control agent motion in a dynamic multiagent unmanned aerial system environment in which goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy that can map the new environmental features to a trajectory for the agent while ensuring safe and reliable operation and providing 92% of the theoretically optimal performance.
Sun, B; Li, W Z; Yue, Y; Jiang, C W; Xiao, L Y
2001-11-01
Our newly designed computer-controlled equipment for delivering volatile anesthetic agents uses a subminiature single-chip processor as the central control unit. The variables, such as anesthesia method, anesthetic agent, respiratory loop volume, patient age, sex, height, weight, ambient temperature and ASA grade, are all input from the keyboard. The anesthetic dosage, calculated by the single-chip processor, is converted into signals that control the pump to accurately deliver anesthetic agent into the respiratory loop. We have designed an electric circuit for the equipment to detect the status of the pump's operation, so the safety and stability of the equipment can be assured. The output precision of the equipment, which has good anti-jamming capability, is 1-2% for high-flow anesthesia and 1-5% for closed-circuit anesthesia, and its self-test function works reliably.
Kilov, Andrea M; Togher, Leanne; Power, Emma
2015-01-01
To determine the test-retest reliability of the 'Computer User Profile' (CUP) in people with and without TBI. The CUP was administered on two occasions to people with and without TBI. The CUP investigated the nature and frequency of participants' computer and Internet use. Intra-class correlation coefficients and kappa coefficients were calculated to measure the reliability of individual CUP items. Descriptive statistics were used to summarize the content of responses. Sixteen adults with TBI and 40 adults without TBI were included in the study. All participants were reliable in reporting demographic information, frequency of social communication and leisure activities, and computer/Internet habits and usage. Adults with TBI were reliable in 77% of their responses to survey items. Adults without TBI were reliable in 88% of their responses to survey items. The CUP was practical and valuable in capturing information about the social, leisure, communication and computer/Internet habits of people with and without TBI. Adults without TBI scored more items with satisfactory reliability overall in their surveys. Future studies may include larger samples and could also include an exploration of how people with/without TBI use other digital communication technologies. This may provide further information on determining technology readiness for people with TBI in therapy programmes.
[Process design in high-reliability organizations].
Sommer, K-J; Kranz, J; Steffens, J
2014-05-01
Modern medicine is a highly complex service industry in which individual care providers are linked in a complicated network. This complexity and interconnectedness are associated with risks to patient safety. Other highly complex industries, such as commercial aviation, have succeeded in maintaining or even increasing their safety levels despite rapidly increasing passenger figures. Standard operating procedures (SOPs), crew resource management (CRM), as well as operational risk evaluation (ORE) are historically developed and trusted parts of a comprehensive and systemic safety program. If medicine wants to follow this quantum leap towards increased patient safety, it must intensively evaluate the results of other high-reliability industries and seek step-by-step implementation after a critical assessment.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu
2015-04-01
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
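The exact algorithm proposed in the abstract above is not reproduced here; the sketch below is only a brute-force Monte Carlo illustration in Python of the quantity being computed - the probability that a required amount of data can be carried over disjoint minimal paths whose branches have multiple capacity states. The path structure, capacity distributions, and demand level are hypothetical placeholders, not data from the paper.

```python
import random

# Hypothetical example: two disjoint minimal paths, each a list of branches.
# Each branch has a multi-state capacity distribution {capacity: probability}.
path_branches = [
    [{0: 0.05, 2: 0.15, 4: 0.80}, {0: 0.10, 2: 0.20, 4: 0.70}],   # path 1
    [{0: 0.02, 3: 0.18, 5: 0.80}, {0: 0.05, 3: 0.15, 5: 0.80}],   # path 2
]
demand = 6  # units of data that must be delivered within the time threshold

def sample_capacity(dist):
    """Draw one capacity state for a branch from its discrete distribution."""
    r, acc = random.random(), 0.0
    for cap, p in dist.items():
        acc += p
        if r <= acc:
            return cap
    return cap  # numerical guard against rounding in the probabilities

def system_works():
    """A path's capacity is the minimum of its branch capacities;
    the system works if the summed path capacities meet the demand."""
    total = sum(min(sample_capacity(b) for b in branches) for branches in path_branches)
    return total >= demand

trials = 100_000
reliability = sum(system_works() for _ in range(trials)) / trials
print(f"Estimated system reliability: {reliability:.4f}")
```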
A Step Toward High Reliability: Implementation of a Daily Safety Brief in a Children's Hospital.
Saysana, Michele; McCaskey, Marjorie; Cox, Elaine; Thompson, Rachel; Tuttle, Lora K; Haut, Paul R
2017-09-01
Health care is a high-risk industry. To improve communication about daily events and begin the journey toward a high reliability organization, the Riley Hospital for Children at Indiana University Health implemented a daily safety brief. Various departments in our children's hospital were asked to participate in a daily safety brief, reporting daily events and unexpected outcomes within their scope of responsibility. Participants were surveyed before and after implementation of the safety brief about communication and awareness of events in the hospital. The length of the brief and percentage of departments reporting unexpected outcomes were measured. The analysis of the presurvey and the postsurvey showed a statistically significant improvement in the questions related to the awareness of daily events as well as communication and relationships between departments. The monthly mean length of time for the brief was 15 minutes or less. Unexpected outcomes were reported by 50% of the departments for 8 months. A daily safety brief can be successfully implemented in a children's hospital. Communication between departments and awareness of daily events were improved. Implementation of a daily safety brief is a step toward becoming a high reliability organization.
Validation of the group nuclear safety climate questionnaire.
Navarro, M Felisa Latorre; Gracia Lerín, Francisco J; Tomás, Inés; Peiró Silla, José María
2013-09-01
Group safety climate is a leading indicator of safety performance in high reliability organizations. Zohar and Luria (2005) developed a Group Safety Climate scale (ZGSC) and found it to have a single factor. The ZGSC scale was used as a basis in this study with the researchers rewording almost half of the items on this scale, changing the referents from the leader to the group, and trying to validate a two-factor scale. The sample was composed of 566 employees in 50 groups from a Spanish nuclear power plant. Item analysis, reliability, correlations, aggregation indexes and CFA were performed. Results revealed that the construct was shared by each unit, and our reworded Group Safety Climate (GSC) scale showed a one-factor structure and correlated to organizational safety climate, formalized procedures, safety behavior, and time pressure. This validation of the one-factor structure of the Zohar and Luria (2005) scale could strengthen and spread this scale and measure group safety climate more effectively. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
Software Safety Risk in Legacy Safety-Critical Computer Systems
NASA Technical Reports Server (NTRS)
Hill, Janice; Baggs, Rhoda
2007-01-01
Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts do not exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for formally ascertaining a software safety risk assessment that provides measurements of software safety for legacy systems, which may or may not have the suite of software engineering documentation now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.
Velonakis, E; Mantas, J; Mavrikakis, I
2006-01-01
Occupational health and safety management constitutes a field of increasing interest. Institutions, in cooperation with enterprises, are making synchronized efforts to introduce quality management systems to this field. Computer networks can offer such services via TCP/IP, a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors in order to achieve defined criteria and connectivity with other networks. The network consists of nodes responsible for informing executives on occupational health and safety. A web database has been planned for inserting and searching documents and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server to which files are submitted so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The health and safety management system follows ISO 18001, and its implementation through the web site is a goal. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among different departments of the hospital and allowing communication through the web with other hospitals.
Statistical modelling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1991-01-01
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
Reliability model of a monopropellant auxiliary propulsion system
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1971-01-01
A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
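As a rough illustration of how such a reliability model can combine timing information from a performance model with failure rates, the Python sketch below assumes constant failure rates for burn and coast phases and a hypothetical burn schedule; the rates and timeline are illustrative only and are not taken from the monopropellant system study.

```python
import math

# Hypothetical inputs: constant failure rates (per hour) for thruster operation
# and for dormant (coast) periods, plus a burn/coast timeline of the kind a
# performance model would normally supply.
lambda_burn = 1.0e-4      # failures per hour while thrusting (assumed)
lambda_dormant = 1.0e-6   # failures per hour while coasting (assumed)
timeline = [              # (coast_hours_before_burn, burn_hours) per orbit correction
    (500.0, 0.5),
    (1200.0, 0.4),
    (2400.0, 0.3),
]

reliability = 1.0
for i, (coast, burn) in enumerate(timeline, start=1):
    # Exponential reliability for the dormant and operating phases of this correction.
    r_phase = math.exp(-lambda_dormant * coast) * math.exp(-lambda_burn * burn)
    reliability *= r_phase
    print(f"P(success through correction {i}) = {reliability:.6f}")
```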
Investigation of structural factors of safety for the space shuttle
NASA Technical Reports Server (NTRS)
1972-01-01
A study was made of the factors governing the structural design of the fully reusable space shuttle booster to establish a rational approach to select optimum structural factors of safety. The study included trade studies of structural factors of safety versus booster service life, weight, cost, and reliability. Similar trade studies can be made on other vehicles using the procedures developed. The major structural components of a selected baseline booster were studied in depth, each being examined to determine the fatigue life, safe-life, and fail-safe capabilities of the baseline design. Each component was further examined to determine its reliability and safety requirements, and the change of structural weight with factors of safety. The apparent factors of safety resulting from fatigue, safe-life, proof test, and fail-safe requirements were identified. The feasibility of reduced factors of safety for design loads such as engine thrust, which are well defined, was examined.
A safety-based decision making architecture for autonomous systems
NASA Technical Reports Server (NTRS)
Musto, Joseph C.; Lauderbaugh, L. K.
1991-01-01
Engineering systems designed specifically for space applications often exhibit a high level of autonomy in the control and decision-making architecture. As the level of autonomy increases, more emphasis must be placed on assimilating the safety functions normally executed at the hardware level or by human supervisors into the control architecture of the system. The development of a decision-making structure which utilizes information on system safety is detailed. A quantitative measure of system safety, called the safety self-information, is defined. This measure is analogous to the reliability self-information defined by McInroy and Saridis, but includes weighting of task constraints to provide a measure of both reliability and cost. An example is presented in which the safety self-information is used as a decision criterion in a mobile robot controller. The safety self-information is shown to be consistent with the entropy-based Theory of Intelligent Machines defined by Saridis.
Overview of Risk Mitigation for Safety-Critical Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report presents a high-level overview of a general strategy to mitigate the risks from threats to safety-critical computer-based systems. In this context, a safety threat is a process or phenomenon that can cause operational safety hazards in the form of computational system failures. This report is intended to provide insight into the safety-risk mitigation problem and the characteristics of potential solutions. The limitations of the general risk mitigation strategy are discussed and some options to overcome these limitations are provided. This work is part of an ongoing effort to enable well-founded assurance of safety-related properties of complex safety-critical computer-based aircraft systems by developing an effective capability to model and reason about the safety implications of system requirements and design.
Aviation safety and operation problems research and technology
NASA Technical Reports Server (NTRS)
Enders, J. H.; Strickle, J. W.
1977-01-01
Aircraft operating problems are described for aviation safety. It is shown that as aircraft technology improves, the knowledge and understanding of operating problems must also improve for economics, reliability and safety.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
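A minimal, non-adaptive importance-sampling sketch in Python conveys the underlying idea of shifting the sampling density toward the failure domain and reweighting by the ratio of densities; the actual AIS method adapts its sampling domain incrementally, and the limit state, shift point, and distributions below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical limit state: failure when g(x1, x2) <= 0.
def g(x):
    return 5.0 - x[:, 0] - x[:, 1]   # illustrative linear limit state

# Standard-normal random variables; importance density shifted toward the
# approximate failure region (here, toward the assumed design point (2.5, 2.5)).
shift = np.array([2.5, 2.5])
n = 50_000
x = rng.standard_normal((n, 2)) + shift

# Importance weights = f(x) / h(x) for the original and shifted densities.
f = stats.multivariate_normal(mean=[0, 0], cov=np.eye(2)).pdf(x)
h = stats.multivariate_normal(mean=shift, cov=np.eye(2)).pdf(x)
indicator = (g(x) <= 0).astype(float)

pf = np.mean(indicator * f / h)          # failure probability estimate
print(f"Estimated failure probability: {pf:.3e}")
print(f"Exact (linear case): {stats.norm.cdf(-5 / np.sqrt(2)):.3e}")
```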
Reliability enhancement of APR + diverse protection system regarding common cause failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, Y. G.; Kim, Y. M.; Yim, H. S.
2012-07-01
The Advanced Power Reactor Plus (APR+) nuclear power plant design has been developed on the basis of the APR1400 (Advanced Power Reactor 1400 MWe) to further enhance safety and economics. For the mitigation of Anticipated Transients Without Scram (ATWS) as well as Common Cause Failures (CCF) within the Plant Protection System (PPS) and the Emergency Safety Feature - Component Control System (ESF-CCS), several design improvement features have been implemented for the Diverse Protection System (DPS) of the APR+ plant. As compared to the APR1400 DPS design, the APR+ DPS has been designed to provide the Safety Injection Actuation Signal (SIAS) considering a large break LOCA accident concurrent with the CCF. Additionally, several design improvement features, such as a channel structure with redundant processing modules and changes of system communication methods and auto-system test methods, are introduced to enhance the functional reliability of the DPS. Therefore, it is expected that the APR+ DPS can provide enhanced safety and reliability regarding possible CCF in the safety-grade I&C systems as well as the DPS itself. (authors)
Computer-Aided Reliability Estimation
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.
1986-01-01
CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III general enough for use in evaluation of other systems as well.
NASA Technical Reports Server (NTRS)
1973-01-01
A study was conducted to determine the configuration and performance of a space tug. Detailed descriptions of the insulation, meteoroid protection, primary structure, and ground support equipment are presented. Technical assessments leading to the concept selection are analyzed. The tug mass properties, reliability, and safety assessments are included.
Overview of NASA Ultracapacitor Technology
NASA Technical Reports Server (NTRS)
Hill, Curtis W.
2017-01-01
NASA needed a lower mass, reliable, and safe medium for energy storage for ground-based and space applications. Existing industry electrochemical systems are limited in weight, charge rate, energy density, reliability, and safety. We chose a ceramic perovskite material for development, due to its high inherent dielectric properties, long history of use in the capacitor industry, and the safety of a solid state material.
Development and Validation of a Safety Climate Scale for Manufacturing Industry
Ghahramani, Abolfazl; Khalkhali, Hamid R.
2015-01-01
Background This paper describes the development of a scale for measuring safety climate. Methods This study was conducted in six manufacturing companies in Iran. The scale was developed by conducting a literature review about safety climate and constructing a question pool. The number of items was reduced to 71 after performing a screening process. Results The content validity analysis showed that 59 items had an excellent item content validity index (≥ 0.78) and content validity ratio (> 0.38). The exploratory factor analysis resulted in eight safety climate dimensions. The reliability value for the final 45-item scale was 0.96. The confirmatory factor analysis showed that the safety climate model is satisfactory. Conclusion This study produced a valid and reliable scale for measuring safety climate in manufacturing companies. PMID:26106508
Reliability history of the Apollo guidance computer
NASA Technical Reports Server (NTRS)
Hall, E. C.
1972-01-01
The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
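A minimal sketch of architecture-based reliability estimation in the spirit of this approach follows; the component names, transition probabilities, and per-component reliabilities are hypothetical, and a Monte Carlo walk over the Markov chain stands in for the paper's analytical, COSMIC-FFP-based treatment of the Forensic Toolkit Imager.

```python
import random

# Hypothetical component transition probabilities (each row sums to 1) and
# per-component reliabilities for one illustrative tool architecture.
transitions = {
    "parse":   {"analyze": 0.7, "report": 0.3},
    "analyze": {"parse": 0.1, "report": 0.9},
    "report":  {"done": 1.0},
}
component_reliability = {"parse": 0.999, "analyze": 0.995, "report": 0.998}

def run_once():
    """Walk the chain from 'parse' to 'done', multiplying per-visit reliabilities."""
    state, r = "parse", 1.0
    while state != "done":
        r *= component_reliability[state]
        state = random.choices(list(transitions[state]),
                               weights=list(transitions[state].values()))[0]
    return r

trials = 20_000
estimate = sum(run_once() for _ in range(trials)) / trials
print(f"Estimated tool reliability: {estimate:.5f}")
```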
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Rasmussen, Martin
2016-06-01
This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals: (1) integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients; (2) consideration of a PRA context; (3) incorporation of a solid psychological basis for operator performance; and (4) demonstration of a functional dynamic model of a plant upset condition and appropriate operator response. This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.
Veselov, E I
2011-01-01
The article deals with specifying systemic approach to ecologic safety of objects with radiation jeopardy. The authors presented stages of work and algorithm of decisions on preserving reliability of storage for radiation jeopardy waste. Findings are that providing ecologic safety can cover 3 approaches: complete exemption of radiation jeopardy waste, removal of more dangerous waste from present buildings and increasing reliability of prolonged localization of radiation jeopardy waste at the initial place. The systemic approach presented could be realized at various radiation jeopardy objects.
Wilson, Deborah E.
2011-01-01
The events and aftermath of September 11, 2001, accelerated a search for personnel reliability test measures to identify individuals who could pose a threat to our nation's security and safety. The creation and administration of a behavioral health screen for BSL-4 laboratory workers at the National Institutes of Health represents a pioneering effort to proactively build a BSL-4 safety culture promoting worker cohesiveness, trust, respect, and reliability with a balance of worker privacy and public safety. PMID:21361798
Liao, Joshua M; Etchegaray, Jason M; Williams, S Tyler; Berger, David H; Bell, Sigall K; Thomas, Eric J
2014-02-01
To develop and test the psychometric properties of a survey to measure students' perceptions about patient safety as observed on clinical rotations. In 2012, the authors surveyed 367 graduating fourth-year medical students at three U.S. MD-granting medical schools. They assessed the survey's reliability and construct and concurrent validity. They examined correlations between students' perceptions of organizational cultural factors, organizational patient safety measures, and students' intended safety behaviors. They also calculated percent positive scores for cultural factors. Two hundred twenty-eight students (62%) responded. Analyses identified five cultural factors (teamwork culture, safety culture, error disclosure culture, experiences with professionalism, and comfort expressing professional concerns) that had construct validity, concurrent validity, and good reliability (Cronbach alphas > 0.70). Across schools, percent positive scores for safety culture ranged from 28% (95% confidence interval [CI], 13%-43%) to 64% (30%-98%), while those for teamwork culture ranged from 47% (32%-62%) to 74% (66%-81%). They were low for error disclosure culture (range: 10% [0%-20%] to 27% [20%-35%]), experiences with professionalism (range: 7% [0%-15%] to 23% [16%-30%]), and comfort expressing professional concerns (range: 17% [5%-29%] to 38% [8%-69%]). Each cultural factor correlated positively with perceptions of overall patient safety as observed in clinical rotations (r = 0.37-0.69, P < .05) and at least one safety behavioral intent item. This study provided initial evidence for the survey's reliability and validity and illustrated its applicability for determining whether students' clinical experiences exemplify positive patient safety environments.
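For readers unfamiliar with the reliability statistic reported above, the following Python sketch shows how Cronbach's alpha is computed for one scale; the item responses are hypothetical and are not data from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical Likert responses (1-5) from eight respondents to a four-item factor.
responses = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```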
Reliability of COPVs Accounting for Margin of Safety on Design Burst
NASA Technical Reports Server (NTRS)
Murthy, Pappu L.N.
2012-01-01
In this paper, the stress rupture reliability of Carbon/Epoxy Composite Overwrapped Pressure Vessels (COPVs) is examined utilizing the classic Phoenix model and accounting for the differences between the design and the actual burst pressure, and the liner contribution effects. Stress rupture life primarily depends upon the fiber stress ratio which is defined as the ratio of stress in fibers at the maximum expected operating pressure to actual delivered fiber strength. The actual delivered fiber strength is calculated using the actual burst pressures of vessels established through burst tests. However, during the design phase the actual burst pressure is generally not known and to estimate the reliability of the vessels calculations are usually performed based upon the design burst pressure only. Since the design burst is lower than the actual burst, this process yields a much higher value for the stress ratio and consequently a conservative estimate for the reliability. Other complications arise due to the fact that the actual burst pressure and the liner contributions have inherent variability and therefore must be treated as random variables in order to compute the stress rupture reliability. Furthermore, the model parameters, which have to be established based on stress rupture tests of subscale vessels or coupons, have significant variability as well due to limited available data and hence must be properly accounted for. In this work an assessment of reliability of COPVs including both parameter uncertainties and physical variability inherent in liner and overwrap material behavior is made and estimates are provided in terms of degree of uncertainty in the actual burst pressure and the liner load sharing.
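The Python sketch below only illustrates the abstract's point that computing the fiber stress ratio from the design burst pressure is conservative relative to using the uncertain actual burst pressure; all pressures and the distribution assumed for the actual burst are hypothetical, fiber stress is taken as roughly proportional to pressure, and the Phoenix stress rupture model itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)

meop = 4500.0           # maximum expected operating pressure, psi (assumed)
design_burst = 9000.0   # design burst pressure, psi (assumed)

# Treat the actual burst pressure as a random variable (assumed distribution);
# a liner contribution would further adjust the fiber stress at MEOP.
actual_burst = rng.normal(loc=10500.0, scale=300.0, size=100_000)

# Fiber stress ratio, taking fiber stress as roughly proportional to pressure.
sr_design = meop / design_burst
sr_actual = meop / actual_burst

print(f"Stress ratio from design burst only: {sr_design:.3f}")
print(f"Mean stress ratio from actual burst:  {sr_actual.mean():.3f}")
print(f"95th-percentile (conservative) value: {np.percentile(sr_actual, 95):.3f}")
```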
Advanced Health Management System for the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Davidson, Matt; Stephens, John; Rodela, Chris
2006-01-01
Pratt & Whitney Rocketdyne, Inc., in cooperation with NASA-Marshall Space Flight Center (MSFC), has developed a new Advanced Health Management System (AHMS) controller for the Space Shuttle Main Engine (SSME) that will increase the probability of successfully placing the shuttle into the intended orbit and increase the safety of Space Transportation System (STS) launches. The AHMS is an upgrade of the current Block II engine controller whose primary component is an improved vibration monitoring system called the Real-Time Vibration Monitoring System (RTVMS) that can effectively and reliably monitor the state of the high pressure turbomachinery and provide engine protection through a new synchronous vibration redline which enables engine shutdown if the vibration exceeds predetermined thresholds. The introduction of this system required improvements and modifications to the Block II controller, such as redesigning the Digital Computer Unit (DCU) memory and the Flight Accelerometer Safety Cut-Off System (FASCOS) circuitry, eliminating the existing memory retention batteries, installation of Digital Signal Processor (DSP) technology, and installation of a High Speed Serial Interface (HSSI) with accompanying outside-world connectors. Test stand hot-fire testing along with lab testing have verified successful implementation, and the upgrade is expected to reduce the probability of catastrophic engine failures during the shuttle ascent phase and improve safety by about 23% according to the Quantitative Risk Assessment System (QRAS), leading to a safer and more reliable SSME.
A Briefing on Metrics and Risks for Autonomous Decision-Making in Aerospace Applications
NASA Technical Reports Server (NTRS)
Frost, Susan; Goebel, Kai Frank; Galvan, Jose Ramon
2012-01-01
Significant technology advances will enable future aerospace systems to safely and reliably make decisions autonomously, or without human interaction. The decision-making may result in actions that enable an aircraft or spacecraft in an off-nominal state or with slightly degraded components to achieve mission performance and safety goals while reducing or avoiding damage to the aircraft or spacecraft. Some key technology enablers for autonomous decision-making include: a continuous state awareness through the maturation of the prognostics health management field, novel sensor development, and the considerable gains made in computation power and data processing bandwidth versus system size. Sophisticated algorithms and physics based models coupled with these technological advances allow reliable assessment of a system, subsystem, or components. Decisions that balance mission objectives and constraints with remaining useful life predictions can be made autonomously to maintain safety requirements, optimal performance, and ensure mission objectives. This autonomous approach to decision-making will come with new risks and benefits, some of which will be examined in this paper. To start, an account of previous work to categorize or quantify autonomy in aerospace systems will be presented. In addition, a survey of perceived risks in autonomous decision-making in the context of piloted aircraft and remotely piloted or completely autonomous unmanned autonomous systems (UAS) will be presented based on interviews that were conducted with individuals from industry, academia, and government.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
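A minimal sketch of the classical first-order safety index for a normally distributed strength/stress pair is shown below; the means and standard deviations are hypothetical, and the mapping from reliability back to a deterministic design factor is only indicated in a comment.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical normally distributed strength R and stress S (units arbitrary).
mu_R, sigma_R = 60.0, 5.0
mu_S, sigma_S = 40.0, 6.0

# First-order safety index for the limit state g = R - S.
beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)
reliability = norm.cdf(beta)          # probability that strength exceeds stress
print(f"Safety index beta = {beta:.2f}, reliability = {reliability:.5f}")

# A design factor could then be chosen so that the deterministic check
# (strength / design_factor >= stress) meets the specified reliability,
# which is the role the conventional safety factor plays in stress analyses.
```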
Elaboration and Validation of the Medication Prescription Safety Checklist 1
Pires, Aline de Oliveira Meireles; Ferreira, Maria Beatriz Guimarães; do Nascimento, Kleiton Gonçalves; Felix, Márcia Marques dos Santos; Pires, Patrícia da Silva; Barbosa, Maria Helena
2017-01-01
ABSTRACT Objective: to elaborate and validate a checklist to identify compliance with the recommendations for the structure of medication prescriptions, based on the Protocol of the Ministry of Health and the Brazilian Health Surveillance Agency. Method: methodological research, conducted through the validation and reliability analysis process, using a sample of 27 electronic prescriptions. Results: the analyses confirmed the content validity and reliability of the tool. The content validity, obtained by expert assessment, was considered satisfactory as it covered items that represent the compliance with the recommendations regarding the structure of the medication prescriptions. The reliability, assessed through interrater agreement, was excellent (ICC=1.00) and showed perfect agreement (K=1.00). Conclusion: the Medication Prescription Safety Checklist showed to be a valid and reliable tool for the group studied. We hope that this study can contribute to the prevention of adverse events, as well as to the improvement of care quality and safety in medication use. PMID:28793128
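As a small illustration of the interrater agreement statistic reported above, the following Python sketch computes Cohen's kappa for one checklist item using scikit-learn; the rater data are hypothetical and not taken from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical compliance ratings (1 = compliant, 0 = non-compliant) assigned by
# two raters to the same set of prescriptions for a single checklist item.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")   # 1.00 indicates perfect agreement, as reported
```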
Brunner, Alexander; Gühring, Markus; Schmälzle, Traude; Weise, Kuno; Badke, Andreas
2009-01-01
Evaluation of the kyphosis angle in thoracic and lumbar burst fractures is often used to indicate surgical procedures. The kyphosis angle can be measured as vertebral, segmental or local kyphosis according to the method of Cobb. The vertebral, segmental and local kyphosis according to the method of Cobb were measured on 120 lateral X-rays and sagittal computed tomographies of 60 thoracic and 60 lumbar burst fractures by 3 independent observers on 2 separate occasions. Osteoporotic fractures were excluded. The intra- and interobserver reliability of these angles in X-ray and computed tomography was evaluated using the intraclass correlation coefficient (ICC). The segmental kyphosis showed the highest reproducibility, followed by the vertebral kyphosis. For thoracic fractures, the segmental kyphosis showed “excellent” inter- and intraobserver reliabilities in X-ray (ICC = 0.826, 0.802), and for lumbar fractures “good” to “excellent” inter- and intraobserver reliabilities (ICC = 0.790, 0.803). In computed tomography, the segmental kyphosis showed “excellent” inter- and intraobserver reliabilities (ICC = 0.824, 0.801) for thoracic and “excellent” inter- and intraobserver reliabilities (ICC = 0.874, 0.835) for lumbar fractures. Comparing the two diagnostic work-ups (X-ray and computed tomography), significant differences were found in interobserver reliabilities for the vertebral kyphosis measured in lumbar fracture X-rays (p = 0.035) and in interobserver reliabilities for the local kyphosis measured in thoracic fracture X-rays (p = 0.010). Comparing the two fracture localizations (thoracic and lumbar), significant differences were found only in interobserver reliabilities for the local kyphosis measured in computed tomography (p = 0.045) and in intraobserver reliabilities for the vertebral kyphosis measured in X-rays (p = 0.024). “Good” to “excellent” inter- and intraobserver reliabilities for the vertebral, segmental and local kyphosis in X-ray make these angles a helpful tool for indicating surgical procedures. For practical use in lateral X-ray, we emphasize determination of the segmental kyphosis, because this angle has the highest reproducibility. “Good” to “excellent” inter- and intraobserver reliabilities for these three angles were also found in computed tomography; therefore, the use of these three angles also seems generally possible in computed tomography. For a direct correlation of the results in lateral X-ray and in computed tomography, further studies are needed. PMID:19953277
Dynamical Approach Study of Spurious Numerics in Nonlinear Computations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi (Technical Monitor)
2002-01-01
The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are corner stones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.
Hsin, Kun-Yi; Ghosh, Samik; Kitano, Hiroaki
2013-01-01
Increased availability of bioinformatics resources is creating opportunities for the application of network pharmacology to predict drug effects and toxicity resulting from multi-target interactions. Here we present a high-precision computational prediction approach that combines two elaborately built machine learning systems and multiple molecular docking tools to assess binding potentials of a test compound against proteins involved in a complex molecular network. One of the two machine learning systems is a re-scoring function to evaluate binding modes generated by docking tools. The second is a binding mode selection function to identify the most predictive binding mode. Results from a series of benchmark validations and a case study show that this approach surpasses the prediction reliability of other techniques and that it also identifies either primary or off-targets of kinase inhibitors. Integrating this approach with molecular network maps makes it possible to address drug safety issues by comprehensively investigating network-dependent effects of a drug or drug candidate. PMID:24391846
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.
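The combinatorial reliability equations themselves are not given in the abstract; the Python sketch below merely illustrates the kind of redundancy calculation such a diagram encodes, here the probability that at least two of three redundant channels survive, with hypothetical per-channel failure probabilities rather than the F-8 system's actual figures.

```python
from itertools import combinations

# Hypothetical per-channel failure probabilities over the mission.
q = [1.0e-4, 1.0e-4, 1.2e-4]
p = [1 - qi for qi in q]

def prob_at_least_two_of_three(p):
    """Sum the probabilities of all channel outcomes with two or more survivors."""
    all_three = p[0] * p[1] * p[2]
    exactly_two = sum(
        p[i] * p[j] * (1 - p[k])
        for i, j in combinations(range(3), 2)
        for k in set(range(3)) - {i, j}
    )
    return all_three + exactly_two

r = prob_at_least_two_of_three(p)
print(f"P(flight control function retained) = {r:.12f}")
print(f"P(loss of function) = {1 - r:.3e}")
```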
Computer-based training for safety: comparing methods with older and younger workers.
Wallen, Erik S; Mulloy, Karen B
2006-01-01
Computer-based safety training is becoming more common and is being delivered to an increasingly aging workforce. Aging results in a number of changes that make it more difficult to learn from certain types of computer-based training. Instructional designs derived from cognitive learning theories may overcome some of these difficulties. Three versions of computer-based respiratory safety training were shown to older and younger workers who then took a high and a low level learning test. Younger workers did better overall. Both older and younger workers did best with the version containing text with pictures and audio narration. Computer-based training with pictures and audio narration may be beneficial for workers over 45 years of age. Computer-based safety training has advantages but workers of different ages may benefit differently. Computer-based safety programs should be designed and selected based on their ability to effectively train older as well as younger learners.
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
Design Strategy for a Formally Verified Reliable Computing Platform
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.
1991-01-01
This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.
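As an illustration of the reliability-versus-voting-complexity tradeoff mentioned above, the following Python sketch evaluates the standard triple-modular-redundancy formula with hypothetical channel and voter reliabilities; it is not a model of the platform described in the paper.

```python
# Hypothetical per-channel and voter reliabilities over a mission interval.
r_channel = 0.999
r_voter = 0.99999

# Simplex: a single channel.
r_simplex = r_channel

# Triple modular redundancy: majority (2-of-3) voting behind a voter.
r_tmr = (3 * r_channel**2 - 2 * r_channel**3) * r_voter

print(f"Simplex reliability: {r_simplex:.6f}")
print(f"TMR reliability:     {r_tmr:.6f}")
# The benefit of voting depends on the voter and recovery mechanisms being
# markedly more reliable than a channel, which is one face of the tradeoff
# between reliability and voting complexity noted in the abstract.
```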
High reliability and implications for nursing leaders.
Riley, William
2009-03-01
To review high reliability theory and discuss its implications for the nursing leader. A high reliability organization (HRO) is considered to be one that has measurable, near-perfect performance for quality and safety. The author reviews the literature, discusses research findings that contribute to improving reliability in health care organizations, and makes five recommendations for how nursing leaders can create high reliability organizations. Health care is not a safe industry, and unintended patient harm occurs at epidemic levels. Health care can learn from high reliability theory and practice developed in other high-risk industries. Viewed by HRO standards, unintended patient injury in health care is excessively high and quality is distressingly low. HRO theory and practice can be successfully applied in health care using advanced interdisciplinary teamwork training and deliberate process design techniques. Nursing has a primary leadership function in ensuring patient safety and achieving high quality in health care organizations. Learning HRO theory and methods for achieving high reliability is a foremost opportunity for nursing leaders.
18 CFR 292.308 - Standards for operating reliability.
Code of Federal Regulations, 2010 CFR
2010-04-01
... reliability. 292.308 Section 292.308 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... SMALL POWER PRODUCTION AND COGENERATION Arrangements Between Electric Utilities and Qualifying... may establish reasonable standards to ensure system safety and reliability of interconnected...
Fog-computing concept usage as means to enhance information and control system reliability
NASA Astrophysics Data System (ADS)
Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya
2018-05-01
This paper focuses on the reliability issue of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog computing is to shift computations to the fog layer of the network and thus decrease the workload of the communication environment and data processing components. As for ICS, the workload can also be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for one using elements of the fog-computing concept. The paper contains some models, selected simulation results and conclusions about the prospects of fog computing as a means to enhance ICS reliability.
The Use Of Computational Human Performance Modeling As Task Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; David Gertman
2012-07-01
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
NASA Technical Reports Server (NTRS)
1972-01-01
The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.
ERIC Educational Resources Information Center
Ballantine, Joan A.; McCourt Larres, Patricia; Oyelere, Peter
2007-01-01
This study evaluates the reliability of self-assessment as a measure of computer competence. This evaluation is carried out in response to recent research which has employed self-reported ratings as the sole indicator of students' computer competence. To evaluate the reliability of self-assessed computer competence, the scores achieved by students…
NASA Technical Reports Server (NTRS)
1974-01-01
System design and performance of the Skylab Airlock Module and Payload Shroud are presented for the communication and caution and warning systems. Crew station and storage, crew trainers, experiments, ground support equipment, and system support activities are also reviewed. Other areas documented include the reliability and safety programs, test philosophy, engineering project management, and mission operations support.
Technology Overview for Advanced Aircraft Armament System Program.
1981-05-01
availability of methods or systems for improving stores and armament safety. Of particular importance are aspects of safety involving hazards analysis ...flutter virtually insensitive to inertia and center-of-gravity location of store - simplifies and reduces analysis and testing required to flutter-clear...status. Nearly every existing reliability analysis and discipline that promised a positive return on reliability performance was drawn out, dusted...
Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting
Kristensen, Solvejg; Sabroe, Svend; Bartels, Paul; Mainz, Jan; Christensen, Karl Bang
2015-01-01
Purpose Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations being successful in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee’s perceptions of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. Materials and methods The SAQ was translated and adapted for the Danish setting (SAQ-DK). The SAQ-DK was distributed to 1,263 staff members from 31 in- and outpatient units (clinical areas) across five somatic and one psychiatric hospitals through meeting administration, hand delivery, and mailing. Construct validity and reliability were tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach’s alpha (α), and item and subscale scores. Results Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from the confirmatory factor analysis showed: χ2=1496.76, P<0.001, CFI 0.901, RMSEA (90% CI) 0.053 (0.050–0.056), Probability RMSEA (p close)=0.057. Inter-scale correlations between the factors showed moderate-to-high correlations. The scale stress recognition had significant negative correlations with each of the other scales. Questionnaire reliability was high (α=0.89), and scale reliability ranged from α=0.70 to α=0.86 for the six scales. Proportions of participants with a positive attitude to each of the six SAQ scales did not differ between the somatic and psychiatric health care staff. Substantial variability at the unit level in all six scale mean scores was found within the somatic and the psychiatric samples. Conclusion SAQ-DK showed good construct validity and internal consistency reliability. SAQ-DK is potentially a useful tool for evaluating perceptions of patient safety culture in Danish hospitals. PMID:25674015
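The scale reliabilities reported above are Cronbach's alpha values. For readers unfamiliar with the statistic, a minimal computation is sketched below; the item matrix is synthetic, not SAQ-DK data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Synthetic 5-point Likert responses (6 respondents x 4 items) -- illustrative only.
demo = [[4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 4, 3, 3]]
print(f"alpha = {cronbach_alpha(demo):.2f}")
```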
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
Stoyanova, Rumyana; Dimova, Rositsa; Tarnovska, Miglena; Boeva, Tatyana
2018-05-20
Patient safety (PS) is one of the essential elements of health care quality and a priority of healthcare systems in most countries. Thus the creation of validated instruments and the implementation of systems that measure patient safety are considered to be of great importance worldwide. The present paper aims to illustrate the process of linguistic validation, cross-cultural verification and adaptation of the Bulgarian version of the Hospital Survey on Patient Safety Culture (B-HSOPSC) and its test-retest reliability. The study design is cross-sectional. The HSOPSC questionnaire consists of 42 questions, grouped in 12 different subscales that measure patient safety culture. Internal consistency was assessed using Cronbach's alpha. The Wilcoxon signed-rank test and the split-half method were used; the Spearman-Brown coefficient was calculated. The overall Cronbach's alpha for B-HSOPSC is 0.918. Subscales 7 (Staffing) and 12 (Overall perceptions of safety) had the lowest coefficients. The high reliability of the instrument was confirmed by the split-half method (0.97) and the ICC coefficient (0.95). The lowest values of the Spearman-Brown coefficients were found in items A13 and A14. The study offers an analysis of the results of the linguistic validation of the B-HSOPSC and its test-retest reliability. The psychometric characteristics of the questions revealed good validity and reliability, except for two questions. In the future, the instrument will be administered to the target population in the main study so that the psychometric properties of the instrument can be verified.
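The split-half figure quoted above is typically obtained by correlating two halves of the instrument and applying the Spearman-Brown correction for full length. A minimal sketch with synthetic data (not B-HSOPSC responses) is given below.

```python
import numpy as np

def split_half_reliability(scores):
    """Split-half reliability with the Spearman-Brown correction.
    `scores` is a (respondents x items) matrix; items are split odd/even."""
    scores = np.asarray(scores, dtype=float)
    half_a = scores[:, 0::2].sum(axis=1)
    half_b = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2.0 * r / (1.0 + r)            # Spearman-Brown prophecy formula

# Synthetic responses, illustrative only.
demo = np.random.default_rng(0).integers(1, 6, size=(40, 10))
print(f"split-half reliability = {split_half_reliability(demo):.2f}")
```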
Development of a Home Food Safety Questionnaire Based on the PRECEDE Model: Targeting Iranian Women.
Esfarjani, Fatemeh; Hosseini, Hedayat; Mohammadi-Nasrabadi, Fatemeh; Abadi, Alireza; Roustaee, Roshanak; Alikhanian, Haleh; Khalafi, Marjan; Kiaee, Mohammad Farhad; Khaksar, Ramin
2016-12-01
Food safety is an essential public health issue for all countries. This study was the first attempt to design and develop a home food safety questionnaire (HFSQ), in the conceptual framework of the PRECEDE (predisposing, reinforcing, and enabling constructs in educational diagnosis and evaluation) model, and to assess its validity and reliability. The HFSQ was developed by reviewing electronic databases and 12 focus group discussions with 96 women volunteers. Ten panel members reviewed the questionnaire, and the content validity ratio and content validity index were computed. Twenty women completed the HFSQ, and face validity was assessed. Women who were responsible for food handling in their households (n = 320) were selected randomly from 10 health centers and completed the HFSQ based on the PRECEDE model. To examine the construct validity, a principal components factor analysis with varimax rotation was used. Internal consistency was determined with Cronbach's α. Reproducibility was checked by Kendall's τ after 4 weeks with 30 women. The developed HFSQ was considered acceptable with a content validity index of 0.88. Face validity revealed that 95% of the participants understood the questions and found them easy to answer, and 90% confirmed the appearance of the HFSQ and declared the layout acceptable. Principal component factor analysis revealed that the HFSQ could explain 33.7, 55.3, 34.8, and 60.0% of the total variance of the predisposing, reinforcing, practice, and enabling components, respectively. Cronbach's α was acceptable at 0.73. For Kendall's τc, r = 0.89, with a 95% confidence interval of 0.85 to 0.93. The HFSQ developed based on the PRECEDE model met the standards of acceptable reliability and validity, which can be generalized to a wider population. These results can provide information for the development of effective communication strategies to promote home food safety.
Cork, Randy D.; Detmer, William M.; Friedman, Charles P.
1998-01-01
This paper describes details of four scales of a questionnaire—“Computers in Medical Care”—measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and degree of association of each item with the attribute of interest were determined by principal components factor analysis with orthogonal varimax rotation. Weakly associated items (factor loading <.40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of .79, .91, and .86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality with reliability of .81 and the second demand for usability with reliability of .69. There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349
Assessment of Safety Standards for Automotive Electronic Control Systems
DOT National Transportation Integrated Search
2016-06-01
This report summarizes the results of a study that assessed and compared six industry and government safety standards relevant to the safety and reliability of automotive electronic control systems. These standards include ISO 26262 (Road Vehicles - ...
Overview of Design, Lifecycle, and Safety for Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This document describes the need and justification for the development of a design guide for safety-relevant computer-based systems. This document also makes a contribution toward the design guide by presenting an overview of computer-based systems design, lifecycle, and safety.
Singer, Sara; Meterko, Mark; Baker, Laurence; Gaba, David; Falwell, Alyson; Rosen, Amy
2007-10-01
To describe the development of an instrument for assessing workforce perceptions of hospital safety culture and to assess its reliability and validity. Primary data collected between March 2004 and May 2005. Personnel from 105 U.S. hospitals completed a 38-item paper-and-pencil survey. We received 21,496 completed questionnaires, representing a 51 percent response rate. Based on review of existing safety climate surveys, we developed a list of key topics pertinent to maintaining a culture of safety in high-reliability organizations. We developed a draft questionnaire to address these topics and pilot tested it in four preliminary studies of hospital personnel. We modified the questionnaire based on experience and respondent feedback, and distributed the revised version to 42,249 hospital workers. We randomly divided respondents into derivation and validation samples. We applied exploratory factor analysis to responses in the derivation sample. We used those results to create scales in the validation sample, which we subjected to multitrait analysis (MTA). We identified nine constructs: three organizational factors, two unit factors, three individual factors, and one additional factor. Constructs demonstrated substantial convergent and discriminant validity in the MTA. Cronbach's alpha coefficients ranged from 0.50 to 0.89. It is possible to measure key salient features of hospital safety climate using a valid and reliable 38-item survey and appropriate hospital sample sizes. This instrument may be used in further studies to better understand the impact of safety climate on patient safety outcomes.
Developing a model for hospital inherent safety assessment: Conceptualization and validation.
Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed
2018-01-01
Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services wherein a bundle of facilities, equipment, and human resources exists, is of significant importance. The present research aims at developing a model for assessing hospitals' safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 examples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationship between variables and factors was confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and structural equation modeling (SEM) with the use of Smart-PLS. R-square and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, in that order. Moderation, simplification, and substitution thus carry more weight in inherent safety, while minimization has the least weight, which could be due to its definition as minimizing the risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldemir, Tunc; Denning, Richard; Catalyurek, Umit
Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
Structural reliability assessment capability in NESSUS
NASA Astrophysics Data System (ADS)
Millwater, H.; Wu, Y.-T.
1992-07-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin
2017-01-01
The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
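The adaptive-ROI and sunlight-compensation details of the cited algorithm are not reproducible from the abstract. As a hedged illustration of the general idea, template matching restricted to a search window around the marker's last known location can be sketched with OpenCV; boundary handling and the calibration from pixels to millimetres are omitted, and the video path, template patch, and initial location are placeholders you would supply.

```python
# Sketch of vision-based marker tracking with a region of interest (ROI) around
# the last known marker position. Illustrates the general approach only; the
# adaptive-ROI and lighting-robustness logic of the cited study is not reproduced.
import cv2
import numpy as np

def track_marker(video_path, template, first_xy, roi_half=60):
    """template: grayscale image patch of the marker; first_xy: initial (x, y) in pixels."""
    cap = cv2.VideoCapture(video_path)
    x, y = first_xy
    th, tw = template.shape[:2]
    trajectory = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Search only inside a window centred on the previous location.
        x0, y0 = max(x - roi_half, 0), max(y - roi_half, 0)
        roi = gray[y0:y0 + 2 * roi_half, x0:x0 + 2 * roi_half]
        res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        x, y = x0 + max_loc[0] + tw // 2, y0 + max_loc[1] + th // 2
        trajectory.append((x, y))
    cap.release()
    return np.array(trajectory)   # pixel trajectory; scale to mm with a known marker size
```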
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
HDMR methods to assess reliability in slope stability analyses
NASA Astrophysics Data System (ADS)
Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna
2014-05-01
Stability analyses of complex rock-soil deposits must be tackled considering the complex structure of discontinuities within the rock mass and embedded soil layers. These materials are characterized by a high variability in physical and mechanical properties. Thus, to calculate the slope safety factor in stability analyses two issues must be taken into account: 1) the uncertainties related to the structural setting of the rock-slope mass and 2) the variability in mechanical properties of soils and rocks. High Dimensional Model Representation (HDMR) (Chowdhury et al. 2009; Chowdhury and Rao 2010) can be used to compute the reliability index within complex rock-soil slopes when numerous random variables with high coefficients of variation are considered. HDMR implements the inverse reliability analysis, meaning that the unknown design parameters are sought provided that prescribed reliability index values are attained. Such an approach uses implicit response functions according to the Response Surface Method (RSM). The simple RSM can be efficiently applied when fewer than four random variables are considered; as the number of variables increases, the efficiency in reliability index estimation decreases due to the great amount of calculations. Therefore, the HDMR method is used to improve the computational accuracy. In this study, sliding mechanisms in the Polish Flysch Carpathians have been studied by means of HDMR. The southern part of Poland, where the Carpathian Mountains are located, is characterized by a rather complicated sedimentary pattern of flysch rocky-soil deposits that can be simplified into three main categories: (1) normal flysch, consisting of adjacent sandstone and shale beds of approximately equal thickness, (2) shale flysch, where shale beds are thicker than adjacent sandstone beds, and (3) sandstone flysch, where the opposite holds. Landslides occur in all flysch deposit types; thus, some configurations of possible unstable settings (within fractured rocky-soil masses) resulting in sliding mechanisms have been investigated in this study. The reliability index values obtained from the HDMR method have been compared with conventional approaches such as neural networks; the efficiency of HDMR is shown in the case studied. References Chowdhury R., Rao B.N. and Prasad A.M. 2009. High-dimensional model representation for structural reliability analysis. Commun. Numer. Meth. Engng, 25: 301-337. Chowdhury R. and Rao B. 2010. Probabilistic Stability Assessment of Slopes Using High Dimensional Model Representation. Computers and Geotechnics, 37: 876-884.
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)
2001-01-01
This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in distribution type except for a change in distribution from Gaussian to Weibull for the centrifugal load. The sensitivity factors determined to be most dominant were the centrifugal loading and the initial strength of the material. These two sensitivity factors were influenced most by a change in distribution type from Gaussian to Weibull. The education portion of this report describes short-term and long-term educational objectives. Such objectives serve to integrate research and education components of this project resulting in opportunities for ethnic minority students, principally Hispanic. The primary vehicle to facilitate such integration was the teaching of two probabilistic finite element method courses to undergraduate engineering students in the summers of 1998 and 1999.
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is almost certain that space exploration will partially experience known or unknown risks similar to those embedded in the Apollo, Shuttle or Station missions unless something alters how NASA will perceive and manage safety and reliability.
Haugen, Arvid S; Søfteland, Eirik; Eide, Geir E; Nortvedt, Monica W; Aase, Karina; Harthug, Stig
2010-09-22
How hospital health care personnel perceive safety climate has been assessed in several countries by using the Hospital Survey on Patient Safety (HSOPS). Few studies have examined safety climate factors in surgical departments per se. This study examined the psychometric properties of a Norwegian translation of the HSOPS and also compared safety climate factors from a surgical setting to hospitals in the United States, the Netherlands and Norway. This survey included 575 surgical personnel in Haukeland University Hospital in Bergen, an 1100-bed tertiary hospital in western Norway: surgeons, operating theatre nurses, anaesthesiologists, nurse anaesthetists and ancillary personnel. Of these, 358 returned the HSOPS, resulting in a 62% response rate. We used factor analysis to examine the applicability of the HSOPS factor structure in operating theatre settings. We also performed psychometric analysis for internal consistency and construct validity. In addition, we compared the percentage of average positive responses on the patient safety climate factors with results of the US HSOPS 2010 comparative database report. The professions differed in their perception of patient safety climate, with anaesthesia personnel having the highest mean scores. Factor analysis using the original 12-factor model of the HSOPS resulted in low reliability scores (r = 0.6) for two factors: "adequate staffing" and "organizational learning and continuous improvement". For the remaining factors, reliability was ≥ 0.7. Reliability scores improved to r = 0.8 by combining the factors "organizational learning and continuous improvement" and "feedback and communication about error" into one six-item factor, supporting an 11-factor model. The inter-item correlations were found to be satisfactory. The psychometric properties of the questionnaire need further investigation before it can be regarded as reliable in surgical environments. The operating theatre personnel perceived their hospital's patient safety climate far more negatively than the health care personnel in hospitals in the United States, with perceptions more comparable to those of health care personnel in hospitals in the Netherlands. In fact, the surgical personnel may perceive that patient safety climate receives less focus in their hospital, at least compared with the results from hospitals in the United States.
A Methodology for Quantifying Certain Design Requirements During the Design Phase
NASA Technical Reports Server (NTRS)
Adams, Timothy; Rhodes, Russel
2005-01-01
A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost. Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
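The three models named in the abstract can be written down compactly; the sketch below uses SciPy and illustrative numbers only, and is not drawn from the NASA methodology itself.

```python
# Sketch of the three models named above, with illustrative numbers only.
from math import prod
from scipy.stats import binom, poisson

# (1) Binomial, greater-than-or-equal-to case: probability that at least
#     k of n redundant units succeed, each with success probability p.
n, k, p = 4, 2, 0.95
r_binomial = binom.sf(k - 1, n, p)          # P(successes >= k)

# Zero-fail special case: all n units must work; for small failure probability q
# this approximates the constant-failure-rate (exponential) model.
q = 1.0 - p
r_zero_fail = (1.0 - q) ** n

# (2) Series-system reliability: every element must work.
element_reliabilities = [0.999, 0.995, 0.990]
r_series = prod(element_reliabilities)

# (3) Poisson, less-than-or-equal-to case: probability of at most m maintenance
#     demands when the expected number over the mission is mu.
mu, m = 1.2, 2
p_at_most_m = poisson.cdf(m, mu)

print(f"R(>=k of n)      = {r_binomial:.4f}")
print(f"R(zero failures) = {r_zero_fail:.4f}")
print(f"R(series)        = {r_series:.4f}")
print(f"P(<= {m} demands)  = {p_at_most_m:.4f}")
```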
Phase I of the Near Term Hybrid Passenger Vehicle Development Program. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-10-01
The results of Phase I of the Near-Term Hybrid Vehicle Program are summarized. This phase of the program was a study leading to the preliminary design of a 5-passenger hybrid vehicle utilizing two energy sources (electricity and gasoline/diesel fuel) to minimize petroleum usage on a fleet basis. This report presents the following: overall summary of the Phase I activity; summary of the individual tasks; summary of the hybrid vehicle design; summary of the alternative design options; summary of the computer simulations; summary of the economic analysis; summary of the maintenance and reliability considerations; summary of the design for crash safety; and bibliography.
The communicative construction of safety in wildland firefighting
Jody Jahn; Linda L. Putnam; Anne E. Black
2012-01-01
This document is a summary of a mixed methods dissertation that examined the communicative construction of safety in wildland firefighting. For the dissertation, I used a two-study mixed methods approach, examining the communicative accomplishment of safety from two perspectives: high reliability organizing (Weick, Sutcliffe, & Obstfeld, 1999), and safety climate (...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Diego Mandelli; Ronald L. Boring
2015-07-01
The United States Department of Energy is sponsoring the Light Water Reactor Sustainability program, which has the overall objective of supporting the near-term and the extended operation of commercial nuclear power plants. One key research and development (R&D) area in this program is the Risk-Informed Safety Margin Characterization pathway, which combines probabilistic risk simulation with thermohydraulic simulation codes to define and manage safety margins. The R&D efforts to date, however, have not included robust simulations of human operators, and how the reliability of human performance or lack thereof (i.e., human errors) can affect risk-margins and plant performance. This paper describes current and planned research efforts to address the absence of robust human reliability simulations and thereby increase the fidelity of simulated accident scenarios.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
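The paper's exact model is not given in the abstract, but the role of coverage can be illustrated with a standard duplex expression in which a single failure is survived only if it is detected, isolated, and recovered from with probability c. The failure rate and mission time below are assumed values for demonstration.

```python
# Illustrative sketch (not the paper's model): reliability of a duplex computer
# where a single unit failure is survived only if it is covered with probability c.
import math

def duplex_reliability(lam, t, c):
    """lam: unit failure rate (per hour), t: mission time (hours), c: coverage."""
    r = math.exp(-lam * t)                  # single-unit reliability
    return r * r + 2.0 * c * r * (1.0 - r)  # both units work, or one covered failure

lam, t = 1e-4, 10.0                         # assumed failure rate and mission time
for c in (0.90, 0.99, 0.999, 1.0):
    print(f"coverage {c:5.3f} -> unreliability {1.0 - duplex_reliability(lam, t, c):.2e}")
```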
10 CFR 600.342 - Retention and access requirements for records.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., reliability, and security of the original computer data. Recipients must also maintain an audit trail... related to computer usage chargeback rates), along with their supporting records, must be retained for a 3... maintained on a computer, recipients must retain the computer data on a reliable medium for the time periods...
10 CFR 600.342 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., reliability, and security of the original computer data. Recipients must also maintain an audit trail... related to computer usage chargeback rates), along with their supporting records, must be retained for a 3... maintained on a computer, recipients must retain the computer data on a reliable medium for the time periods...
Design of a modular digital computer system
NASA Technical Reports Server (NTRS)
1973-01-01
A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., ``Verification, Validation, Reviews, and Audits for Digital Computer Software used in Safety Systems of Nuclear... NRC regulations promoting the development of, and compliance with, software verification and...
Cropper, Douglas P; Harb, Nidal H; Said, Patricia A; Lemke, Jon H; Shammas, Nicolas W
2018-04-01
We hypothesize that implementation of a safety program based on high reliability organization principles will reduce serious safety events (SSE). The safety program focused on 7 essential elements: (a) safety rounding, (b) safety oversight teams, (c) safety huddles, (d) safety coaches, (e) good catches/safety heroes, (f) safety education, and (g) red rule. An educational curriculum was implemented focusing on changing high-risk behaviors and implementing critical safety policies. All unusual occurrences were captured in the Midas system and investigated by risk specialists, the safety officer, and the chief medical officer. A multidepartmental committee evaluated these events, and a root cause analysis (RCA) was performed. Events were tabulated, and serious safety events (SSEs) were recorded and plotted over time. Safety success stories (SSSs) were also evaluated over time. A steady drop in SSEs was seen over 9 years. Also a rise in SSSs was evident, reflecting staff engagement in the program. The parallel change in SSEs and SSSs alongside the implementation of various safety interventions strongly suggests that the program was successful in achieving its goals. A safety program based on high-reliability organization principles and made a core value of the institution can have a significant positive impact on reducing SSEs. © 2018 American Society for Healthcare Risk Management of the American Hospital Association.
Stern, RJ; Fernandez, A; Jacobs, EA; Neilands, TB; Weech-Maldonado, R; Quan, J; Carle, A; Seligman, HK
2012-01-01
Background Providing culturally competent care shows promise as a mechanism to reduce healthcare inequalities. Until the recent development of the CAHPS Cultural Competency Item Set (CAHPS-CC), no measures capturing patient-level experiences with culturally competent care have been suitable for broad-scale administration. Methods We performed confirmatory factor analysis and internal consistency reliability analysis of CAHPS-CC among patients with type 2 diabetes (n=600) receiving primary care in safety-net clinics. CAHPS-CC domains were also correlated with global physician ratings. Results A 7-factor model demonstrated satisfactory fit (χ2(231)=484.34, p<.0001) with significant factor loadings at p<.05. Three domains showed excellent reliability – Doctor Communication-Positive Behaviors (α=.82), Trust (α=.77), and Doctor Communication-Health Promotion (α=.72). Four domains showed inadequate reliability either among Spanish speakers or overall (overall reliabilities listed): Doctor Communication-Negative Behaviors (α=.54), Equitable Treatment (α=.69), Doctor Communication-Alternative Medicine (α=.52), and Shared Decision-Making (α=.51). CAHPS-CC domains were positively and significantly correlated with global physician rating. Conclusions Select CAHPS-CC domains are suitable for broad-scale administration among safety-net patients. Those domains may be used to target quality-improvement efforts focused on providing culturally competent care in safety-net settings. PMID:22895231
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a systematic and nonlinear problem, MPP-based reliability analyses are not suitable. A sampling approach such as a Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experiences and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil property and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation and maintain satisfactory variation.
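The HEC-RAS coupling and the support vector machine response surface are beyond a short sketch, but the final step of such an analysis, estimating a failure probability by Monte Carlo sampling of a limit state and converting it into a safety (reliability) index, can be illustrated as follows. The limit-state function and all distributions are illustrative assumptions, not the study's model.

```python
# Minimal Monte Carlo sketch: sample the random variables, evaluate a limit-state
# function g (g < 0 means failure), and convert the failure probability into a
# reliability (safety) index. Limit state and distributions are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

scour_depth = rng.lognormal(mean=1.0, sigma=0.3, size=n)    # m, assumed
foundation_depth = rng.normal(loc=6.0, scale=0.5, size=n)   # m, assumed
velocity = rng.gumbel(loc=2.5, scale=0.4, size=n)           # m/s, assumed

# Illustrative limit state: failure if velocity-amplified scour reaches a fixed
# fraction of the foundation depth.
g = 0.6 * foundation_depth - scour_depth * (1.0 + 0.1 * velocity)

pf = np.mean(g < 0.0)                 # Monte Carlo failure probability
beta = -norm.ppf(pf)                  # corresponding reliability index
print(f"Pf = {pf:.3e}, beta = {beta:.2f}")
```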
Cairnduff, Victoria; Dean, Moira; Koidis, Anastasios
2016-09-01
Food preparation and storage behaviors in the home deviating from the "best practice" food safety recommendations may result in foodborne illnesses. Currently, there are limited tools available to fully evaluate the consumer knowledge, perceptions, and behavior in the area of refrigerator safety. The current study aimed to develop a valid and reliable tool in the form of a questionnaire, the Consumer Refrigerator Safety Questionnaire (CRSQ), for assessing systematically all these aspects. Items relating to refrigerator safety knowledge (n = 17), perceptions (n = 46), and reported behavior (n = 30) were developed and pilot tested by an expert reference group and various consumer groups to assess face and content validity (n = 20), item difficulty and consistency (n = 55), and construct validity (n = 23). The findings showed that the CRSQ has acceptable face and content validity with acceptable levels of item difficulty. Item consistency was observed for 12 of 15 items in refrigerator safety knowledge. Further, all 5 of the subscales of consumer perceptions of refrigerator safety practices relating to risk of developing foodborne disease showed acceptable internal consistency (Cronbach's α value > 0.8). Construct validity of the CRSQ was shown to be very good (P = 0.022). The CRSQ exhibited acceptable test-retest reliability at 14 days with the majority of knowledge items (93.3%) and reported behavior items (96.4%) having correlation coefficients of greater than 0.70. Overall, the CRSQ was deemed valid and reliable in assessing refrigerator safety knowledge and behavior; therefore, it has the potential for future use in identifying groups of individuals at increased risk of deviating from recommended refrigerator safety practices, as well as the assessment of refrigerator safety knowledge and behavior for use before and after an intervention.
Reliability and concurrent validity of the computer workstation checklist.
Baker, Nancy A; Livengood, Heather; Jacobs, Karen
2013-01-01
Self-report checklists are used to assess computer workstation setup, typically by workers not trained in ergonomic assessment or checklist interpretation. Though many checklists exist, few have been evaluated for reliability and validity. This study examined the reliability and validity of the Computer Workstation Checklist (CWC) in identifying mismatches between workers' self-reported workstation problems and those found by an expert evaluation. The CWC was completed at baseline and at 1 month to establish reliability. Validity was determined with CWC baseline data compared to an onsite workstation evaluation conducted by an expert in computer workstation assessment. Reliability ranged from fair to near perfect (prevalence-adjusted bias-adjusted kappa, 0.38-0.93); items with the strongest agreement were related to the input device, monitor, computer table, and document holder. The CWC had greater specificity (11 of 16 items) than sensitivity (3 of 16 items). The positive predictive value was greater than the negative predictive value for all questions. The CWC has strong reliability. Sensitivity and specificity suggested workers often indicated no problems with workstation setup when problems existed. The evidence suggests that while the CWC may not be valid when used alone, it may be a suitable adjunct to an ergonomic assessment completed by professionals.
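The agreement and validity statistics quoted above come from standard 2x2-table formulas. A minimal sketch with synthetic counts (not the study's data) is given below.

```python
# Sketch of the agreement and validity statistics used above, computed from a
# synthetic 2x2 table (checklist vs. expert evaluation); counts are illustrative.
def two_by_two_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    observed_agreement = (tp + tn) / n
    pabak = 2.0 * observed_agreement - 1.0       # prevalence/bias-adjusted kappa
    return {
        "PABAK": round(pabak, 2),
        "sensitivity": round(tp / (tp + fn), 2),
        "specificity": round(tn / (tn + fp), 2),
        "PPV": round(tp / (tp + fp), 2),
        "NPV": round(tn / (tn + fn), 2),
    }

# tp: both flag a problem; tn: both report no problem (illustrative counts)
print(two_by_two_stats(tp=8, fp=3, fn=12, tn=77))
```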
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
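CARE's actual equation repository is not reproduced in the abstract, but the kind of closed-form redundancy equations such a program stores and combines can be illustrated with standard textbook forms, for example triple modular redundancy and one-spare standby sparing under a constant failure rate; the rate and mission time below are assumed.

```python
# Illustrative examples of the kind of closed-form redundancy equations such a
# program evaluates (standard textbook forms, not CARE's actual model library).
import math

def simplex(lam, t):
    return math.exp(-lam * t)

def tmr(lam, t):
    """Triple modular redundancy with a perfect voter."""
    r = simplex(lam, t)
    return 3.0 * r**2 - 2.0 * r**3

def standby_one_spare(lam, t):
    """One powered unit plus one cold spare, perfect detection and switching."""
    return math.exp(-lam * t) * (1.0 + lam * t)

lam, t = 1e-4, 100.0   # assumed failure rate (per hour) and mission time (hours)
for name, fn in (("simplex", simplex), ("TMR", tmr), ("standby", standby_one_spare)):
    print(f"{name:8s} R = {fn(lam, t):.6f}")
```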
Examples of Nonconservatism in the CARE 3 Program
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1988-01-01
This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed; and, the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models which incorporate transient fault occurrences was not readily apparent. However, the source of much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.
DOT National Transportation Integrated Search
1995-01-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety-critical functions in high-speed rail or magnetic levitation ...
The safety and reliability of the S and A mechanism designed for the NASA/LSPE program
NASA Technical Reports Server (NTRS)
Montesi, L. J.
1973-01-01
Under contract to the Manned Spacecraft Center, NASA/Houston, NOL developed a number of explosive charges for use in studying the surface of the moon during Apollo 17 activities. The charges were part of the Lunar Seismic Profiling Experiment (LSPE). When the Safety and Arming Device used in the previous ALSEP experiments was found unsuitable for use with the new explosive packages, NOL also designed the Safety and Arming Mechanism, and the safety and reliability tests conducted are described. The results of the test program indicate that the detonation transfer probability between the armed explosive components exceeds 0.9999, and is less than 0.0001 when the explosive components are in the safe position.
Evolution of Safety Analysis to Support New Exploration Missions
NASA Technical Reports Server (NTRS)
Thrasher, Chard W.
2008-01-01
NASA is currently developing the Ares I launch vehicle as a key component of the Constellation program which will provide safe and reliable transportation to the International Space Station, back to the moon, and later to Mars. The risks and costs of the Ares I must be significantly lowered, as compared to other manned launch vehicles, to enable the continuation of space exploration. It is essential that safety be significantly improved, and cost-effectively incorporated into the design process. This paper justifies early and effective safety analysis of complex space systems. Interactions and dependences between design, logistics, modeling, reliability, and safety engineers will be discussed to illustrate methods to lower cost, reduce design cycles and lessen the likelihood of catastrophic events.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP which efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a specialty.
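The variance-reduction techniques of MC-HARP are not described in the abstract and are not reproduced here; the sketch below only shows crude Monte Carlo sampling of Weibull (non-constant failure rate) lifetimes for an assumed 2-out-of-3 system, to indicate the kind of model such a code solves.

```python
# Crude Monte Carlo sketch (no variance reduction): unreliability of a
# 2-out-of-3 system whose components have Weibull lifetimes. Parameters assumed.
import numpy as np

rng = np.random.default_rng(7)
shape, scale = 1.5, 5_000.0          # Weibull shape and scale in hours (assumed)
mission_time = 1_000.0               # hours (assumed)
trials = 500_000

lifetimes = scale * rng.weibull(shape, size=(trials, 3))
failures_by_t = (lifetimes < mission_time).sum(axis=1)
unreliability = np.mean(failures_by_t >= 2)    # system fails when 2 or more units fail
print(f"estimated unreliability = {unreliability:.2e}")
```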
Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, D.; Brunett, A.; Passerini, S.
Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.
10 CFR 712.35 - Director, Office of Health and Safety.
Code of Federal Regulations, 2010 CFR
2010-01-01
... DEPARTMENT OF ENERGY, HUMAN RELIABILITY PROGRAM, Medical Standards, § 712.35 Director, Office of Health and Safety. The Director, Office of Health and Safety or his or her designee must: (a...
77 FR 57949 - Federal Acquisition Regulation; Positive Law Codification of Title 41
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-18
..., environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the... Work Hours and 40 U.S.C. chapter 37 Contract Work Hours Safety Standards Act. and Safety Standards... at improving performance, reliability, quality, safety, and life-cycle costs 41 U.S.C. 1711). For use...
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
Sun, Yi; Arning, Martin; Bochmann, Frank; Börger, Jutta; Heitmann, Thomas
2018-06-01
The Occupational Safety and Health Monitoring and Assessment Tool (OSH-MAT) is a practical instrument that is currently used in the German woodworking and metalworking industries to monitor safety conditions at workplaces. The 12-item scoring system has three subscales rating technical, organizational, and personnel-related conditions in a company. Each item has a rating value ranging from 1 to 9, with higher values indicating a higher standard of safety conditions. The reliability of this instrument was evaluated in a cross-sectional survey among 128 companies and its validity among 30,514 companies. The inter-rater reliability of the instrument was examined independently and simultaneously by two well-trained safety engineers. Agreement between the double ratings was quantified by the intraclass correlation coefficient and absolute agreement of the rating values. The content validity of the OSH-MAT was evaluated by quantifying the association between OSH-MAT values and 5-year average injury rates by Poisson regression analysis adjusted for the size of the companies and industrial sectors. The construct validity of OSH-MAT was examined by principal component factor analysis. Our analysis indicated good to very good inter-rater reliability (intraclass correlation coefficient = 0.64-0.74) of OSH-MAT values with an absolute agreement of between 72% and 81%. Factor analysis identified three component subscales that matched exactly the theoretical structure of this instrument. The Poisson regression analysis demonstrated a statistically significant exposure-response relationship between OSH-MAT values and the 5-year average injury rates. These analyses indicate that OSH-MAT is a valid and reliable instrument that can be used effectively to monitor safety conditions at workplaces.
Mikkelsen, Kim Lyngby; Thommesen, Jacob; Andersen, Henning Boje
2013-01-01
Objectives Validation of a Danish patient safety incident classification adapted from the World Health Organization's International Classification for Patient Safety (ICPS-WHO). Design Thirty-three hospital safety management experts classified 58 safety incident cases selected to represent all types and subtypes of the Danish adaptation of the ICPS (ICPS-DK). Outcome Measures Two measures of inter-rater agreement: kappa and intra-class correlation (ICC). Results The average number of incident types used per case per rater was 2.5. The mean ICC was 0.521 (range: 0.199–0.809) and the mean kappa was 0.513 (range: 0.193–0.804). Kappa and ICC showed high correlation (r = 0.99). An inverse correlation was found between the prevalence of type and inter-rater reliability. Results are discussed according to four factors known to determine inter-rater agreement: skill and motivation of raters; clarity of case descriptions; clarity of the operational definitions of the types and the instructions guiding the coding process; adequacy of the underlying classification scheme. Conclusions The incident types of the ICPS-DK are adequate, exhaustive and well suited for classifying and structuring incident reports. With a mean kappa a little above 0.5, the inter-rater agreement of the classification system is considered ‘fair’ to ‘good’. The wide variation in the inter-rater reliability and the low reliability and poor discrimination among the highly prevalent incident types suggest that for these types, precisely defined incident sub-types may be preferred. This evaluation of the reliability and usability of WHO's ICPS should be useful for healthcare administrations that consider or are in the process of adapting the ICPS. PMID:23287641
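The kappa statistic reported above measures chance-corrected agreement between raters. The study used 33 raters and averaged pairwise results; the minimal two-rater sketch below, with synthetic labels, only shows how the metric itself is computed.

```python
# Minimal two-rater Cohen's kappa on synthetic incident-type labels.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in categories)
    return (observed - expected) / (1.0 - expected)

a = ["medication", "fall", "fall", "device", "medication", "fall"]
b = ["medication", "fall", "device", "device", "fall", "fall"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```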
76 FR 40943 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
..., Revision 3, ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants.'' FOR FURTHER..., ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants,'' was issued with a temporary... Fuel Reprocessing Plants,'' to 10 CFR part 50 with regard to the use of computers in safety systems of...
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
Assessment of physical server reliability in multi cloud computing system
NASA Astrophysics Data System (ADS)
Kalyani, B. J. D.; Rao, Kolasani Ramchand H.
2018-04-01
Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates room for competitive pricing that reduces the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and the results are combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer, present the required algorithms, and explore the steps in the assessment of server reliability.
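As an illustration of how per-layer results might be combined (a hedged sketch under the usual independence assumption, not the paper's algorithm), redundant physical servers can be treated as a parallel block and the three layers as a series chain; the reliability figures below are hypothetical.

```python
# Illustrative sketch: combining layer reliabilities for a multi-cloud application.
# Assumes independent failures; the layers act in series, while redundant physical
# servers within the server layer act in parallel.

def parallel(reliabilities):
    """Reliability of redundant components: fails only if all fail."""
    prob_all_fail = 1.0
    for r in reliabilities:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

def series(reliabilities):
    """Reliability of layers that must all work."""
    total = 1.0
    for r in reliabilities:
        total *= r
    return total

# Hypothetical per-server reliabilities spread across two cloud providers.
server_layer = parallel([0.990, 0.985, 0.980])
virtualization_layer = 0.995
application_layer = 0.990

print(f"server layer        : {server_layer:.6f}")
print(f"multi-cloud overall : {series([application_layer, virtualization_layer, server_layer]):.6f}")
```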
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia
2015-04-26
Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
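In the spirit of a margin characterization of this kind, and only as a hedged illustration (the surrogate model, parameter ranges, and temperature limit below are invented; a real assessment would use a plant simulator such as RELAP5-3D), uncertain inputs can be sampled and propagated to estimate a safety margin distribution and an exceedance probability:

```python
# Illustrative sketch only: propagating input uncertainties through a toy surrogate
# model to characterize a safety margin and an exceedance probability.

import random

random.seed(1)

TEMP_LIMIT_K = 1100.0          # hypothetical structural limit
N_SAMPLES = 100_000

def peak_temperature(decay_power_mw, emissivity, ambient_k):
    """Toy surrogate: peak temperature rises with decay power and falls
    with better radiative heat removal (higher emissivity)."""
    return ambient_k + 600.0 * decay_power_mw / (emissivity * 20.0)

exceedances = 0
margins = []
for _ in range(N_SAMPLES):
    power = random.gauss(15.0, 1.5)        # decay power, MW
    emissivity = random.uniform(0.6, 0.9)  # cavity surface emissivity
    ambient = random.gauss(310.0, 5.0)     # ambient temperature, K
    peak = peak_temperature(power, emissivity, ambient)
    margins.append(TEMP_LIMIT_K - peak)
    exceedances += peak > TEMP_LIMIT_K

print(f"mean margin      : {sum(margins) / len(margins):.1f} K")
print(f"P(exceed limit)  : {exceedances / N_SAMPLES:.4f}")
```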
An Independent Evaluation of the FMEA/CIL Hazard Analysis Alternative Study
NASA Technical Reports Server (NTRS)
Ray, Paul S.
1996-01-01
The present instruments of safety and reliability risk control for a majority of the National Aeronautics and Space Administration (NASA) programs/projects consist of Failure Mode and Effects Analysis (FMEA), Hazard Analysis (HA), Critical Items List (CIL), and Hazard Report (HR). This extensive analytical approach was introduced in the early 1970's and was implemented for the Space Shuttle Program by NHB 5300.4 (1D-2). Since the Challenger accident in 1986, the process has been expanded considerably and has resulted in the introduction of similar and/or duplicated activities in the safety/reliability risk analysis. A study initiated in 1995 to search for an alternative to the current FMEA/CIL Hazard Analysis methodology generated a proposed method on April 30, 1996. The objective of this Summer Faculty Study was to participate in and conduct an independent evaluation of the proposed alternative to simplify the present safety and reliability risk control procedure.
Tamuz, Michal; Harrison, Michael I
2006-01-01
Objective To identify the distinctive contributions of high-reliability theory (HRT) and normal accident theory (NAT) as frameworks for examining five patient safety practices. Data Sources/Study Setting We reviewed and drew examples from studies of organization theory and health services research. Study Design After highlighting key differences between HRT and NAT, we applied the frames to five popular safety practices: double-checking medications, crew resource management (CRM), computerized physician order entry (CPOE), incident reporting, and root cause analysis (RCA). Principal Findings HRT highlights how double checking, which is designed to prevent errors, can undermine mindfulness of risk. NAT emphasizes that social redundancy can diffuse and reduce responsibility for locating mistakes. CRM promotes high reliability organizations by fostering deference to expertise, rather than rank. However, HRT also suggests that effective CRM depends on fundamental changes in organizational culture. NAT directs attention to an underinvestigated feature of CPOE: it tightens the coupling of the medication ordering process, and tight coupling increases the chances of a rapid and hard-to-contain spread of infrequent, but harmful errors. Conclusions Each frame can make a valuable contribution to improving patient safety. By applying the HRT and NAT frames, health care researchers and administrators can identify health care settings in which new and existing patient safety interventions are likely to be effective. Furthermore, they can learn how to improve patient safety, not only from analyzing mishaps, but also by studying the organizational consequences of implementing safety measures. PMID:16898984
Development of a multilevel health and safety climate survey tool within a mining setting.
Parker, Anthony W; Tones, Megan J; Ritchie, Gabrielle E
2017-09-01
This study aimed to design, implement and evaluate the reliability and validity of a multifactorial and multilevel health and safety climate survey (HSCS) tool with utility in the Australian mining setting. An 84-item questionnaire was developed and pilot tested on a sample of 302 Australian miners across two open cut sites. A 67-item, 10 factor solution was obtained via exploratory factor analysis (EFA) representing prioritization and attitudes to health and safety across multiple domains and organizational levels. Each factor demonstrated a high level of internal reliability, and a series of ANOVAs determined a high level of consistency in responses across the workforce, and generally irrespective of age, experience or job category. Participants tended to hold favorable views of occupational health and safety (OH&S) climate at the management, supervisor, workgroup and individual level. The survey tool demonstrated reliability and validity for use within an open cut Australian mining setting and supports a multilevel, industry specific approach to OH&S climate. Findings suggested a need for mining companies to maintain high OH&S standards to minimize risks to employee health and safety. Future research is required to determine the ability of this measure to predict OH&S outcomes and its utility within other mine settings. As this tool integrates health and safety, it may have benefits for assessment, monitoring and evaluation in the industry, and improving the understanding of how health and safety climate interact at multiple levels to influence OH&S outcomes. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
Newham, Rosemary; Bennie, Marion; Maxwell, David; Watson, Anne; de Wet, Carl; Bowie, Paul
2014-12-01
A positive and strong safety culture underpins effective learning from patient safety incidents in health care, including the community pharmacy (CP) setting. To build this culture, perceptions of safety climate must be measured with context-specific and reliable instruments. No pre-existing instruments were specifically designed or suitable for CP within Scotland. We therefore aimed to develop a psychometrically sound instrument to measure perceptions of safety climate within Scottish CPs. The first stage, development of a preliminary instrument, comprised three steps: (i) a literature review; (ii) focus group feedback; and (iii) content validation. The second stage, psychometric testing, consisted of three further steps: (iv) a pilot survey; (v) a survey of all CP staff within a single health board in NHS Scotland; and (vi) application of statistical methods, including principal components analysis and calculation of Cronbach's reliability coefficients, to derive the final instrument. The preliminary questionnaire was developed through a process of literature review and feedback. This questionnaire was completed by staff in 50 CPs from the 131 (38%) sampled. 250 completed questionnaires were suitable for analysis. Psychometric evaluation resulted in a 30-item instrument with five positively correlated safety climate factors: leadership, teamwork, safety systems, communication and working conditions. Reliability coefficients were satisfactory for the safety climate factors (α > 0.7) and overall (α = 0.93). The robust nature of the technical design and testing process has resulted in the development of an instrument with sufficient psychometric properties, which can be implemented in the community pharmacy setting in NHS Scotland. © 2014 John Wiley & Sons, Ltd.
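The internal-consistency coefficients quoted above follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). Below is a minimal sketch of that calculation; the Likert responses are invented and do not come from the survey.

```python
# Minimal sketch of Cronbach's alpha for one safety climate factor.
# Rows are respondents, columns are the items loading on that factor;
# the responses below are invented 5-point Likert scores.

from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: list of rows, one row of item responses per respondent."""
    k = len(item_scores[0])                      # number of items in the scale
    items = list(zip(*item_scores))              # transpose -> per-item columns
    item_vars = [pvariance(col) for col in items]
    total_scores = [sum(row) for row in item_scores]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(total_scores))

responses = [
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```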
Safety Climate Survey: reliability of results from a multicenter ICU survey.
Kho, M E; Carbone, J M; Lucas, J; Cook, D J
2005-08-01
It is important to understand the clinical properties of instruments used to measure patient safety before they are used in the setting of an intensive care unit (ICU). The Safety Climate Survey (SCSu), an instrument endorsed by the Institute for Healthcare Improvement, the Safety Culture Scale (SCSc), and the Safety Climate Mean (SCM), a subset of seven items from the SCSu, were administered in four Canadian university-affiliated ICUs. All staff including nurses, allied healthcare professionals, non-clinical staff, intensivists, and managers were invited to participate in the cross-sectional survey. The response rate was 74% (313/426). The internal consistency of the SCSu and SCSc was 0.86 and 0.80, respectively, while the SCM performed poorly at 0.51. Because of this poor internal consistency, no further analysis of the SCM was performed. Test-retest reliability of the SCSu and SCSc was 0.92. Out of a maximum score of 5, the mean (SD) scores of the SCSu and SCSc were 3.4 (0.6) and 3.4 (0.7), respectively. No differences were noted between the three medical-surgical ICUs and the one cardiovascular ICU. Managers perceived a significantly more positive safety climate than other staff, as measured by the SCSu and SCSc. These results need to be interpreted cautiously because of the small number of management participants. Of the three instruments, the SCSu and SCSc appear to be measuring one construct and are sufficiently reliable. Future research should examine the properties of patient safety instruments in other ICUs, including responsiveness to change, to ensure that they are valid outcome measures for patient safety initiatives.
Singer, Sara; Meterko, Mark; Baker, Laurence; Gaba, David; Falwell, Alyson; Rosen, Amy
2007-01-01
Objective To describe the development of an instrument for assessing workforce perceptions of hospital safety culture and to assess its reliability and validity. Data Sources/Study Setting Primary data collected between March 2004 and May 2005. Personnel from 105 U.S. hospitals completed a 38-item paper and pencil survey. We received 21,496 completed questionnaires, representing a 51 percent response rate. Study Design Based on a review of existing safety climate surveys, we developed a list of key topics pertinent to maintaining a culture of safety in high-reliability organizations. We developed a draft questionnaire to address these topics and pilot tested it in four preliminary studies of hospital personnel. We modified the questionnaire based on experience and respondent feedback, and distributed the revised version to 42,249 hospital workers. Data Collection We randomly divided respondents into derivation and validation samples. We applied exploratory factor analysis to responses in the derivation sample. We used those results to create scales in the validation sample, which we subjected to multitrait analysis (MTA). Principal Findings We identified nine constructs: three organizational factors, two unit factors, three individual factors, and one additional factor. Constructs demonstrated substantial convergent and discriminant validity in the MTA. Cronbach's α coefficients ranged from 0.50 to 0.89. Conclusions It is possible to measure key salient features of hospital safety climate using a valid and reliable 38-item survey and appropriate hospital sample sizes. This instrument may be used in further studies to better understand the impact of safety climate on patient safety outcomes. PMID:17850530
Summary of NASA Aerospace Flight Battery Systems Program activities
NASA Technical Reports Server (NTRS)
Manzo, Michelle; Odonnell, Patricia
1994-01-01
A summary of NASA Aerospace Flight Battery Systems Program Activities is presented. The NASA Aerospace Flight Battery Systems Program represents a unified NASA wide effort with the overall objective of providing NASA with the policy and posture which will increase the safety, performance, and reliability of space power systems. The specific objectives of the program are to: enhance cell/battery safety and reliability; maintain current battery technology; increase fundamental understanding of primary and secondary cells; provide a means to bring forth advanced technology for flight use; assist flight programs in minimizing battery technology related flight risks; and ensure that safe, reliable batteries are available for NASA's future missions.
The Role of Probabilistic Design Analysis Methods in Safety and Affordability
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
2016-01-01
For the last several years, NASA and its contractors have been working together to build space launch systems to commercialize space. Developing affordable and safe commercial launch systems is very important and requires a paradigm shift. This paradigm shift enforces the need for an integrated systems engineering environment where cost, safety, reliability, and performance are considered together to optimize the launch system design. In such an environment, rule-based and deterministic engineering design practices alone may not be sufficient to optimize margins and fault tolerance to reduce cost. As a result, introduction of Probabilistic Design Analysis (PDA) methods to support the current deterministic engineering design practices becomes a necessity to reduce cost without compromising reliability and safety. This paper discusses the importance of PDA methods in NASA's new commercial environment, their applications, and the key role they can play in designing reliable, safe, and affordable launch systems. More specifically, this paper discusses: (1) the involvement of NASA in PDA; (2) why PDA is needed; (3) a PDA model structure; (4) a PDA example application; and (5) the link between PDA and safety and affordability.
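To make the contrast with purely deterministic practice concrete, the following is a hedged, illustrative stress-strength sketch (not NASA's PDA toolset): a nominal safety factor is computed alongside a Monte Carlo estimate of failure probability once scatter in load and strength is modeled. All distributions are invented.

```python
# Illustrative sketch: stress-strength interference contrasting a deterministic
# safety-factor check with a probabilistic estimate of failure probability.

import random

random.seed(0)

N = 200_000
failures = 0
for _ in range(N):
    load = random.gauss(100.0, 15.0)       # applied load, e.g. kN
    strength = random.gauss(160.0, 20.0)   # component strength, e.g. kN
    failures += strength < load

deterministic_factor = 160.0 / 100.0       # nominal strength / nominal load
print(f"deterministic safety factor : {deterministic_factor:.2f}")
print(f"probabilistic P(failure)    : {failures / N:.5f}")
# A comfortable nominal factor can still hide a non-negligible failure
# probability once the scatter in load and strength is accounted for.
```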
14 CFR 171.29 - Installation requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., applicable electric and safety codes, and FCC licensing requirements. (b) The facility must have a reliable... be the ground-air communications required by paragraph (d)(1) of this section and reliable... paragraphs (d) (1) and (2) of this section may be reduced to reliable communications (at least a landline...
14 CFR 171.29 - Installation requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., applicable electric and safety codes, and FCC licensing requirements. (b) The facility must have a reliable... be the ground-air communications required by paragraph (d)(1) of this section and reliable... paragraphs (d) (1) and (2) of this section may be reduced to reliable communications (at least a landline...
Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K
2011-04-01
Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual measurements, while interobserver APAs ranged from 91% to 96% for QMA versus 57% to 63% for digitized manual measurements. The use of QMA software substantially improved the reliability of lumbar intervertebral measurements and the classification of instability based on flexion-extension radiographs.
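The classification rule quoted above is simple to apply once the motion measurements are available. The sketch below applies it for two hypothetical observers and reports plain percent agreement; the measurements are invented, and the paper's adjusted percent agreement statistic is not reproduced.

```python
# Sketch of the instability rule described above (instability if AP translation
# > 4 mm or rotation > 10 degrees) applied to two observers' measurements of the
# same levels, with plain percent agreement between their classifications.

def unstable(rotation_deg, ap_translation_mm):
    return rotation_deg > 10.0 or ap_translation_mm > 4.0

# (rotation in degrees, AP translation in mm) per lumbar level, two observers.
observer_1 = [(12.3, 3.1), (8.0, 4.6), (6.5, 2.0), (11.2, 1.9), (4.0, 0.8)]
observer_2 = [(11.8, 2.9), (7.4, 3.8), (6.9, 2.2), (9.6, 2.1), (4.3, 1.0)]

labels_1 = [unstable(r, t) for r, t in observer_1]
labels_2 = [unstable(r, t) for r, t in observer_2]
agreement = sum(a == b for a, b in zip(labels_1, labels_2)) / len(labels_1)
print(f"percent agreement on 'instability' = {100 * agreement:.0f}%")
```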
The development and psychometric evaluation of a safety climate measure for primary care.
de Wet, C; Spence, W; Mash, R; Johnson, P; Bowie, P
2010-12-01
Building a safety culture is an important part of improving patient care. Measuring perceptions of safety climate among healthcare teams and organisations is a key element of this process. Existing measurement instruments are largely developed for secondary care settings in North America and many lack adequate psychometric testing. Our aim was to develop and test an instrument to measure perceptions of safety climate among primary care teams in the National Health Service for Scotland. Questionnaire development was facilitated through a steering group, literature review, semistructured interviews with primary care team members, a modified Delphi and completion of a content validity index by experts. A cross-sectional postal survey utilising the questionnaire was undertaken in a random sample of west of Scotland general practices to facilitate psychometric evaluation. Statistical analyses, including exploratory and confirmatory factor analysis and calculation of Cronbach and Raykov reliability coefficients, were conducted. Of the 667 primary care team members based in 49 general practices surveyed, 563 returned completed questionnaires (84.4%). Psychometric evaluation resulted in the development of a 30-item questionnaire with five safety climate factors: leadership, teamwork, communication, workload and safety systems. Retained items have strong factor loadings to only one factor. Reliability coefficients were satisfactory (α = 0.94 and ρ = 0.93). This study is the first stage in the development of an appropriately valid and reliable safety climate measure for primary care. Measuring safety climate perceptions has the potential to help primary care organisations and teams focus attention on safety-related issues and target improvement through educational interventions. Further research is required to explore acceptability and feasibility issues for primary care teams and the potential for organisational benchmarking.
Reliable results from stochastic simulation models
Donald L., Jr. Gochenour; Leonard R. Johnson
1973-01-01
Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point at which model results are reliable. If the results are...
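One common way to operationalize this idea is to add independent replications until the confidence-interval half-width on the output mean falls below a target. The sketch below does this for a toy queueing model; the model, target half-width, and replication cap are invented for illustration and are not from the report.

```python
# Minimal sketch: run a stochastic simulation in independent replications until
# the 95% confidence interval half-width of the output mean is small enough,
# which is one way to decide that the results are reliable.

import random, math

def one_replication(n_customers=2_000, arrival_rate=0.9, service_rate=1.0):
    """Average waiting time from one replication (Lindley's recursion)."""
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        interarrival = random.expovariate(arrival_rate)
        service = random.expovariate(service_rate)
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers

random.seed(42)
target_half_width = 0.5
z = 1.96                                            # normal approximation, 95% CI
results = [one_replication() for _ in range(10)]    # a few pilot replications

while True:
    n = len(results)
    mean = sum(results) / n
    var = sum((x - mean) ** 2 for x in results) / (n - 1)
    half_width = z * math.sqrt(var / n)
    if half_width <= target_half_width or n >= 500:
        break
    results.append(one_replication())

print(f"replications = {n}, mean wait = {mean:.2f} +/- {half_width:.2f}")
```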
NASA Astrophysics Data System (ADS)
Vasiliev, Bogdan U.
2017-01-01
The stable development of the European countries depends on a reliable and efficient operation of the gas transportation system (GTS). Along with high reliability, it is necessary to ensure the industrial and environmental safety of the GTS. In this article, the major factors influencing the industrial and ecological safety of the GTS are analyzed, sources of reduced GTS safety are identified, and measures for ensuring safety are proposed. The article shows that the use of gas-turbine engines in gas-compressor units (GCU) results in the following: emissions of harmful substances into the atmosphere; pollution by toxic waste; harmful noise and vibration; thermal impact on the environment; and decreased energy efficiency. It is shown that a radical solution to the industrial and ecological safety problems of the gas-transmission system is to use gas-compressor units driven by electric motors. Their advantages, promising technologies for these units, and experience with their use in Europe and the USA are presented in this article.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
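For a small fault tree with independent basic events, the top-event probability can be evaluated exactly by gate algebra, which is useful as a baseline against sampling-based methods such as the one summarized above. The sketch below is such a baseline; the tree structure and probabilities are invented, and the adaptive importance sampling of the PFTA method is not reproduced.

```python
# Illustrative sketch: top-event probability of a small fault tree whose basic
# events are independent, using exact AND/OR gate algebra.

def gate_or(probs):
    """P(at least one input event occurs), inputs independent."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def gate_and(probs):
    """P(all input events occur), inputs independent."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Basic (bottom) event probabilities, all hypothetical.
p_sensor_fail   = 1.0e-3
p_actuator_fail = 5.0e-4
p_channel_a     = 2.0e-3
p_channel_b     = 2.0e-3

# Top event: loss of both redundant channels (AND), or failure of the shared
# sensor or actuator path (OR).
p_channels_lost = gate_and([p_channel_a, p_channel_b])
p_top = gate_or([p_channels_lost, p_sensor_fail, p_actuator_fail])

print(f"P(both channels lost) = {p_channels_lost:.2e}")
print(f"P(top event)          = {p_top:.2e}")
```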
Indicators of School Crime and Safety: 2012. NCES 2013-036/NCJ 241446
ERIC Educational Resources Information Center
Robers, Simone; Kemp, Jana; Truman, Jennifer
2013-01-01
Establishing reliable indicators of the current state of school crime and safety across the nation and regularly updating and monitoring these indicators is important in ensuring the safety of our nation's students. This is the aim of "Indicators of School Crime and Safety." This report is the fifteenth in a series of annual publications…
General Aviation Aircraft Reliability Study
NASA Technical Reports Server (NTRS)
Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)
2001-01-01
This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.
Computing Reliabilities Of Ceramic Components Subject To Fracture
NASA Technical Reports Server (NTRS)
Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.
1992-01-01
CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
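The volume-flaw fast-fracture model underlying codes of this kind is commonly written as P_f = 1 - exp(-sum_i V_i (sigma_i/sigma_0)^m), summing a Weibull risk-of-rupture contribution over finite elements. The sketch below evaluates that formula for invented element data; it is illustrative only and does not reproduce the CARES input format or its full statistical treatment.

```python
# Sketch of a two-parameter Weibull volume-flaw fast-fracture calculation:
# each finite element contributes a survival term exp(-V_i*(sigma_i/sigma_0)^m),
# and component reliability is the product over elements.

import math

def fast_fracture_reliability(elements, sigma_0, m):
    """elements: list of (tensile_stress_MPa, volume_mm3) per finite element."""
    risk_of_rupture = 0.0
    for stress, volume in elements:
        if stress > 0.0:                      # only tensile stresses contribute
            risk_of_rupture += volume * (stress / sigma_0) ** m
    return math.exp(-risk_of_rupture)

# (stress MPa, volume mm^3) from a hypothetical structural-analysis run.
element_results = [(180.0, 2.0), (220.0, 1.5), (90.0, 3.0), (250.0, 0.8)]
sigma_0 = 400.0        # Weibull scale parameter (hypothetical)
m = 10.0               # Weibull modulus (hypothetical)

reliability = fast_fracture_reliability(element_results, sigma_0, m)
print(f"component reliability = {reliability:.4f}")
print(f"failure probability   = {1 - reliability:.4f}")
```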
Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars
2015-10-01
A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion (CSD) of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Mertens, H W; Milburn, N J; Collins, W E
2000-12-01
Two practical color vision tests were developed and validated for use in screening Air Traffic Control Specialist (ATCS) applicants for work at en route center or terminal facilities. The development of the tests involved careful reproduction/simulation of color-coded materials from the most demanding, safety-critical color task performed in each type of facility. The tests were evaluated using 106 subjects with normal color vision and 85 with color vision deficiency. The en route center test, named the Flight Progress Strips Test (FPST), required the identification of critical red/black coding in computer printing and handwriting on flight progress strips. The terminal option test, named the Aviation Lights Test (ALT), simulated red/green/white aircraft lights that must be identified in night ATC tower operations. Color-coding is a non-redundant source of safety-critical information in both tasks. The FPST was validated by direct comparison of responses to strip reproductions with responses to the original flight progress strips and a set of strips selected independently. Validity was high; Kappa = 0.91 with original strips as the validation criterion and 0.86 with different strips. The light point stimuli of the ALT were validated physically with a spectroradiometer. The reliabilities of the FPST and ALT were estimated with Cronbach's alpha as 0.93 and 0.98, respectively. The high job-relevance, validity, and reliability of these tests increase the effectiveness and fairness of ATCS color vision testing.
Thermal and fluid simulation of the environment under the dashboard, compared with measurement data
NASA Astrophysics Data System (ADS)
Popescu, C. S.; Sirbu, G. M.; Nita, I. C.
2017-10-01
The development of vehicles during the last decade is related to the evolution of electronic systems added in order to increase safety and the number of services available on board, such as advanced driver-assistance systems (ADAS). Cars already have a complex computer network, with electronic control units (ECUs) connected to each other and receiving information from many sensors. The ECUs transfer a significant amount of heat to their environment, while proper operating conditions must be provided to ensure their reliability under high and low temperature, vibration and humidity. In a car cabin, electronic devices are usually placed in the compartment under the dashboard, an enclosed space designed for functional purposes. In the early stages of vehicle design it has become necessary to analyse the environment under the dashboard by the use of Computational Fluid Dynamics (CFD) simulations and measurements. This paper presents the cooling of heat sinks by natural convection: a thermal and fluid simulation of the environment under the dashboard, compared with test data.
Computational Modeling of Hydrodynamics and Scour around Underwater Munitions
NASA Astrophysics Data System (ADS)
Liu, X.; Xu, Y.
2017-12-01
Munitions deposited in water bodies are a serious threat to human health, safety, and the environment. It is thus imperative to predict the motion and the resting status of underwater munitions. A multitude of physical processes are involved, which include turbulent flows, sediment transport, granular material mechanics, 6 degree-of-freedom motion of the munition, and potential liquefaction. A clear understanding of this unique physical setting is currently lacking. Consequently, it is extremely hard to make reliable predictions. In this work, we present the computational modeling of two important processes, i.e., hydrodynamics and scour, around munition objects. Other physical processes are also considered in our comprehensive model; however, they are not shown in this talk. To properly model the dynamics of the deforming bed and the motion of the object, an immersed boundary method is implemented in the open source CFD package OpenFOAM. Fixed bed and scour cases are simulated and compared with laboratory experiments. The future work of this project will implement the coupling between all the physical processes.
Computer-based mechanical design of overhead lines
NASA Astrophysics Data System (ADS)
Rusinaru, D.; Bratu, C.; Dinu, R. C.; Manescu, L. G.
2016-02-01
Besides performance, the safety level required by current standards is a compulsory condition for distribution grid operation. Some of the measures leading to improved overhead line reliability call for modernization of the installations. The constraints imposed on new line components cover technical aspects such as thermal stress and voltage drop, and also aim at economic efficiency. The mechanical sizing of overhead lines is, after all, an optimization problem. More precisely, the task in designing the overhead line profile is to size poles, cross-arms and stays and to locate poles along a line route so that the total cost of the line's structure is minimized and the technical and safety constraints are fulfilled. The authors present in this paper an application for the Computer-Based Mechanical Design of Overhead Lines and the features of the corresponding Visual Basic program, adjusted to distribution lines. The constraints of the optimization problem are adjusted to the existing weather and loading conditions of Romania. The outputs of the software application for mechanical design of overhead lines are: the list of components chosen for the line (poles, cross-arms, stays); the list of conductor tensions and forces for each pole, cross-arm and stay under different weather conditions; and the line profile drawings. The main features of the mechanical overhead line design software are interactivity, a local optimization function and a high-level user interface.
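As a toy illustration of the kind of constrained cost minimization described above (not the paper's Visual Basic program), the sketch below brute-forces candidate pole classes and span lengths, keeps only combinations whose conductor sag, estimated with the parabolic approximation sag = w*L^2/(8*T), preserves the required ground clearance, and reports the cheapest feasible design. All catalogue data and limits are invented.

```python
# Toy sketch of constrained selection for an overhead line profile:
# brute-force search over candidate pole classes and span lengths, keeping the
# cheapest combination whose conductor sag still leaves the required clearance.

conductor_weight = 5.0          # N per metre, including an assumed ice load
conductor_tension = 9_000.0     # N, maximum working tension
required_clearance = 6.0        # m above ground
line_length = 1_000.0           # m of route to cover

# (name, usable height above ground in m, unit cost) - hypothetical catalogue.
pole_catalogue = [("P-9m", 7.5, 400.0), ("P-10m", 8.5, 520.0), ("P-12m", 10.5, 700.0)]
candidate_spans = [40.0, 60.0, 80.0, 100.0]   # m

best = None
for name, height, cost in pole_catalogue:
    for span in candidate_spans:
        sag = conductor_weight * span ** 2 / (8.0 * conductor_tension)
        if height - sag < required_clearance:
            continue                            # clearance constraint violated
        n_poles = int(line_length // span) + 1
        total_cost = n_poles * cost
        if best is None or total_cost < best[0]:
            best = (total_cost, name, span, sag)

cost, name, span, sag = best
print(f"cheapest feasible design: {name}, span {span:.0f} m, sag {sag:.2f} m, cost {cost:.0f}")
```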
Implementing a Microcontroller Watchdog with a Field-Programmable Gate Array (FPGA)
NASA Technical Reports Server (NTRS)
Straka, Bartholomew
2013-01-01
Reliability is crucial to safety. Redundancy of important system components greatly enhances reliability and hence safety. Field-Programmable Gate Arrays (FPGAs) are useful for monitoring systems and handling the logic necessary to keep them running with minimal interruption when individual components fail. A complete microcontroller watchdog with logic for failure handling can be implemented in a hardware description language (HDL). HDL-based designs are vendor-independent and can be used on many FPGAs with low overhead.
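As a behavioral illustration only (written in Python rather than an HDL, and not the report's design), the sketch below models the watchdog logic such an FPGA design might implement: a counter cleared by the supervised microcontroller's periodic kick that, on timeout, switches control to a redundant unit.

```python
# Behavioral sketch (Python, not HDL) of watchdog logic with failure handling.

class Watchdog:
    def __init__(self, timeout_cycles, n_units=2):
        self.timeout_cycles = timeout_cycles
        self.n_units = n_units            # number of redundant microcontrollers
        self.counter = 0
        self.active_unit = 0              # index of the unit currently in control

    def kick(self):
        """Called by the supervised microcontroller to prove it is alive."""
        self.counter = 0

    def clock_tick(self):
        """Called once per watchdog clock cycle."""
        self.counter += 1
        if self.counter >= self.timeout_cycles and self.active_unit < self.n_units - 1:
            self.active_unit += 1         # fail over to the spare unit
            self.counter = 0
            print(f"timeout: switching control to unit {self.active_unit}")

# Simulate a primary microcontroller that stops kicking after cycle 25.
wd = Watchdog(timeout_cycles=10, n_units=2)
for cycle in range(60):
    if cycle < 25 and cycle % 5 == 0:
        wd.kick()
    wd.clock_tick()
print(f"active unit after 60 cycles: {wd.active_unit}")
```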
Stern, Rachel J; Fernandez, Alicia; Jacobs, Elizabeth A; Neilands, Torsten B; Weech-Maldonado, Robert; Quan, Judy; Carle, Adam; Seligman, Hilary K
2012-09-01
Providing culturally competent care shows promise as a mechanism to reduce health care inequalities. Until the recent development of the Consumer Assessment of Healthcare Providers and Systems Cultural Competency Item Set (CAHPS-CC), no measures capturing patient-level experiences with culturally competent care have been suitable for broad-scale administration. We performed confirmatory factor analysis and internal consistency reliability analysis of CAHPS-CC among patients with type 2 diabetes (n=600) receiving primary care in safety-net clinics. CAHPS-CC domains were also correlated with global physician ratings. A 7-factor model demonstrated satisfactory fit (χ²₂₃₁=484.34, P<0.0001) with significant factor loadings at P<0.05. Three domains showed excellent reliability: Doctor Communication-Positive Behaviors (α=0.82), Trust (α=0.77), and Doctor Communication-Health Promotion (α=0.72). Four domains showed inadequate reliability either among Spanish speakers or overall (overall reliabilities listed): Doctor Communication-Negative Behaviors (α=0.54), Equitable Treatment (α=0.69), Doctor Communication-Alternative Medicine (α=0.52), and Shared Decision-Making (α=0.51). CAHPS-CC domains were positively and significantly correlated with global physician rating. Select CAHPS-CC domains are suitable for broad-scale administration among safety-net patients. Those domains may be used to target quality-improvement efforts focused on providing culturally competent care in safety-net settings.
A new method for computing the reliability of consecutive k-out-of-n:F systems
NASA Astrophysics Data System (ADS)
Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak
2016-01-01
Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized by logical connections among components arranged in a line or a circle. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also write R-Project code based on our proposed method to compute the reliability of linear and circular systems with a large number of components.
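The paper's own implementation is in R and follows its proposed method; as a hedged point of comparison, the sketch below computes the reliability of a linear consecutive k-out-of-n:F system with a plain dynamic program over the length of the trailing run of failed components, checked against brute-force enumeration for small n.

```python
# Sketch: reliability of a *linear* consecutive k-out-of-n:F system with
# independent, identical components (the system fails iff at least k
# consecutive components fail). Dynamic program plus a brute-force check.

from itertools import product

def linear_consecutive_kofn_F(n, k, p):
    """p: reliability of each component (probability it works)."""
    q = 1.0 - p
    # state[j] = probability the system is still working and exactly j
    # consecutive failed components sit at the end of the line (j < k).
    state = [1.0] + [0.0] * (k - 1)
    for _ in range(n):
        new_state = [0.0] * k
        new_state[0] = p * sum(state)          # next component works: run resets
        for j in range(k - 1):
            new_state[j + 1] = q * state[j]    # next component fails: run grows
        state = new_state                      # a run reaching k is dropped (= failure)
    return sum(state)

def brute_force(n, k, p):
    q = 1.0 - p
    total = 0.0
    for outcome in product([True, False], repeat=n):   # True = component works
        run, failed = 0, False
        for works in outcome:
            run = 0 if works else run + 1
            failed |= run >= k
        if not failed:
            total += p ** sum(outcome) * q ** (n - sum(outcome))
    return total

print(linear_consecutive_kofn_F(10, 3, 0.9))   # about 0.9927
print(brute_force(10, 3, 0.9))                 # should match
```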
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
NASA Astrophysics Data System (ADS)
Valentine, Timothy E.; Leal, Luiz C.; Guber, Klaus H.
2002-12-01
The Department of Energy established the Nuclear Criticality Safety Program (NCSP) in response to Recommendation 97-2 of the Defense Nuclear Facilities Safety Board. The NCSP consists of seven elements, of which nuclear data measurements and evaluations are a key component. The intent of the nuclear data activities is to provide high-resolution nuclear data measurements that are evaluated, validated, and formatted for use by the nuclear criticality safety community, enabling improved and reliable calculations for nuclear criticality safety evaluations. High-resolution capture, fission, and transmission measurements are performed at the Oak Ridge Electron Linear Accelerator (ORELA) to address the needs of the criticality safety community and known deficiencies in nuclear data evaluations. The activities at ORELA include measurements on both light and heavy nuclei and have led to improvements in measurement techniques that greatly improve the measurement of small capture cross sections. The measurement activities at ORELA provide precise and reliable high-resolution nuclear data for the nuclear criticality safety community.
49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...
49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...
49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...
49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...
49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...
41 CFR 102-80.110 - What must an equivalent level of safety analysis indicate?
Code of Federal Regulations, 2014 CFR
2014-01-01
..., and reliability of all building systems impacting fire growth, occupant knowledge of the fire, and... Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire Prevention Equivalent Level of Safety...
41 CFR 102-80.110 - What must an equivalent level of safety analysis indicate?
Code of Federal Regulations, 2013 CFR
2013-07-01
..., and reliability of all building systems impacting fire growth, occupant knowledge of the fire, and... Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire Prevention Equivalent Level of Safety...
41 CFR 102-80.110 - What must an equivalent level of safety analysis indicate?
Code of Federal Regulations, 2011 CFR
2011-01-01
..., and reliability of all building systems impacting fire growth, occupant knowledge of the fire, and... Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire Prevention Equivalent Level of Safety...
41 CFR 102-80.110 - What must an equivalent level of safety analysis indicate?
Code of Federal Regulations, 2012 CFR
2012-01-01
..., and reliability of all building systems impacting fire growth, occupant knowledge of the fire, and... Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire Prevention Equivalent Level of Safety...
[Animal experimentation, computer simulation and surgical research].
Carpentier, Alain
2009-11-01
We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.
Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system
NASA Technical Reports Server (NTRS)
1974-01-01
A computer-based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
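As a hedged illustration of how a coverage factor enters such a model (a textbook duplex formula, not the report's equations), the sketch below evaluates a two-unit system in which the first failure is detected and the spare switched in successfully with probability c.

```python
# Sketch of a standard two-unit (dual) reliability model with coverage c:
# the system survives to time t if both units survive, or if the first failure
# is covered (probability c) and the remaining unit survives.

import math

def duplex_reliability(t, failure_rate, coverage):
    both_survive = math.exp(-2.0 * failure_rate * t)
    one_fails_covered = 2.0 * coverage * (math.exp(-failure_rate * t)
                                          - math.exp(-2.0 * failure_rate * t))
    return both_survive + one_fails_covered

lam = 1.0e-4          # failures per hour for each unit (hypothetical)
mission = 10_000.0    # hours

for c in (1.0, 0.99, 0.95, 0.90):
    print(f"coverage {c:.2f}: R = {duplex_reliability(mission, lam, c):.5f}")
```

Even a small departure of the coverage factor from 1.0 visibly erodes the benefit of the second unit, which is why coverage is modeled explicitly in such tools.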
Basis And Application Of The CARES/LIFE Computer Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Janosik, Lesley A.; Gyekenyesi, John P.; Powers, Lynn M.
1996-01-01
Report discusses physical and mathematical basis of Ceramics Analysis and Reliability Evaluation of Structures LIFE prediction (CARES/LIFE) computer program, described in "Program for Evaluation of Reliability of Ceramic Parts" (LEW-16018).
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
Stockpile Stewardship: How We Ensure the Nuclear Deterrent Without Testing
None
2018-01-16
In the 1990s, the U.S. nuclear weapons program shifted emphasis from developing new designs to dismantling thousands of existing weapons and maintaining a much smaller enduring stockpile. The United States ceased underground nuclear testing, and the Department of Energy created the Stockpile Stewardship Program to maintain the safety, security, and reliability of the U.S. nuclear deterrent without full-scale testing. This video gives a behind-the-scenes look at a set of unique capabilities at Lawrence Livermore that are indispensable to the Stockpile Stewardship Program: high performance computing, the Superblock category II nuclear facility, JASPER (a two-stage gas gun), the High Explosive Applications Facility (HEAF), the National Ignition Facility (NIF), and the Site 300 contained firing facility.
Space shuttle main engine controller
NASA Technical Reports Server (NTRS)
Mattox, R. M.; White, J. B.
1981-01-01
A technical description of the space shuttle main engine controller, which provides engine checkout prior to launch, engine control and monitoring during launch, and engine safety and monitoring in orbit, is presented. Each of the major controller subassemblies, the central processing unit, the computer interface electronics, the input electronics, the output electronics, and the power supplies are described and discussed in detail along with engine and orbiter interfaces and operational requirements. The controller represents a unique application of digital concepts, techniques, and technology in monitoring, managing, and controlling a high performance rocket engine propulsion system. The operational requirements placed on the controller, the extremely harsh operating environment to which it is exposed, and the reliability demanded, result in the most complex and rugged digital system ever designed, fabricated, and flown.
Combinatorial Reliability and Repair
1992-07-01
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... covered at this meeting include: Cybersecurity best practices, ISP network protection practices... to Jeffery Goldthorp, Associate Chief for Cybersecurity and Communications Reliability Public Safety...
Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques
2008-10-01
Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQI's), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas-under-the-curve (AUC's) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQI's led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically-computed RTS (using the SQI's) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically-computed scores' AUC's were the same as the medic-based scores' AUC's. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically-computed severity scores, and automatically-computed scores using SQI's are equivalent to medic-based scores.
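As an illustration of the kind of computation involved, the sketch below derives a triage RTS from coded Glasgow Coma Scale, systolic blood pressure, and respiratory-rate bands, gated by a simple reliability flag in the spirit of an SQI. The quality flags, the policy of deferring when a signal is unreliable, and the sample vitals are invented; the published triage-RTS coding bands are used, but this is not the study's software.

```python
# Sketch: Triage Revised Trauma Score (RTS) from monitored vital signs,
# using a simple quality gate so that only signals flagged reliable are used.

def code_gcs(gcs):
    if gcs >= 13: return 4
    if gcs >= 9:  return 3
    if gcs >= 6:  return 2
    if gcs >= 4:  return 1
    return 0

def code_sbp(sbp):
    if sbp > 89:  return 4
    if sbp >= 76: return 3
    if sbp >= 50: return 2
    if sbp >= 1:  return 1
    return 0

def code_rr(rr):
    if 10 <= rr <= 29: return 4
    if rr > 29: return 3
    if rr >= 6: return 2
    if rr >= 1: return 1
    return 0

def triage_rts(gcs, sbp, rr, reliable):
    """Return the 0-12 triage RTS, or None if any needed signal is unreliable."""
    if not all(reliable.values()):
        return None                      # defer to manual assessment
    return code_gcs(gcs) + code_sbp(sbp) + code_rr(rr)

sample = {"gcs": 14, "sbp": 82.0, "rr": 24.0}
quality = {"gcs": True, "sbp": True, "rr": True}   # e.g. output of SQI algorithms
print(f"triage RTS = {triage_rts(sample['gcs'], sample['sbp'], sample['rr'], quality)}")
```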
Program for computer aided reliability estimation
NASA Technical Reports Server (NTRS)
Mathur, F. P. (Inventor)
1972-01-01
A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
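The abstract above mentions a repository of general equations for basic redundancy schemes and tabular output generated as parameters are varied. The following is a small, hedged illustration of that idea (not the CARE program or its stored equations): three textbook redundancy schemes evaluated over a sweep of mission time, assuming exponentially distributed failures with a common rate.

```python
# Illustrative sketch: evaluating reliability equations for a few basic
# redundancy schemes while sweeping one mission parameter (mission time),
# printed as a table.

import math

def simplex(lam, t):
    return math.exp(-lam * t)

def duplex_parallel(lam, t):
    r = math.exp(-lam * t)
    return 1.0 - (1.0 - r) ** 2

def tmr(lam, t):
    """Triple modular redundancy with a perfect voter: 2-of-3 must survive."""
    r = math.exp(-lam * t)
    return 3.0 * r ** 2 - 2.0 * r ** 3

lam = 1.0e-4   # failures per hour per module (hypothetical)
print(f"{'t (h)':>8} {'simplex':>10} {'duplex':>10} {'TMR':>10}")
for t in (100, 1_000, 5_000, 10_000, 20_000):
    print(f"{t:>8} {simplex(lam, t):>10.5f} "
          f"{duplex_parallel(lam, t):>10.5f} {tmr(lam, t):>10.5f}")
```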
Final report for CCS cross-layer reliability visioning study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Dehon, Andre; Carter, Nicj
The geometric rate of improvement of transistor size and integrated circuit performance known as Moore's Law has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities ofmore » inexpensive computation have transformed our society through automation and ubiquitous communications. Looking forward, increasing unpredictability threatens our ability to continue scaling integrated circuits at Moore's Law rates. As the transistors and wires that make up integrated circuits become smaller, they display both greater differences in behavior among devices designed to be identical and greater vulnerability to transient and permanent faults. Conventional design techniques expend energy to tolerate this unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. However, the rising energy costs needed to compensate for increasing unpredictability are rapidly becoming unacceptable in today's environment where power consumption is often the limiting factor on integrated circuit performance and energy efficiency is a national concern. Reliability and energy consumption are both reaching key inflection points that, together, threaten to reduce or end the benefits of feature size reduction. To continue beneficial scaling, we must use a cross-layer, Jull-system-design approach to reliability. Unlike current systems, which charge every device a substantial energy tax in order to guarantee correct operation in spite of rare events, such as one high-threshold transistor in a billion or one erroneous gate evaluation in an hour of computation, cross-layer reliability schemes make reliability management a cooperative effort across the system stack, sharing information across layers so that they only expend energy on reliability when an error actually occurs. Figure 1 illustrates an example of such a system that uses a combination of information from the application and cheap architecture-level techniques to detect errors. When an error occurs, mechanisms at higher levels in the stack correct the error, efficiently delivering correct operation to the user in spite of errors at the device or circuit levels. In the realms of memory and communication, engineers have a long history of success in tolerating unpredictable effects such as fabrication variability, transient upsets, and lifetime wear using information sharing, limited redundancy, and cross-layer approaches that anticipate, accommodate, and suppress errors. Networks use a combination of hardware and software to guarantee end-toend correctness. Error-detection and correction codes use additional information to correct the most common errors, single-bit transmission errors. When errors occur that cannot be corrected by these codes, the network protocol requests re-transmission of one or more packets until the correct data is received. Similarly, computer memory systems exploit a cross-layer division of labor to achieve high performance with modest hardware. 
Rather than demanding that hardware alone provide the virtual memory abstraction, software page-fault and TLB-miss handlers allow a modest piece of hardware, the TLB, to handle the common-case operations on a cycle-by-cycle basis while infrequent misses are handled in system software. Unfortunately, mitigating logic errors is not as simple or as well researched as memory or communication systems. This lack of understanding has led to very expensive solutions. For example, triple-modular redundancy masks errors by triplicating computations in either time or area. This mitigation method imposes a 200% increase in energy consumption for every operation, not just the uncommon failure cases. At a time when computation is rapidly becoming part of our critical civilian and military infrastructure and decreasing costs for computation are fueling our economy and our well-being, we cannot afford increasingly unreliable electronics or a stagnation in capabilities per dollar, watt, or cubic meter. If researchers are able to develop techniques that tolerate the growing unpredictability of silicon devices, Moore's Law scaling should continue until at least 2022. During this 12-year time period, transistors, which are the building blocks of electronic devices, will scale their dimensions (feature sizes) from 45nm to 4.5nm.
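To make the trade-off concrete, the sketch below compares the expected per-operation energy of always-on triple-modular redundancy with a cross-layer detect-and-recover scheme; the energy and error-rate figures are illustrative assumptions, not values taken from the report.

```python
# Minimal sketch (hypothetical numbers): TMR pays for three copies of every
# operation, while a cross-layer scheme adds only a cheap detector and pays a
# large recovery cost only when an error actually occurs.

E_OP = 1.0          # energy of one unprotected operation (normalized)
E_DETECT = 0.05     # assumed overhead of a lightweight architecture-level check
E_RECOVER = 50.0    # assumed cost of a higher-level rollback/recovery action
P_ERROR = 1e-6      # assumed probability that a given operation is erroneous

energy_tmr = 3.0 * E_OP                                 # ~200% overhead, always paid
energy_xlayer = E_OP + E_DETECT + P_ERROR * E_RECOVER   # overhead only on error

print(f"TMR energy/op        : {energy_tmr:.4f}")
print(f"Cross-layer energy/op: {energy_xlayer:.4f}")
```

Under these assumed numbers the cross-layer scheme costs a few percent over an unprotected operation, versus the constant 200% tax of triplication.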
NASA Astrophysics Data System (ADS)
Boron, Sergiusz
2017-06-01
Operational safety of electrical machines and equipment depends, inter alia, on the hazards resulting from their use and on the scope of applied protective measures. The use of insufficient protection against existing hazards leads to reduced operational safety, particularly under fault conditions. On the other hand, excessive (in relation to existing hazards) level of protection may compromise the reliability of power supply. This paper analyses the explosion hazard created by earth faults in longwall power supply systems and evaluates existing protection equipment from the viewpoint of its protective performance, particularly in the context of explosion hazards, and also assesses its effect on the reliability of power supply.
The communicative construction of safety in wildland firefighting (Proceedings)
Jody Jahn
2012-01-01
This dissertation project used a two-study mixed methods approach, examining the communicative accomplishment of safety from two perspectives: high reliability organizing (Weick, Sutcliffe, & Obstfeld 1999), and safety climate (Zohar 1980). In Study One, 27 firefighters from two functionally similar wildland firefighting crews were interviewed about their crew-...
Development and Implementation of a Food Safety Knowledge Instrument
ERIC Educational Resources Information Center
Byrd-Bredbenner, Carol; Wheatley, Virginia; Schaffner, Donald; Bruhn, Christine; Blalock, Lydia; Maurer, Jaclyn
2007-01-01
Little is known about the food safety knowledge of young adults. In addition, few knowledge questionnaires and no comprehensive, criterion-referenced measure that assesses the full range of food safety knowledge could be identified. Without appropriate, valid, and reliable measures and baseline data, it is difficult to develop and implement…
30 CFR 56.15006 - Protective equipment and clothing for hazards and irritants.
Code of Federal Regulations, 2013 CFR
2013-07-01
... and irritants. 56.15006 Section 56.15006 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND... and reliable condition and used whenever hazards of process or environment, chemical hazards...
30 CFR 56.15006 - Protective equipment and clothing for hazards and irritants.
Code of Federal Regulations, 2012 CFR
2012-07-01
... and irritants. 56.15006 Section 56.15006 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND... and reliable condition and used whenever hazards of process or environment, chemical hazards...
30 CFR 56.15006 - Protective equipment and clothing for hazards and irritants.
Code of Federal Regulations, 2010 CFR
2010-07-01
... and irritants. 56.15006 Section 56.15006 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND... and reliable condition and used whenever hazards of process or environment, chemical hazards...
30 CFR 56.15006 - Protective equipment and clothing for hazards and irritants.
Code of Federal Regulations, 2014 CFR
2014-07-01
... and irritants. 56.15006 Section 56.15006 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND... and reliable condition and used whenever hazards of process or environment, chemical hazards...
30 CFR 56.15006 - Protective equipment and clothing for hazards and irritants.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and irritants. 56.15006 Section 56.15006 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND... and reliable condition and used whenever hazards of process or environment, chemical hazards...
The Development of Laboratory Safety Questionnaire for Middle School Science Teachers
ERIC Educational Resources Information Center
Akpullukcu, Simge; Cavas, Bulent
2017-01-01
The purpose of this paper is to develop a "valid and reliable laboratory safety questionnaire" which could be used to identify science teachers' understanding about laboratory safety issues during their science laboratory activities. The questionnaire was developed from a literature review and prior instruments developed on laboratory…
Vrkljan, Brenda H; Anaby, Dana
2011-02-01
Certain vehicle features can help drivers avoid collisions and/or protect occupants in the event of a crash, and therefore might play an important role when deciding which vehicle to purchase. The objective of this study was to examine the importance attributed to key vehicle features (including safety) that drivers consider when buying a car and its association with age and gender. A sample of 2,002 Canadian drivers aged 18 years and older completed a survey that asked them to rank the importance of eight vehicle features if they were to purchase a vehicle (storage, mileage, safety, price, comfort, performance, design, and reliability). ANOVA tests were performed to: (a) determine whether there were differences in the level of importance between features; and (b) examine the effect of age and gender on the importance attributed to these features. Of the features examined, safety and reliability were the most highly rated in terms of importance, whereas design and performance had the lowest ratings. Differences in safety and performance across age groups were dependent on gender. This effect was most evident in the youngest and oldest age groups. Safety and reliability were considered the most important features. Age and gender play a significant role in explaining the importance of certain features. Targeted efforts for translating safety-related information to the youngest and oldest consumers should be emphasized due to their high collision, injury, and fatality rates. Copyright © 2011 National Safety Council and Elsevier Ltd. All rights reserved.
Children's Hospitals' Solutions for Patient Safety Collaborative Impact on Hospital-Acquired Harm.
Lyren, Anne; Brilli, Richard J; Zieker, Karen; Marino, Miguel; Muething, Stephen; Sharek, Paul J
2017-09-01
To determine if an improvement collaborative of 33 children's hospitals focused on reliable best practice implementation and culture of safety improvements can reduce hospital-acquired conditions (HACs) and serious safety events (SSEs). A 3-year prospective cohort study design with a 12-month historical control population was completed by the Children's Hospitals' Solutions for Patient Safety collaborative. Identification and dissemination of best practices related to 9 HACs and SSE reduction focused on key process and culture of safety improvements. Individual hospital improvement teams leveraged the resources of a large, structured children's hospital collaborative using electronic, virtual, and in-person interactions. Thirty-three children's hospitals from across the United States volunteered to be part of the Children's Hospitals' Solutions for Patient Safety collaborative. Thirty-two met all the data submission eligibility requirements for the HAC improvement objective of this study, and 21 participated in the high-reliability culture work aimed at reducing SSEs. Significant harm reduction occurred in 8 of 9 common HACs (range 9%-71%; P < .005 for all). The mean monthly SSE rate decreased 32% (from 0.77 to 0.52; P < .001). The 12-month rolling average SSE rate decreased 50% (from 0.82 to 0.41; P < .001). Participation in a structured collaborative dedicated to implementing HAC-related best-practice prevention bundles and culture of safety interventions designed to increase the use of high-reliability organization practices resulted in significant HAC and SSE reductions. Structured collaboration and rapid sharing of evidence-based practices and tools are effective approaches to decreasing hospital-acquired harm. Copyright © 2017 by the American Academy of Pediatrics.
Kobuse, Hiroe; Morishima, Toshitaka; Tanaka, Masayuki; Murakami, Genki; Hirose, Masahiro; Imanaka, Yuichi
2014-06-01
To develop a reliable and valid questionnaire that can distinguish features of organizational culture for patient safety across subgroups such as hospitals, professions, management/non-management positions and units/wards. We developed a Hospital Organizational Culture Questionnaire based on a conceptual framework incorporating items from a review of existing literature. The questionnaire was administered to hospital staff including doctors, nurses, allied health personnel, and administrative staff at six public hospitals in Japan. Reliability and validity were assessed through exploratory factor analysis, multitrait scaling analysis, Cronbach's alpha coefficient and multiple regression analysis using staff-perceived achievement of safety as the response variable. Discriminative power across subgroups was assessed with radar chart profiling. Of the 3304 hospital staff surveyed, 2924 (88.5%) responded. After exploratory factor analysis and multitrait analysis, the finalized questionnaire was composed of 24 items in the following eight dimensions: improvement orientation, passion for mission, professional growth, resource allocation prioritization, inter-sectional collaboration, responsibility and authority, teamwork, and information sharing. Construct validity and internal consistency of dimensions were confirmed with multitrait analysis and Cronbach's alpha coefficients, respectively. Multiple regression analysis showed that improvement orientation, passion for mission, resource allocation prioritization and information sharing were significantly associated with higher achievement in safety practices. Our questionnaire tool was able to distinguish features of safety culture among different subgroups. Our questionnaire demonstrated excellent validity and reliability, and revealed distinct cultural patterns among different subgroups. Quantitative assessment of organizational safety culture with this tool may further the understanding of associated characteristics of each subgroup and provide insight into organizational readiness for patient safety improvement. © 2014 John Wiley & Sons, Ltd.
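Internal consistency of a questionnaire dimension of this kind is commonly summarized with Cronbach's alpha. The sketch below is a minimal, generic implementation on toy response data; it is not the authors' analysis code and the scores are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; items is a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# toy data: 6 respondents answering a 3-item dimension on a 5-point scale
scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 2, 3]])
print(round(cronbach_alpha(scores), 3))
```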
Christiansen, C; Abreu, B; Ottenbacher, K; Huffman, K; Masel, B; Culpepper, R
1998-08-01
This report describes a reliability study using a prototype computer-simulated virtual environment to assess basic daily living skills in a sample of persons with traumatic brain injury (TBI). The benefits of using virtual reality in training for situations where safety is a factor have been established in defense and industry, but have not been demonstrated in rehabilitation. Thirty subjects with TBI receiving comprehensive rehabilitation services at a residential facility. An immersive virtual kitchen was developed in which a meal preparation task involving multiple steps could be performed. The prototype was tested using subjects who completed the task twice within 7 days. The stability of performance was estimated using intraclass correlation coefficients (ICCs). The ICC value for total performance based on all steps involved in the meal preparation task was .73. When three items with low variance were removed the ICC improved to .81. Little evidence of vestibular optical side-effects was noted in the subjects tested. Adequate initial reliability exists to continue development of the environment as an assessment and training prototype for persons with brain injury.
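Test-retest stability of the kind reported here is often quantified with an intraclass correlation coefficient. The sketch below computes a one-way random-effects ICC(1,1) on toy subject-by-session scores; the paper does not state which ICC form was used, so this is only an illustrative variant.

```python
import numpy as np

def icc_oneway(data: np.ndarray) -> float:
    """One-way random-effects ICC(1,1); data is subjects x sessions."""
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)        # between-subjects MS
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subjects MS
    return (msb - msw) / (msb + (k - 1) * msw)

# toy task scores for 5 subjects measured in 2 sessions, one week apart
scores = np.array([[12, 13], [9, 10], [15, 14], [7, 8], [11, 12]], dtype=float)
print(round(icc_oneway(scores), 2))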
Fuel cells provide a revenue-generating solution to power quality problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, J.M. Jr.
Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care, and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality, and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required because of health and safety or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). These also include customers with industrial or commercial processes which cannot tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; the use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.
Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis
NASA Technical Reports Server (NTRS)
Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William
2009-01-01
This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both: A broad perspective on data analysis collection and evaluation issues. A narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining a risk informed decision making environment that is being sought by NASA requirements and procedures such as 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
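As a hedged illustration of the kind of Bayesian data analysis the handbook covers, the sketch below performs a conjugate gamma-Poisson update of a component failure rate from exposure data; the prior parameters and observed counts are invented, not drawn from the document.

```python
from scipy import stats

# Gamma prior on the failure rate lambda (per hour), hypothetical parameters
alpha0, beta0 = 0.5, 1000.0             # prior shape / prior "exposure" (hours)

failures, exposure_hours = 2, 5000.0    # hypothetical observed data

# Conjugate update: posterior is Gamma(alpha0 + k, beta0 + T) in rate form
alpha_post = alpha0 + failures
beta_post = beta0 + exposure_hours

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate  :", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```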
The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.
de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M
2017-06-01
This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlation patterns with the measured safety outcomes. Finally, results provided evidence of discriminant validity between the SCEQ and safety climate. We conclude that the SCEQ is a valid, reliable instrument supported by a theoretical framework, and it is useful to measure the enactment of safety culture in NPPs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computational toxicity in 21st century safety sciences (China ...
presentation at the Joint Meeting of Analytical Toxicology and Computational Toxicology Committee (Chinese Society of Toxicology) International Workshop on Advanced Chemical Safety Assessment Technologies on 11 May 2016, Fuzhou University, Fuzhou China presentation at the Joint Meeting of Analytical Toxicology and Computational Toxicology Committee (Chinese Society of Toxicology) International Workshop on Advanced Chemical Safety Assessment Technologies on 11 May 2016, Fuzhou University, Fuzhou China
NASA Technical Reports Server (NTRS)
1994-01-01
This document is the product of the KSC Survey and Audit Working Group composed of civil service and contractor Safety, Reliability, and Quality Assurance (SR&QA) personnel. The program described herein provides standardized terminology, uniformity of survey and audit operations, and emphasizes process assessments rather than a program based solely on compliance. The program establishes minimum training requirements, adopts an auditor certification methodology, and includes survey and audit metrics for the audited organizations as well as the auditing organization.
Reliability, Safety and Error Recovery for Advanced Control Software
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2003-01-01
For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1988-01-01
The aims and methods of aircraft cockpit automation are reviewed from a human-factors perspective. Consideration is given to the mixed pilot reception of increased automation, government concern with the safety and reliability of highly automated aircraft, the formal definition of automation, and the ground-proximity warning system and accidents involving controlled flight into terrain. The factors motivating automation include technology availability; safety; economy, reliability, and maintenance; workload reduction and two-pilot certification; more accurate maneuvering and navigation; display flexibility; economy of cockpit space; and military requirements.
NASA human factors programmatic overview
NASA Technical Reports Server (NTRS)
Connors, Mary M.
1992-01-01
Human factors addresses humans in their active and interactive capacities, i.e., in the mental and physical activities that they perform and in the contributions they make to achieving the goals of the mission. The overall goal of space human factors in NASA is to support the safety, productivity, and reliability of both the on-board crew and the ground support staff. Safety and reliability are fundamental requirements that human factors shares with other disciplines, while productivity represents the defining contribution of the human factors discipline.
Improved fault tolerance for air bag release in automobiles
NASA Astrophysics Data System (ADS)
Yeshwanth Kumar, C. H.; Prudhvi Prasad, P.; Uday Shankar, M.; Shanmugasundaram, M.
2017-11-01
To increase the reliability of the airbag system in automobiles, and in turn the safety of the vehicle, an improved airbag release system is required. This project applies the Triple Modular Redundancy (TMR) technique: three sensors, each interfaced with its own microcontroller, feed a software voter that produces a majority output, which is passed to the air compressor to release the airbag. With this arrangement, the reliability and safety of the entire system are increased.
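A minimal sketch of the majority-voting step is shown below, assuming three analog sensor channels and a simple agreement tolerance; the threshold, units, and interface are hypothetical, not those of the project hardware.

```python
def majority_vote(a: float, b: float, c: float, tol: float = 0.05) -> float:
    """Return the value agreed on by at least two of three sensor channels.

    Channels agree when their readings differ by no more than `tol`.
    Raises if no two channels agree (all three diverge).
    """
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2.0
    raise RuntimeError("no majority: all three channels disagree")

# toy readings from three crash-deceleration sensors; channel c has failed high
print(majority_vote(4.1, 4.2, 9.9))   # -> 4.15, the faulty channel is outvoted
```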
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
14 CFR 415.123 - Computing systems and software.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
Patient safety: Needs and initiatives.
Bion, Julian
2008-04-01
Patient safety has become a major defining issue for healthcare at the beginning of the 21st century. Viewed from the perspective of reliability of delivery of best practice, healthcare systems demonstrate a degree of imperfection which would not be tolerated in industry. In part, this is because of uncertainty about what constitutes best practice, combined with complex interventions in complex systems. The acutely ill patient is particularly challenging, and as the majority of admissions to hospitals are emergencies, it makes sense to focus on this group as a coherent entity. Changing clinical behavior is central to improving safety, and this requires a systems-wide approach integrating care throughout the patient journey, combined with incorporating reliability training in life-long learning.
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules such as, Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update have been described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment. ORMS can support in the decision making process of operators and managers in Nuclear Power Plants.
Abusive behavior is barrier to high-reliability health care systems, culture of patient safety.
Cassirer, C; Anderson, D; Hanson, S; Fraser, H
2000-11-01
Addressing abusive behavior in the medical workplace presents an important opportunity to deliver on the national commitment to improve patient safety. Fundamentally, the issue of patient safety and the issue of abusive behavior in the workplace are both about harm. Undiagnosed and untreated, abusive behavior is a barrier to creating high reliability service delivery systems that ensure patient safety. Health care managers and clinicians need to improve their awareness, knowledge, and understanding of the issue of workplace abuse. The available research suggests there is a high prevalence of workplace abuse in medicine. Both administrators at the blunt end and clinicians at the sharp end should consider learning new approaches to defining and treating the problem of workplace abuse. Eliminating abusive behavior has positive implications for preventing and controlling medical injury and improving organizational performance.
NASA Astrophysics Data System (ADS)
Dulo, D. A.
Safety critical software systems permeate spacecraft, and in a long term venture like a starship would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure in long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe reliable software systems through focusing not on the traditional safety/reliability engineering paradigms but rather by focusing on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time as a safety valve should failure occur to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.
The predictive validity of safety climate.
Johnson, Stephen E
2007-01-01
Safety professionals have increasingly turned their attention to social science for insight into the causation of industrial accidents. One social construct, safety climate, has been examined by several researchers [Cooper, M. D., & Phillips, R. A. (2004). Exploratory analysis of the safety climate and safety behavior relationship. Journal of Safety Research, 35(5), 497-512; Gillen, M., Baltz, D., Gassel, M., Kirsch, L., & Vacarro, D. (2002). Perceived safety climate, job Demands, and coworker support among union and nonunion injured construction workers. Journal of Safety Research, 33(1), 33-51; Neal, A., & Griffin, M. A. (2002). Safety climate and safety behaviour. Australian Journal of Management, 27, 66-76; Zohar, D. (2000). A group-level model of safety climate: Testing the effect of group climate on microaccidents in manufacturing jobs. Journal of Applied Psychology, 85(4), 587-596; Zohar, D., & Luria, G. (2005). A multilevel model of safety climate: Cross-level relationships between organization and group-level climates. Journal of Applied Psychology, 90(4), 616-628] who have documented its importance as a factor explaining the variation of safety-related outcomes (e.g., behavior, accidents). Researchers have developed instruments for measuring safety climate and have established some degree of psychometric reliability and validity. The problem, however, is that predictive validity has not been firmly established, which reduces the credibility of safety climate as a meaningful social construct. The research described in this article addresses this problem and provides additional support for safety climate as a viable construct and as a predictive indicator of safety-related outcomes. This study used 292 employees at three locations of a heavy manufacturing organization to complete the 16 item Zohar Safety Climate Questionnaire (ZSCQ) [Zohar, D., & Luria, G. (2005). A multilevel model of safety climate: Cross-level relationships between organization and group-level climates. Journal of Applied Psychology, 90(4), 616-628]. In addition, safety behavior and accident experience data were collected for 5 months following the survey and were statistically analyzed (structural equation modeling, confirmatory factor analysis, exploratory factor analysis, etc.) to identify correlations, associations, internal consistency, and factorial structures. Results revealed that the ZSCQ: (a) was psychometrically reliable and valid, (b) served as an effective predictor of safety-related outcomes (behavior and accident experience), and (c) could be trimmed to an 11 item survey with little loss of explanatory power. Practitioners and researchers can use the ZSCQ with reasonable certainty of the questionnaire's reliability and validity. This provides a solid foundation for the development of meaningful organizational interventions and/or continued research into social factors affecting industrial accident experience.
Space Shuttle Communications Coverage Analysis for Thermal Tile Inspection
NASA Technical Reports Server (NTRS)
Kroll, Quin D.; Hwu, Shian U.; Upanavage, Matthew; Boster, John P.; Chavez, Mark A.
2009-01-01
The space shuttle ultra-high frequency Space-to-Space Communication System has to provide adequate communication coverage for astronauts who are performing thermal tile inspection and repair on the underside of the space shuttle orbiter (SSO). Careful planning and quantitative assessment are necessary to ensure successful system operations and mission safety in this work environment. This study assesses communication systems performance for astronauts who are working in the underside, non-line-of-sight shadow region on the space shuttle. All of the space shuttle and International Space Station (ISS) transmitting antennas are blocked by the SSO structure. To ensure communication coverage at planned inspection worksites, the signal strength and link margin between the SSO/ISS antennas and the extravehicular activity astronauts, whose line-of-sight is blocked by vehicle structure, was analyzed. Investigations were performed using rigorous computational electromagnetic modeling techniques. Signal strength was obtained by computing the reflected and diffracted fields along the signal propagation paths between transmitting and receiving antennas. Radio frequency (RF) coverage was determined for thermal tile inspection and repair missions using the results of this computation. Analysis results from this paper are important in formulating the limits on reliable communication range and RF coverage at planned underside inspection and repair worksites.
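For orientation, the sketch below computes a link margin from free-space path loss plus a lumped extra loss standing in for structural blockage and diffraction; all numbers (transmit power, gains, distance, frequency, sensitivity) are illustrative assumptions, not the values used in the rigorous computational electromagnetic analysis described above.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def link_margin_db(p_tx_dbm, g_tx_dbi, g_rx_dbi, distance_m, freq_hz,
                   extra_loss_db, rx_sensitivity_dbm):
    received = (p_tx_dbm + g_tx_dbi + g_rx_dbi
                - fspl_db(distance_m, freq_hz) - extra_loss_db)
    return received - rx_sensitivity_dbm

# hypothetical UHF space-to-space link; the 20 dB extra loss stands in for
# blockage/diffraction around the orbiter structure
print(round(link_margin_db(p_tx_dbm=30, g_tx_dbi=3, g_rx_dbi=0,
                           distance_m=200, freq_hz=414e6,
                           extra_loss_db=20, rx_sensitivity_dbm=-95), 1))
```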
Computing design principles for robotic telescopes
NASA Astrophysics Data System (ADS)
Bowman, Mark K.; Ford, Martyn J.; Lett, Robert D. J.; McKay, Derek J.; Mücke-Herzberg, Dorothy; Norbury, Martin A.
2002-12-01
Telescopes capable of making observing decisions independent of human supervision have become a reality in the 21st century. These new telescopes are likely to replace automated systems as the telescopes of choice. A fully robotic implementation offers not only reduced operating costs, but also significant gains in scientific output over automated or remotely operated systems. The design goals are to maximise the telescope operating time and minimise the cost of diagnosis and repair. However, the demands of a robotic telescope greatly exceed those of its remotely operated counterpart, and the design of the computing system is key to its operational performance. This paper outlines the challenges facing the designer of these computing systems, and describes some of the principles of design which may be applied. Issues considered include automatic control and efficiency, system awareness, robustness and reliability, access, security and safety, as well as ease-of-use and maintenance. These requirements cannot be considered simply within the context of the application software. Hence, this paper takes into account operating system, hardware and environmental issues. Consideration is also given to accommodating different levels of manual control within robotic telescopes, as well as methods of accessing and overriding the system in the event of failure.
A Survey of Techniques for Modeling and Improving Reliability of Computing Systems
Mittal, Sparsh; Vetter, Jeffrey S.
2015-04-24
Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.
Fault-tolerant building-block computer study
NASA Technical Reports Server (NTRS)
Rennels, D. A.
1978-01-01
Ultra-reliable core computers are required for improving the reliability of complex military systems. Such computers can provide reliable fault diagnosis, failure circumvention, and, in some cases, serve as an automated repairman for their host systems. A small set of building-block circuits which can be implemented as single very large scale integration (VLSI) devices, and which can be used with off-the-shelf microprocessors and memories to build self-checking computer modules (SCCM), is described. Each SCCM is a microcomputer which is capable of detecting its own faults during normal operation and is designed to communicate with other identical modules over one or more Mil Standard 1553A buses. Several SCCMs can be connected into a network with backup spares to provide fault-tolerant operation, i.e., automated recovery from faults. Alternative fault-tolerant SCCM configurations are discussed along with the cost and reliability associated with their implementation.
Development and Piloting of a Food Safety Audit Tool for the Domestic Environment.
Borrusso, Patricia; Quinlan, Jennifer J
2013-12-04
Survey and observation studies suggest that consumers often mishandle food in the home. There is a need for a standardized tool that allows researchers to objectively evaluate the prevalence and identify the nature of food safety risks in the domestic environment. An audit tool was developed to measure compliance with recommended sanitation, refrigeration, and food storage conditions in the domestic kitchen. The tool was piloted by four researchers who independently completed the inspection in 22 homes. Audit tool questions were evaluated for reliability using the κ statistic. Questions that were not sufficiently reliable (κ < 0.5) or did not provide direct evidence of risk were revised or eliminated from the final tool. Piloting the audit tool found good reliability among 18 questions; 6 questions were revised and 28 were eliminated, resulting in a final 24-question tool. The audit tool was able to identify potential food safety risks, including evidence of pest infestation (27%), incorrect refrigeration temperature (73%), and lack of hot water (>43 °C, 32%). The audit tool developed here provides an objective measure for researchers to observe and record the most prevalent food safety risks in consumers' kitchens and potentially compare risks among consumers of different demographics.
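Inter-rater reliability of binary audit items, as referenced above with the κ statistic, can be computed as in the minimal sketch below; the two raters' judgments are toy data, not results from the 22-home pilot.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n          # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# toy yes/no audit item scored independently by two researchers in 10 homes
r1 = ["y", "y", "n", "y", "n", "n", "y", "y", "n", "y"]
r2 = ["y", "n", "n", "y", "n", "y", "y", "y", "n", "y"]
print(round(cohens_kappa(r1, r2), 2))
```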
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
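As a small illustration of one class of reliability growth model, the sketch below fits the Goel-Okumoto NHPP mean-value function to hypothetical cumulative failure counts and derives a short-mission reliability estimate; it is not one of the specific models studied in the cited research.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures by test time t (NHPP mean-value function)."""
    return a * (1.0 - np.exp(-b * t))

# hypothetical cumulative failure counts observed at weekly test intervals
t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([8, 14, 19, 22, 25, 27, 28, 29], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cum_failures, p0=(30.0, 0.3))

# reliability over the next week of operation, given testing stopped at t = 8
t_end, mission = 8.0, 1.0
expected_new = goel_okumoto(t_end + mission, a_hat, b_hat) - goel_okumoto(t_end, a_hat, b_hat)
print(f"a={a_hat:.1f}, b={b_hat:.2f}, R(next week)={np.exp(-expected_new):.3f}")
```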
ERIC Educational Resources Information Center
Islam, Muhammad Faysal
2013-01-01
Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…
An approximation formula for a class of fault-tolerant computers
NASA Technical Reports Server (NTRS)
White, A. L.
1986-01-01
An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
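The flavor of the underlying Markov models can be illustrated with a small continuous-time chain for a duplex system with imperfect reconfiguration coverage, solved exactly via the matrix exponential; the rates, coverage, and mission time below are assumed values, and the approximation formula itself is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical duplex processor with reconfiguration: two active units fail at
# rate lam each; a unit failure is detected and the system reconfigures to
# simplex with coverage c, otherwise the system fails immediately.
lam, c, t = 1e-4, 0.999, 10.0     # failure rate per hour, coverage, mission hours

# generator matrix over states [duplex, simplex, failed]
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
    [0.0,      -lam,          lam],
    [0.0,       0.0,          0.0],
])

p0 = np.array([1.0, 0.0, 0.0])    # start in the duplex state
p_t = p0 @ expm(Q * t)            # state probabilities at mission time t
print(f"probability of system failure by t={t} h: {p_t[2]:.3e}")
```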
Schmitz, Kathryn H; Harnack, Lisa; Fulton, Janet E; Jacobs, David R; Gao, Shujun; Lytle, Leslie A; Van Coevering, Pam
2004-11-01
Sedentary behaviors, like television viewing, are positively associated with overweight among young people. To monitor national health objectives for sedentary behaviors in young adolescents, this project developed and assessed the reliability and validity of a brief questionnaire to measure weekly television viewing, usual television viewing, and computer use by middle school children. Reliability and validity of the Youth Risk Behavior Survey (YRBS) question on weekday television viewing also were examined. A brief, five-item television and computer use questionnaire was completed twice, one week apart, by 245 middle school children. To concurrently assess validity, students also completed television and computer use logs for seven days. Among all students, Spearman correlations for test-retest reliability for television viewing and computer use ranged from 0.55 to 0.68. Spearman correlations between the first questionnaire and the seven-day log produced the following results: YRBS question for weekday television viewing (0.46), weekend television viewing (0.37), average television viewing over the week (0.47), and computer use (0.39). Methods comparison analysis showed a mean difference (hours/week) between answers to questionnaire items and the log of -0.04 (1.70 standard deviation [SD]) hours for weekday television, -0.21 (2.54 SD) for weekend television, -0.09 (1.75 SD) for average television over the week, and 0.68 (1.26 SD) for computer use. The YRBS weekday television viewing question, and the newly developed questions to assess weekend television viewing, average television viewing, and computer use, produced adequate reliability and validity for surveillance of middle school students.
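The validity statistics above (Spearman correlations and mean differences against the seven-day log) can be reproduced on toy data with a few lines of SciPy, as in the hedged sketch below; the hours shown are invented, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# hypothetical weekly TV hours: questionnaire answers vs. 7-day log totals
questionnaire = np.array([14, 7, 21, 10, 3, 18, 12, 6, 25, 9], dtype=float)
seven_day_log = np.array([15, 6, 19, 12, 4, 16, 10, 8, 22, 9], dtype=float)

rho, p_value = spearmanr(questionnaire, seven_day_log)
diff = questionnaire - seven_day_log
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
print(f"mean difference = {diff.mean():.2f} h/week (SD = {diff.std(ddof=1):.2f})")
```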
Testing the reliability and validity of a measure of safety climate.
Anderson, E; McGovern, P M; Kochevar, L; Vesley, D; Gershon, R
2000-01-01
The lack of compliance with universal precautions (UP) is well documented across a wide variety of healthcare professions and has been reported both before and after the enactment of the Occupational Safety and Health Administration's Bloodborne Pathogens Standard. Gershon, Karkashian, and Felknor (1994) found that several factors correlated significantly with healthcare workers' lack of compliance with UP, including a measure of organizational safety climate (e.g., the employees' perception of their organizational culture and practices regarding safety). We conducted a secondary analysis using data from a cross-sectional survey of a convenience sample of 1,746 healthcare workers at risk of occupational exposure to bloodborne pathogens to assess the validity and reliability of Gershon's measure of safety climate. Findings revealed no relationship between safety climate and employees' gender, age, education, tenure in position, profession, hours worked per day, perceived risk, attitude toward risk, and training. An association was demonstrated between safety climate and (1) healthcare worker compliance with UP and (2) the availability of personal protective equipment, providing support for the construct validity of this measure of safety climate. These findings could be used by occupational health professionals to assess employees' perceptions of the safety culture and practices in the workplace and to guide the institution's risk management efforts in association with U.P.
Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko
2006-10-11
In this paper, a novel methodology for batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involving several safety objects (e.g., sensors, controllers, valves) is activated during the operational stage. The performance of the safety objects is evaluated by the dynamic simulation and a fault propagation model is generated. By using the fault propagation model, an improved fault tree analysis (FTA) method using switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be considered as unavailability of safety objects that can cause accidents in a plant. Finally, the rank of each safety object is formulated as a performance index (PI) and can be estimated using importance measures. PI shows the prioritization of safety objects that should be investigated for a safety improvement program in the plant. The output of this method can be used for optimal policy in safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident at a polyvinyl chloride (PVC) batch process which consumes the hazardous material vinyl chloride monomer (VCM).
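A minimal illustration of the fault tree arithmetic that such an FTA produces is sketched below, using independent AND/OR gates; the event structure and unavailability values are hypothetical and much simpler than the simulator-derived fault propagation model described above.

```python
def gate_or(*p):
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def gate_and(*p):
    """Probability that all independent events occur."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

# hypothetical unavailabilities of safety objects guarding against overpressure
p_sensor, p_controller, p_relief_valve = 2e-3, 1e-3, 5e-4

# loss of containment requires the automatic trip chain to fail (sensor OR
# controller unavailable) AND the relief valve to be unavailable on demand
p_trip_chain_fails = gate_or(p_sensor, p_controller)
p_loc = gate_and(p_trip_chain_fails, p_relief_valve)
print(f"P(loss of containment on demand) = {p_loc:.2e}")
```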
Memorial Hermann: high reliability from board to bedside.
Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire
2013-06-01
In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.
NASA Technical Reports Server (NTRS)
Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.
1990-01-01
A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors respect the network reliability maximization by finding the optimal multi-state resource assignment, which is one resource to each arc. However, a disaster may cause correlated failures for the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
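Network reliability from a set of minimal paths can be evaluated, for small networks with independent arcs, by inclusion-exclusion, as in the sketch below; this is a brute-force illustration, not the recursive sum of disjoint products or the correlated-failure model used in the article.

```python
from itertools import combinations

def path_union_reliability(minimal_paths, p):
    """P(at least one minimal path has all arcs working), by inclusion-exclusion
    over independent arc reliabilities p[arc]."""
    total = 0.0
    for r in range(1, len(minimal_paths) + 1):
        for combo in combinations(minimal_paths, r):
            arcs = set().union(*combo)
            term = 1.0
            for a in arcs:
                term *= p[a]
            total += (-1) ** (r + 1) * term
    return total

# toy bridge network: source-to-sink minimal paths over arcs a..e
paths = [{"a", "d"}, {"b", "e"}, {"a", "c", "e"}, {"b", "c", "d"}]
p = {arc: 0.9 for arc in "abcde"}
print(round(path_union_reliability(paths, p), 4))   # -> 0.9785 for this example
```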
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subsystem, system, or vessel to determine the least critical consequence. (b) All automatic control, remote control, safety control, and alarm systems must be failsafe. ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety...
NASA Astrophysics Data System (ADS)
Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok
Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of an accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; thus, a reliable and timely data dissemination service is the key building block of a VANET. A data mulling technique combined with three strategies, network coding, erasure coding, and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles travelling in the opposite direction on a highway are exploited as data mules, mobile nodes that physically deliver data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network-coding-based strategy outperforms the erasure-coding- and repetition-based strategies.
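A rough way to see why coding helps in the data-mulling setting is to compare the probability of recovering a k-packet message under repetition versus an (n, k) erasure code when each mule delivers its packet independently; the sketch below uses assumed parameters and ignores the correlated-loss and network-coding refinements analyzed in the paper.

```python
from math import comb

def repetition_success(p, k, n):
    """k original packets, each repeated n/k times across mules; the message is
    recovered only if every original packet arrives via at least one copy."""
    copies = n // k
    return (1.0 - (1.0 - p) ** copies) ** k

def erasure_success(p, k, n):
    """(n, k) erasure code: any k of the n coded packets suffice for decoding."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

p, k, n = 0.7, 4, 12   # assumed per-mule delivery probability and packet budget
print(f"repetition: {repetition_success(p, k, n):.3f}")
print(f"erasure   : {erasure_success(p, k, n):.3f}")
```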
RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chokchai "Box" Leangsuksun
2011-05-31
Our project is a multi-institutional research effort that adopts an interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including for GPGPU-based HPC systems; and HPC resilience runtime and tools.
75 FR 22806 - Agency Information Collection Request; 60-Day Public Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-30
... Safety and Availability. Abstract: The NBCUS is a biennial survey of the blood collection and utilization... safety of all blood products. The objective of the NBCUS is to produce reliable and accurate estimates of national and regional collections, utilization, and safety of all blood products--red blood cells, fresh...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-20
... NUCLEAR REGULATORY COMMISSION [NRC-2013-0098] Embedded Digital Devices in Safety-Related Systems... (NRC) is issuing for public comment Draft Regulatory Issue Summary (RIS) 2013-XX, ``Embedded Digital... requirements for the quality and reliability of basic components with embedded digital devices. DATES: Submit...
Overcoming dysfunctional momentum: Organizational safety as a social achievement
Michelle A. Barton; Kathleen M. Sutcliffe
2009-01-01
Research on organizational safety and reliability largely has emphasized system-level structures and processes neglecting the more micro-level, social processes necessary to enact organizational safety. In this qualitative study we remedy this gap by exploring these processes in the context of wildland fire management. In particular, using interview data gathered from...
AHRQ's hospital survey on patient safety culture: psychometric analyses.
Blegen, Mary A; Gearhart, Susan; O'Brien, Roxanne; Sehgal, Niraj L; Alldredge, Brian K
2009-09-01
This project analyzed the psychometric properties of the Agency for Healthcare Research and Quality Hospital Survey on Patient Safety Culture (HSOPSC) including factor structure, interitem reliability and intraclass correlations, usefulness for assessment, predictive validity, and sensitivity. The survey was administered to 454 health care staff in 3 hospitals before and after a series of multidisciplinary interventions designed to improve safety culture. Respondents (before, 434; after, 368) included nurses, physicians, pharmacists, and other hospital staff members. Factor analysis partially confirmed the validity of the HSOPSC subscales. Interitem consistency reliability was above 0.7 for 5 subscales; the staffing subscale had the lowest reliability coefficients. The intraclass correlation coefficients, agreement among the members of each unit, were within recommended ranges. The pattern of high and low scores across the subscales of the HSOPSC in the study hospitals were similar to the sample of Pacific region hospitals reported by the Agency for Healthcare Research and Quality and corresponded to the proportion of items in each subscale that are worded negatively (reverse scored). Most of the unit and hospital dimensions were correlated with the Safety Grade outcome measure in the tool. Overall, the tool was shown to have moderate-to-strong validity and reliability, with the exception of the staffing subscale. The usefulness in assessing areas of strength and weakness for hospitals or units among the culture subscales is questionable. The culture subscales were shown to correlate with the perceived outcomes, but further study is needed to determine true predictive validity.
Reliability and safety of the electrical power supply complex of the Hanford production reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robbins, F.D.
Safety has been and must continue to be the inviolable modulus by which the operation of a nuclear reactor must be judged. A malfunction in any reactor may well result in a release of fission products which may dissipate over a wide geographical area. Such dissipation may place the health, happiness and even the lives of the people in the region in serious jeopardy. As a result, the property damage and liability cost may reach astronomical values in the order of magnitude of billions of dollars. Reliability of the electrical network is an indispensable factor in attaining a high order of safety assurance. Progress in the peaceful use of atomic energy may take the form of electrical power generation using the nuclear reactor as a source of thermal energy. In view of these factors it seems appropriate and profitable that a critical engineering study be made of the safety and reliability of the Hanford reactors without regard to cost economics. This individual and independent technical engineering analysis was made without regard to Hanford traditional engineering and administration assignments. The main objective has been to focus attention on areas which seem to merit further detailed study, on conditions which seem to need adjustment, but most of all on those changes which will improve reactor safety. This report is the result of such a study.
INFLUENCES OF RESPONSE RATE AND DISTRIBUTION ON THE CALCULATION OF INTEROBSERVER RELIABILITY SCORES
Rolider, Natalie U.; Iwata, Brian A.; Bullock, Christopher E.
2012-01-01
We examined the effects of several variations in response rate on the calculation of total, interval, exact-agreement, and proportional reliability indices. Trained observers recorded computer-generated data that appeared on a computer screen. In Study 1, target responses occurred at low, moderate, and high rates during separate sessions so that reliability results based on the four calculations could be compared across a range of values. Total reliability was uniformly high, interval reliability was spuriously high for high-rate responding, proportional reliability was somewhat lower for high-rate responding, and exact-agreement reliability was the lowest of the measures, especially for high-rate responding. In Study 2, we examined the separate effects of response rate per se, bursting, and end-of-interval responding. Response rate and bursting had little effect on reliability scores; however, the distribution of some responses at the end of intervals decreased interval reliability somewhat, proportional reliability noticeably, and exact-agreement reliability markedly. PMID:23322930
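The interval and exact-agreement indices compared above can be illustrated with a short Python sketch; the per-interval counts below are hypothetical, and the calculations follow one common definition of each index rather than the exact procedures of the study.

import numpy as np

def interval_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Proportion of intervals in which both observers agree that the response
    did or did not occur (occurrence scored as count > 0)."""
    return float(np.mean((a > 0) == (b > 0)))

def exact_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Proportion of intervals in which both observers recorded the same count."""
    return float(np.mean(a == b))

# Hypothetical per-interval response counts from two observers (10-s intervals).
obs1 = np.array([0, 2, 3, 1, 0, 4, 2, 0, 1, 3])
obs2 = np.array([0, 2, 2, 1, 0, 5, 2, 1, 1, 3])
print(f"interval agreement: {interval_agreement(obs1, obs2):.2f}")
print(f"exact agreement:    {exact_agreement(obs1, obs2):.2f}")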
NASA Astrophysics Data System (ADS)
Sil, Arjun; Longmailai, Thaihamdau
2017-09-01
The lateral displacement of a Reinforced Concrete (RC) frame building during an earthquake has an important impact on structural stability and integrity. However, seismic analysis and design of RC buildings require particular care because of their complex behavior: the performance of the structure depends on many influencing parameters and other inherent uncertainties. A reliability approach takes these factors and design uncertainties into account so that the safety level, or probability of failure, can be ascertained. The present study aims to assess the reliability of the seismic performance of a four-storey residential RC building located in Zone V, as per the code provisions of Indian Standard IS: 1893-2002. The reliability assessment was performed by deriving, through regression, an explicit expression for the maximum lateral roof displacement as a failure function. A total of 319 four-storey RC buildings were analyzed by the linear static method using SAP2000, and the change in lateral roof displacement with variation of the parameters (column dimension, beam dimension, grade of concrete, floor height, and total weight of the structure) was observed. A generalized relation was established by regression that can be used to estimate the expected lateral displacement for those parameters. A comparison between the displacements obtained from the analysis and from the derived equation shows that the proposed relation can be used directly to determine the expected maximum lateral displacement. The data obtained from the statistical computations were then used to obtain the probability of failure and the reliability.
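As a hedged illustration of how a regression-based failure function can be turned into a probability of failure, the Python sketch below runs a crude Monte Carlo simulation; the regression coefficients, input distributions, and displacement limit are hypothetical placeholders, not the values derived in the study.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical random inputs (means and spreads are illustrative, not the paper's values).
col_dim = rng.normal(0.45, 0.02, n)      # column dimension (m)
fck     = rng.normal(25.0, 2.5, n)       # concrete grade (MPa)
weight  = rng.normal(5000.0, 400.0, n)   # total seismic weight (kN)

# Hypothetical regression for maximum roof displacement (mm); in the study such an
# expression is fitted to the SAP2000 analysis results.
disp = 60.0 - 45.0 * col_dim - 0.4 * fck + 0.004 * weight

limit = 55.0                 # allowable roof displacement (mm), illustrative
g = limit - disp             # limit-state function: g < 0 means failure
pf = float(np.mean(g < 0.0))
beta = float(-norm.ppf(pf))  # corresponding reliability index
print(f"P_f = {pf:.4f}, reliability index beta = {beta:.2f}")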
Using Computational Toxicology to Enable Risk-Based ...
Slide presentation at Drug Safety Gordon Research Conference 2016 on research efforts in NCCT to enable Computational Toxicology to support risk assessment.
Computer vision in the poultry industry
USDA-ARS?s Scientific Manuscript database
Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...
Finn, Jerry; Atkinson, Teresa
2009-11-01
The Technology Safety Project of the Washington State Coalition Against Domestic Violence was designed to increase awareness and knowledge of technology safety issues for domestic violence victims, survivors, and advocacy staff. The project used a "train-the-trainer" model and provided computer and Internet resources to domestic violence service providers to (a) increase safe computer and Internet access for domestic violence survivors in Washington, (b) reduce the risk posed by abusers by educating survivors about technology safety and privacy, and (c) increase the ability of survivors to help themselves and their children through information technology. Evaluation of the project suggests that the program is needed, useful, and effective. Consumer satisfaction was high, and there was perceived improvement in computer confidence and knowledge of computer safety. Areas for future program development and further research are discussed.
Göras, Camilla; Wallentin, Fan Yang; Nilsson, Ulrica; Ehrenberg, Anna
2013-03-19
Tens of millions of patients worldwide suffer avoidable disabling injuries and death every year. Measuring the safety climate in health care is an important step in improving patient safety. The most commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The aim of the present study was to establish the validity and reliability of the translated version of the SAQ. The SAQ was translated and adapted to the Swedish context, and the survey was then carried out with 374 respondents in the operating room (OR) setting; data were received from three hospitals, for a total of 237 responses. Cronbach's alpha and confirmatory factor analysis (CFA) were used to evaluate the reliability and validity of the instrument. The Cronbach's alpha values for each of the factors of the SAQ ranged between 0.59 and 0.83. The CFA and its goodness-of-fit indices (SRMR 0.055, RMSEA 0.043, CFI 0.98) showed good model fit. The factors safety climate, teamwork climate, job satisfaction, perceptions of management, and working conditions showed moderate to high intercorrelations, whereas the factor stress recognition had no significant correlation with teamwork climate, perception of management, or job satisfaction. The Swedish translation of the SAQ (OR version) thus has good construct validity, although the reliability analysis suggested that some items need further refinement to establish sound internal consistency. As suggested by previous research, the SAQ is potentially a useful tool for evaluating safety climate, but further psychometric testing with larger samples is required to establish the psychometric properties of the instrument for use in Sweden.
Integrating Safety in Developing a Variable Speed Limit System
DOT National Transportation Integrated Search
2014-01-01
Disaggregate safety studies benefit from reliable surveillance systems that provide detailed real-time traffic and weather data. This information could help capture micro-level influences of the hazardous factors that might lead to a crash....
Boada-Grau, Joan; Sánchez-García, José-Carlos; Prizmic-Kuzmica, Aldo-Javier; Vigil-Colet, Andreu
2012-03-01
In this article, we study the psychometric properties of a short scale (TRANS-18) which was designed to detect safe behaviors (personal and vehicle-related) and psychophysiological disorders. 244 drivers participated in the study, including drivers of freight transport vehicles (regular, dangerous and special), cranes, and passenger transport (regular transport and chartered coaches), ambulances and taxis. After carrying out an exploratory factor analysis of the scale, the findings show a structure comprised of three factors related to psychophysiological disorders, and to both personal and vehicle-related safety behaviors. Furthermore, these three factors had adequate reliability and all three also showed validity with regard to burnout, fatigue and job tension. In short, this scale may be ideally suited for adequately identifying the safety behaviors and safety problems of transport drivers. Future research could use the TRANS-18 as a screening tool in combination with other instruments.
Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2016-01-01
To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes and enhance their ability to support reliable software assurance of NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program, and this computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
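One generic way such finite-element-coupled reliability models aggregate results (not necessarily the William-Warnke-based formulation used in the article) is a weakest-link combination of per-element failure probabilities, sketched below with hypothetical values.

import numpy as np

def component_pof(element_pof: np.ndarray) -> float:
    """Weakest-link aggregation: the component survives only if every element survives.
    element_pof holds per-element probabilities of failure (e.g. one per finite
    element at the current load step)."""
    survival = np.prod(1.0 - np.asarray(element_pof, dtype=float))
    return 1.0 - survival

# Hypothetical per-element failure probabilities returned by a reliability model
# evaluated on the finite element stress field.
pofs = np.array([1e-4, 5e-4, 2e-3, 8e-4, 1e-5])
print(f"component probability of failure = {component_pof(pofs):.4e}")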
Software Safety Risk in Legacy Safety-Critical Computer Systems
NASA Technical Reports Server (NTRS)
Hill, Janice L.; Baggs, Rhoda
2007-01-01
Safety standards contain technical and process-oriented safety requirements. Technical requirements are those such as "must work" and "must not work" functions in the system. Process-oriented requirements are software engineering and safety management process requirements. Some standards address the system perspective and some cover just the software in the system; NASA-STD-8719.13B, the Software Safety Standard, is the current standard of interest, and NASA programs/projects will have their own set of safety requirements derived from it. Safety cases: a) a documented demonstration that a system complies with the specified safety requirements; b) evidence is gathered on the integrity of the system and put forward as an argued case [Gardener (ed.)]; c) problems occur when trying to meet safety standards, and thus make retrospective safety cases, in legacy safety-critical computer systems.
Computational models for predicting interactions with membrane transporters.
Xu, Y; Shen, Q; Liu, X; Lu, J; Li, S; Luo, C; Gong, L; Luo, X; Zheng, M; Jiang, H
2013-01-01
Membrane transporters, including the two families of ATP-binding cassette (ABC) transporters and solute carrier (SLC) transporters, are proteins that play important roles in moving molecules into and out of cells. Consequently, these transporters can be major determinants of the therapeutic efficacy, toxicity and pharmacokinetics of a variety of drugs. Considering the time and expense of biological experiments, research should be driven by evaluation of efficacy and safety, and computational methods have arisen as a complementary choice. In this article, we provide an overview of the contributions that computational methods have made in the transporter field over the past decades. We begin with a brief introduction to the structure and function of the major members of the two transporter families. In the second part, we focus on widely used computational methods in different aspects of transporter research. In the absence of a high-resolution structure for most transporters, homology modeling is a useful tool to interpret experimental data and potentially guide experimental studies; we summarize reported homology models in this review. Computational studies cover the major transporters and a variety of topics, including the classification of substrates and/or inhibitors, prediction of protein-ligand interactions, constitution of the binding pocket, phenotypes of non-synonymous single-nucleotide polymorphisms, and conformational analyses that try to explain the mechanism of action. As an example, one of the most important transporters, P-gp, is discussed in detail to explain the differences and advantages of various computational models. In the third part, the challenges of developing computational methods that yield reliable predictions, as well as potential future directions in transporter-related modeling, are discussed.
Bradshaw, Catherine P; Milam, Adam J; Furr-Holden, C Debra M; Johnson, Sarah Lindstrom
2015-12-01
School safety is of great concern for prevention researchers, school officials, parents, and students, yet there is a dearth of assessments that have operationalized school safety from an organizational framework using objective tools and measures. Such a tool would be important for deriving unbiased assessments of the school environment, which in turn could be used as an evaluative tool for school violence prevention efforts. The current paper presents a framework for conceptualizing school safety consistent with the Crime Prevention through Environmental Design (CPTED) model and social disorganization theory, both of which highlight the importance of context as a driver for adolescents' risk of involvement in substance use and violence. This paper describes the development of a novel observational measure, called the School Assessment for Environmental Typology (SAfETy), which applies CPTED and social disorganization frameworks to schools to measure eight indicators of the school physical and social environment (i.e., disorder, trash, graffiti/vandalism, appearance, illumination, surveillance, ownership, and positive behavioral expectations). Drawing upon data from 58 high schools, we provide preliminary data regarding the validity and reliability of the SAfETy and describe patterns of the school safety indicators. Findings demonstrate the reliability and validity of the SAfETy and are discussed with regard to the prevention of violence in schools.
Extreme Scale Computing to Secure the Nation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D L; McGraw, J R; Johnson, J R
2009-11-10
Since the dawn of modern electronic computing in the mid 1940's, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change and to the Fiscal Year 1994 National Defense Authorization Act, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program. The Act requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT), together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding, together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies, as well as a rapid forensic capability for determining a nuclear weapon's design from post-detonation evidence (nuclear counterterrorism).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION... Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses, with clarifications... Electrical and Electronic Engineers (IEEE) Standard 828-2005, ``IEEE Standard for Software Configuration...
Improving Safety and Reliability of Space Auxiliary Power Units
NASA Technical Reports Server (NTRS)
Viterna, Larry A.
1998-01-01
Auxiliary Power Units (APU's) play a critical role in space vehicles. On the space shuttle, APU's provide the hydraulic power for the aerodynamic control surfaces, rocket engine gimballing, landing gear, and brakes. Future space vehicles, such as the Reusable Launch Vehicle, will also need APU's to provide electrical power for flight control actuators and other vehicle subsystems. Vehicle designers and mission managers have identified safety, reliability, and maintenance as the primary concerns for space APU's. In 1997, the NASA Lewis Research Center initiated an advanced technology development program to address these concerns.
Computer-aided design of polymers and composites
NASA Technical Reports Server (NTRS)
Kaelble, D. H.
1985-01-01
This book on computer-aided design of polymers and composites introduces and discusses the subject from the viewpoint of atomic and molecular models. Thus, the origins of stiffness, strength, extensibility, and fracture toughness in composite materials can be analyzed directly in terms of chemical composition and molecular structure. Aspects of polymer composite reliability are considered along with characterization techniques for composite reliability, relations between atomic and molecular properties, computer aided design and manufacture, polymer CAD/CAM models, and composite CAD/CAM models. Attention is given to multiphase structural adhesives, fibrous composite reliability, metal joint reliability, polymer physical states and transitions, chemical quality assurance, processability testing, cure monitoring and management, nondestructive evaluation (NDE), surface NDE, elementary properties, ionic-covalent bonding, molecular analysis, acid-base interactions, the manufacturing science, and peel mechanics.
DOT National Transportation Integrated Search
2016-12-01
An independent evaluation of a non-video-based onboard monitoring system (OBMS) was conducted. The objective was to determine if the OBMS system performed reliably, improved driving safety and performance, and improved fuel efficiency in a commercial...
DOT National Transportation Integrated Search
2016-11-01
An independent evaluation of a non-video-based onboard monitoring system (OBMS) was conducted. The objective was to determine if the OBMS system performed reliably, improved driving safety and performance, and improved fuel efficiency in a commercial...
School Safety in a Post-Sandy Hook World
ERIC Educational Resources Information Center
Trump, Kenneth S.
2014-01-01
In this report the author, who is a school safety expert, provides information about school safety in a post-Sandy Hook world. He presents the following: (1) Continuum of Threats and Responses; (2) The role social media plays; (3) Reliable Best Practices; (4) Policy and Funding--Climate and Context; (5) Policy and Funding--Things to Avoid; and (6)…
ERIC Educational Resources Information Center
Bradford, Traliece; Serrano, Elena L.; Cox, Ruby H.; Lambur, Michael
2010-01-01
Objective: To develop and assess reliability and validity of the Nutrition, Food Safety, and Physical Activity Checklist to measure nutrition, food safety, and physical activity practices among adult Expanded Food and Nutrition Education Program (EFNEP) and Food Stamp Nutrition Education program (FSNE) participants. Methods: Test-retest…
Review of AIDS development. [airborne computers for reliability engineering
NASA Technical Reports Server (NTRS)
Vermeulen, H. C.; Danielsson, S. G.
1981-01-01
The operation and implementation of the aircraft integrated data system (AIDS) are described. The system is described as an engineering tool with strong emphasis on analysis of recorded information. AIDS is primarily directed to the monitoring of parameters related to: the safety of the flight; the performance of the aircraft; the performance of the flight guidance system; and the performance and condition of the engines. The system provides short-term trend analysis on a trend chart that is updated by the flight engineer on every flight that lasts more than 4 flight hours. Engine data prints are automatically presented during take-off and in the case of limit exceedance; for example, a print shows an automatically reported impending hot start on engine no. 1. Other significant features are reported.
Kullgren, A; Lie, A; Tingvall, C
1994-02-01
Vehicle deformations are an important source of information about the performance of safety systems. Photogrammetry has developed vastly in recent years. In this study, modern photogrammetric methods were used for vehicle deformation analysis. The study describes the equipment for documentation and recording in the field (a semi-metric camera) and a system for photogrammetric measurement of the images in a laboratory environment (personal computer and digitizing tablet). The material used comprises approximately 500 collected and measured cases. The study shows that reliability is high and that accuracies of around 15 mm can be achieved even though the equipment and routines used are relatively simple. The effects of further development, using video cameras for data capture and digital images for measurements, are discussed.
Stockpile Stewardship: How We Ensure the Nuclear Deterrent Without Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-09-04
In the 1990s, the U.S. nuclear weapons program shifted emphasis from developing new designs to dismantling thousands of existing weapons and maintaining a much smaller enduring stockpile. The United States ceased underground nuclear testing, and the Department of Energy created the Stockpile Stewardship Program to maintain the safety, security, and reliability of the U.S. nuclear deterrent without full-scale testing. This video gives a behind-the-scenes look at a set of unique capabilities at Lawrence Livermore that are indispensable to the Stockpile Stewardship Program: high performance computing, the Superblock category II nuclear facility, the JASPER two-stage gas gun, the High Explosive Applications Facility (HEAF), the National Ignition Facility (NIF), and the Site 300 contained firing facility.
Multiprocessor shared-memory information exchange
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoline, L.L.; Bowers, M.D.; Crew, A.W.
1989-02-01
In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.
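The following minimal Python sketch illustrates the general idea of a master polling a ready flag to receive a unidirectional shared-memory buffer filled by a slave process; it is a simplified illustration only, not the MSMIE protocol itself, and all names and values are hypothetical.

from multiprocessing import Process, Value, Array
import time

def slave(flag, payload) -> None:
    # Dedicated-function processor: fill the shared buffer, then mark it full.
    for i in range(len(payload)):
        payload[i] = float(i) * 1.5          # write one block of measurements
    flag.value = 1                           # publish: buffer is now full

if __name__ == "__main__":
    flag = Value("i", 0)                     # 0 = empty, 1 = full
    payload = Array("d", 4)                  # shared unidirectional buffer
    p = Process(target=slave, args=(flag, payload))
    p.start()
    while flag.value == 0:                   # master polls the ready flag
        time.sleep(0.001)
    data = list(payload)                     # master copies the buffer out
    flag.value = 0                           # hand the buffer back to the slave
    p.join()
    print("master read", data)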
Insights into a microwave susceptible agent for minimally invasive microwave tumor thermal therapy.
Shi, Haitang; Liu, Tianlong; Fu, Changhui; Li, Linlin; Tan, Longfei; Wang, Jingzhuo; Ren, Xiangling; Ren, Jun; Wang, Jianxin; Meng, Xianwei
2015-03-01
This work develops sodium alginate (SA) microcapsules as microwave susceptible agents for in vivo tumor microwave thermal therapy for the first time. Owing to their excellent microwave susceptible properties and low bio-toxicity, excellent therapeutic efficiency can be achieved, with a tumor inhibition ratio of 97.85% after a single microwave thermal treatment at ultralow power (1.8 W, 450 MHz). Meanwhile, the mechanism of the high microwave heating efficiency was confirmed theoretically via a computer-simulated model, demonstrating that the spatial confinement effect of the microcapsule walls endows the enclosed ions with high microwave susceptibility. This strategy offers tremendous potential applications in clinical tumor treatment with the benefits of safety, reliability, effectiveness and minimal invasiveness. Copyright © 2014 Elsevier Ltd. All rights reserved.
Flight Demonstrations of Orbital Space Plane (OSP) Technologies
NASA Technical Reports Server (NTRS)
Turner, Susan
2003-01-01
The Orbital Space Plane (OSP) Program embodies NASA's priority to transport Space Station crews safely, reliably, and affordably, while it empowers the Nation's greater strategies for scientific exploration and space leadership. As early in the development cycle as possible, the OSP will provide crew rescue capability, offering an emergency ride home from the Space Station, while accommodating astronauts who are deconditioned due to long-duration missions, or those that may be ill or injured. As the OSP Program develops a fully integrated system, it will use existing technologies and employ computer modeling and simulation. Select flight demonstrator projects will provide valuable data on launch, orbital, reentry, and landing conditions to validate thermal protection systems, autonomous operations, and other advancements, especially those related to crew safety and survival.
Morrongiello, Barbara A; Schwebel, David C; Bell, Melissa; Stewart, Julia; Davis, Aaron L
2012-07-01
Fire is a leading cause of unintentional injury and, although young children are at particularly increased risk, there are very few evidence-based resources available to teach them fire safety knowledge and behaviors. Using a pre-post randomized design, the current study evaluated the effectiveness of a computer game (The Great Escape) for teaching fire safety information to young children (3.5-6 years). Using behavioral enactment procedures, children's knowledge and behaviors related to fire safety were compared to a control group of children before and after receiving the intervention. The results indicated significant improvements in knowledge and fire safety behaviors in the intervention group but not the control. Using computer games can be an effective way to promote young children's understanding of safety and how to react in different hazardous situations.
2016-09-01
an instituted safety program that utilizes a generic risk assessment method involving the 5-M (Mission, Man, Machine, Medium and Management) factor... the Safety core value is hinged upon three key principles: (1) each soldier has a crucial part to play, by adopting safety as a core value and making... it a way of life in his unit; (2) safety is an integral part of training, operations and mission success, and (3) safety is an individual, team and
Reliability-based evaluation of bridge components for consistent safety margins.
DOT National Transportation Integrated Search
2010-10-01
The Load and Resistance Factor Design (LRFD) approach is based on the concept of structural reliability. The approach is more rational than former design approaches such as Load Factor Design or Allowable Stress Design. The LRFD Specification fo...
Testing and evaluation of pedestrian sensors
DOT National Transportation Integrated Search
2007-09-01
The foundation for several pedestrian safety measures is reliable and accurate detection of pedestrians. The main objective of this study was to evaluate sensors for use in a pedestrian safety test bed in College Station, TX. The following sensors we...
Quality evaluation of poultry carcasses
USDA-ARS?s Scientific Manuscript database
The USDA Food Safety Inspection Service (FSIS) has been mandated to organoleptically inspect poultry carcasses online at processing plants. For poultry quality and safety evaluation, the development of accurate and reliable instruments for online detection of unwholesomeness such as septicemia, cada...
Koho, P; Aho, S; Kautiainen, H; Pohjolainen, T; Hurri, H
2014-12-01
To estimate the internal consistency, test-retest reliability and comparability of paper and computer versions of the Finnish version of the Tampa Scale of Kinesiophobia (TSK-FIN) among patients with chronic pain. In addition, patients' personal experiences of completing both versions of the TSK-FIN and preferences between these two methods of data collection were studied. Test-retest reliability study. Paper and computer versions of the TSK-FIN were completed twice on two consecutive days. The sample comprised 94 consecutive patients with chronic musculoskeletal pain participating in a pain management or individual rehabilitation programme. The group rehabilitation design consisted of physical and functional exercises, evaluation of the social situation, psychological assessment of pain-related stress factors, and personal pain management training in order to regain overall function and mitigate the inconvenience of pain and fear-avoidance behaviour. The mean TSK-FIN score was 37.1 [standard deviation (SD) 8.1] for the computer version and 35.3 (SD 7.9) for the paper version. The mean difference between the two versions was 1.9 (95% confidence interval 0.8 to 2.9). Test-retest reliability was 0.89 for the paper version and 0.88 for the computer version. Internal consistency was considered to be good for both versions. The intraclass correlation coefficient for comparability was 0.77 (95% confidence interval 0.66 to 0.85), indicating substantial reliability between the two methods. Both versions of the TSK-FIN demonstrated substantial intertest reliability, good test-retest reliability, good internal consistency and acceptable limits of agreement, suggesting their suitability for clinical use. However, subjects tended to score higher when using the computer version. As such, in an ideal situation, data should be collected in a similar manner throughout the course of rehabilitation or clinical research. Copyright © 2014 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
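The comparability statistic reported above, a single-measure intraclass correlation coefficient, can be computed from a two-way ANOVA decomposition as in the hedged Python sketch below; the paired paper/computer totals are hypothetical, and ICC(2,1) is assumed as the model, which may differ from the exact variant used in the study.

import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """Two-way random, absolute-agreement, single-measure ICC (ICC(2,1)).
    scores: (n_subjects x k_methods) matrix, e.g. paper vs computer totals."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical TSK totals for 8 patients, paper vs computer administration.
paired = np.array([[35, 37], [28, 30], [41, 40], [33, 36],
                   [45, 46], [30, 29], [38, 41], [36, 37]])
print(f"ICC(2,1) = {icc2_1(paired):.2f}")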
Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J
2006-04-03
There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions.
Spatial Brain Control Interface using Optical and Electrophysiological Measures
2013-08-27
... Machine (LSVM) was the most appropriate for implementing a reliable brain-computer interface (BCI). The LSVM method was applied to the imaging data... local field potentials proved to be fast and strongly tuned for the spatial parameters of the task. Thus, a reliable BCI that can predict upcoming
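As a rough illustration of the linear support vector machine classification the report favours, the Python sketch below trains scikit-learn's LinearSVC on synthetic two-class features; the data, feature count, and accuracy are placeholders and bear no relation to the study's imaging or local field potential data.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Synthetic stand-in for trial features (e.g. one value per channel) and the
# intended target class (left = 0, right = 1); not the study's data.
n_trials, n_features = 200, 16
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :4] += 1.0          # make the first few features informative

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")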
NASA Astrophysics Data System (ADS)
Gurov, V. V.
2017-01-01
Software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of distinctive features from the point of view of reliability. The main ones are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery by replacing the software with a similar running program during classes. The article considers the peculiarities of reliability evaluation of programs in contrast to assessments of hardware reliability. The basic requirements for the reliability of software used for carrying out practical and laboratory classes in the form of computer-based training programs are given. A mathematical tool based on Markov chains is presented which allows one to determine the degree of debugging of a training program for use in the educational process, by applying a graph of the interactions between the software modules.
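The Markov-chain idea described above can be illustrated with an absorbing-chain calculation: given hypothetical transition probabilities over a graph of training-program modules, the fundamental matrix yields the probability that a session ends without a software fault. All module names and probabilities below are invented for illustration.

import numpy as np

# Transient states: 0 = menu, 1 = lesson, 2 = test.  Absorbing: finish, fault.
# Q[i, j]: probability of moving from transient module i to transient module j.
Q = np.array([[0.00, 0.70, 0.25],
              [0.10, 0.00, 0.85],
              [0.30, 0.05, 0.00]])
# R[i, 0]: P(module i -> session finishes OK); R[i, 1]: P(module i -> software fault).
R = np.array([[0.04, 0.01],
              [0.03, 0.02],
              [0.60, 0.05]])

# Fundamental matrix N = (I - Q)^-1; absorption probabilities B = N @ R.
N = np.linalg.inv(np.eye(3) - Q)
B = N @ R
print(f"P(faultless session | start in menu) = {B[0, 0]:.3f}")
print(f"P(fault during session | start in menu) = {B[0, 1]:.3f}")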
The system of technical diagnostics of the industrial safety information network
NASA Astrophysics Data System (ADS)
Repp, P. V.
2017-01-01
This research is devoted to the safety of industrial information networks. The basic sub-networks ensuring reliable operation of the elements of an industrial Automatic Process Control System were identified, and the core tasks of technical diagnostics of industrial information safety are presented. The proposed structure of the technical diagnostics system for information safety includes two parts: a generator of cyber-attacks and a virtual model of the enterprise information network, obtained by scanning a real enterprise network. A new classification of cyber-attacks is proposed; it enables the design of an efficient generator of cyber-attack sets for testing virtual models of the industrial information network. The numerical Monte Carlo method (with LPτ sequences of Sobol) and Markov chains are considered as the design methods for the cyber-attack generation algorithm. The proposed system also includes a diagnostic analyzer performing expert functions. The stability factor (Kstab) is selected as an integrative quantitative indicator of network reliability. This factor is determined by the weight of the sets of cyber-attacks that identify vulnerabilities of the network; the weight depends on the frequency and complexity of the cyber-attacks, the degree of damage, and the complexity of remediation. The proposed Kstab is an effective integral quantitative measure of information network reliability.
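As a hedged sketch of one ingredient mentioned above, the generation of cyber-attack test sets from a Sobol (LPτ) low-discrepancy sequence, the Python fragment below uses SciPy's quasi-Monte Carlo module; the attack types and parameter mapping are hypothetical and not taken from the paper.

import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=3, scramble=True, seed=7)
u = sampler.random_base2(m=5)          # 2**5 = 32 low-discrepancy points in [0, 1)^3

attack_types = ["port-scan", "dos", "arp-spoof", "firmware-tamper"]
# Map the unit cube onto hypothetical attack parameters:
#   column 0 -> attack type, column 1 -> intensity, column 2 -> duration (s).
plans = [(attack_types[int(p[0] * len(attack_types))],
          round(float(p[1]), 2),
          int(1 + p[2] * 299))
         for p in u]
for plan in plans[:5]:
    print(plan)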
Development and Piloting of a Food Safety Audit Tool for the Domestic Environment
Borrusso, Patricia; Quinlan, Jennifer J.
2013-01-01
Survey and observation studies suggest that consumers often mishandle food in the home. There is a need for a standardized tool that allows researchers to objectively evaluate the prevalence and identify the nature of food safety risks in the domestic environment. An audit tool was developed to measure compliance with recommended sanitation, refrigeration and food storage conditions in the domestic kitchen. The tool was piloted by four researchers who independently completed the inspection in 22 homes. Audit tool questions were evaluated for reliability using the κ statistic. Questions that were not sufficiently reliable (κ < 0.5) or did not provide direct evidence of risk were revised or eliminated from the final tool. Piloting the audit tool found good reliability among 18 questions; 6 questions were revised and 28 eliminated, resulting in a final 24-question tool. The audit tool was able to identify potential food safety risks, including evidence of pest infestation (27%), incorrect refrigeration temperature (73%), and lack of hot water (>43 °C, 32%). The audit tool developed here provides an objective measure for researchers to observe and record the most prevalent food safety risks in consumers' kitchens and potentially compare risks among consumers of different demographics. PMID:28239139
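The κ screening described above can be reproduced in outline with Cohen's kappa for two raters, as in the Python sketch below; the auditor responses and the application of the 0.5 cutoff are illustrative.

import numpy as np

def cohens_kappa(r1, r2) -> float:
    """Cohen's kappa for two raters' categorical judgments."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return float((p_observed - p_expected) / (1.0 - p_expected))

# Hypothetical yes/no answers from two auditors to one checklist question in 10 homes.
auditor_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
auditor_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
kappa = cohens_kappa(auditor_a, auditor_b)
print(f"kappa = {kappa:.2f}  ->  {'keep' if kappa >= 0.5 else 'revise or drop'}")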
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
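A common generic technique for the sensor failure detection mentioned above (not necessarily the approach developed by Aerojet) is redundant-channel mid-value selection with a disagreement threshold, sketched below with hypothetical readings.

import numpy as np

def mid_value_select(readings, threshold):
    """Pick the median of redundant sensor readings and flag any channel that
    deviates from it by more than the allowed threshold."""
    readings = np.asarray(readings, dtype=float)
    selected = float(np.median(readings))
    failed = np.abs(readings - selected) > threshold
    return selected, failed

# Hypothetical triple-redundant chamber-pressure readings (psia); channel 2 has drifted.
value, failed = mid_value_select([2010.0, 2013.5, 2150.0], threshold=25.0)
print(f"selected value: {value:.1f} psia, failed channels: {np.where(failed)[0].tolist()}")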
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... Documents Access and Management System (ADAMS): You may access publicly available documents online in the... Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants,'' issued for... Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION: Revision...
Digital avionics design and reliability analyzer
NASA Technical Reports Server (NTRS)
1981-01-01
The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionics computer designs that are developed. It has been established that hardware emulation at the gate level will be utilized. The primary benefit of emulation to reliability analysis is the fact that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. A trade study led to the decision to specify a two-machine system, comprising an emulation computer connected to a general-purpose computer. An evaluation of potential computers to serve as the emulation computer is also included.
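The gate-level fault insertion described above can be illustrated with a toy netlist emulation in Python: the same input vector is evaluated fault-free and with a stuck-at fault forced onto one internal net, showing whether the fault is observable at the output. The netlist and fault are hypothetical.

# Toy gate-level netlist for a majority voter: out = (a & b) | (a & c) | (b & c).
NETLIST = [
    ("and", "n1", ("a", "b")),
    ("and", "n2", ("a", "c")),
    ("and", "n3", ("b", "c")),
    ("or",  "n4", ("n1", "n2")),
    ("or",  "out", ("n4", "n3")),
]

def evaluate(inputs, stuck_at=None):
    """Evaluate the netlist; stuck_at=(net, value) forces one gate output to 0 or 1,
    emulating direct fault insertion into the modeled hardware."""
    nets = dict(inputs)
    for gate, out, (x, y) in NETLIST:
        value = (nets[x] & nets[y]) if gate == "and" else (nets[x] | nets[y])
        if stuck_at and out == stuck_at[0]:
            value = stuck_at[1]
        nets[out] = value
    return nets["out"]

vector = {"a": 1, "b": 1, "c": 0}
good = evaluate(vector)
faulty = evaluate(vector, stuck_at=("n1", 0))   # stuck-at-0 fault on net n1
print(f"fault-free out = {good}, with n1 stuck-at-0 out = {faulty}")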
Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non-LOCA and small-break LOCA transients; safety goals; pressurized thermal shock; applications of reliability and risk methods to probabilistic risk assessment; human factors and the man-machine interface; and data bases and special applications.
Analysis of cost regression and post-accident absence
NASA Astrophysics Data System (ADS)
Wojciech, Drozd
2017-07-01
The article presents issues related to the costs of work safety. It argues that economic aspects cannot be overlooked in effective management of occupational health and safety, and that adequate expenditure on safety can bring tangible benefits to the company. A reliable analysis of this problem is essential for describing the problem of work safety; the article attempts to carry out such an analysis using the procedures of mathematical statistics [1, 2, 3].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, P. J.; Westwood, R. N.; Mark, R. T.
2006-07-01
The Nuclear Installations Inspectorate (NII) of the UK's Health and Safety Executive (HSE) has recently completed a review of their Safety Assessment Principles (SAPs) for Nuclear Installations. During the period of the SAPs review in 2004-2005, the designers of future UK naval reactor plant were optioneering the control and protection systems that might be implemented. Because there was insufficient regulatory guidance available in the naval sector to support this activity, the Defence Nuclear Safety Regulator (DNSR) invited the NII to collaborate with the production of a guidance document that provides clarity of regulatory expectations for the production of safety cases for computer based safety systems. A key part of producing regulatory expectations was identifying the relevant extant standards and sector guidance that reflect good practice. The three principal sources of such good practice were: IAEA Safety Guide NS-G-1.1 (Software for Computer Based Systems Important to Safety in Nuclear Power Plants), the European Commission consensus document (Common Position of European Nuclear Regulators for the Licensing of Safety Critical Software for Nuclear Reactors) and IEC nuclear sector standards such as IEC 60880. A common understanding has been achieved between the NII and DNSR, and regulatory guidance has been developed which will be used by both NII and DNSR in the assessment of computer-based safety systems and in the further development of more detailed joint technical assessment guidance for both regulatory organisations. (authors)
Flat-plate solar array project. Volume 6: Engineering sciences and reliability
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.; Smokler, M. I.
1986-01-01
The Flat-Plate Solar Array (FSA) Project activities directed at developing the engineering technology base required to achieve modules that meet the functional, safety, and reliability requirements of large-scale terrestrial photovoltaic systems applications are reported. These activities included: (1) development of functional, safety, and reliability requirements for such applications; (2) development of the engineering analytical approaches, test techniques, and design solutions required to meet the requirements; (3) synthesis and procurement of candidate designs for test and evaluation; and (4) performance of extensive testing, evaluation, and failure analysis to define design shortfalls and, thus, areas requiring additional research and development. A summary of the approach and technical outcome of these activities is provided, along with a complete bibliography of the published documentation covering the detailed accomplishments and technologies developed.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
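The modularization approach described above ultimately combines module reliabilities into a total system figure. The minimal Python sketch below shows the standard series and parallel (redundant) combinations with hypothetical module values; it is far simpler than RML's simulation of failure-mode effects, but it conveys the basic idea.

from functools import reduce

def series(*reliabilities: float) -> float:
    """All modules must work: R = product of module reliabilities."""
    return reduce(lambda a, b: a * b, reliabilities, 1.0)

def parallel(*reliabilities: float) -> float:
    """Redundant modules: the group fails only if every module fails."""
    return 1.0 - reduce(lambda a, b: a * b, (1.0 - r for r in reliabilities), 1.0)

# Hypothetical module reliabilities over a 10-hour mission.
sensor = 0.999
computer = parallel(0.995, 0.995, 0.995)   # triplex flight computer
actuator = 0.998
system = series(sensor, computer, actuator)
print(f"system reliability = {system:.6f}")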
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Failsafe. 62.30-1 Section 62.30-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... control, safety control, and alarm systems must be failsafe. ...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Failsafe. 62.30-1 Section 62.30-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... control, safety control, and alarm systems must be failsafe. ...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Failsafe. 62.30-1 Section 62.30-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... control, safety control, and alarm systems must be failsafe. ...
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
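The nested formulation discussed above can be sketched as an outer design search wrapped around an inner reliability estimate. The Python fragment below uses a coarse grid for the outer loop and Monte Carlo for the inner loop; the limit state, distributions, cost function, and target failure probability are hypothetical and chosen only to show the nested structure.

import numpy as np

rng = np.random.default_rng(3)

def failure_probability(d: float, n: int = 200_000) -> float:
    """Inner loop: estimate P[g(d, X) < 0] by Monte Carlo for design variable d.
    X is a random load; g is a hypothetical limit state (capacity minus demand)."""
    load = rng.normal(10.0, 2.0, n)
    capacity = 4.0 * d                 # capacity grows with the design variable
    return float(np.mean(capacity - load < 0.0))

def cost(d: float) -> float:
    return d ** 2                      # hypothetical cost to minimize

# Outer loop: pick the cheapest design whose reliability constraint is met.
target_pf = 1e-3
candidates = np.linspace(2.0, 5.0, 31)
feasible = [(cost(d), d) for d in candidates if failure_probability(d) <= target_pf]
best_cost, best_d = min(feasible)
print(f"best design d = {best_d:.2f} with cost {best_cost:.2f}")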
Validation of the Dutch language version of the Safety Attitudes Questionnaire (SAQ-NL).
Haerkens, Marck Htm; van Leeuwen, Wouter; Sexton, J Bryan; Pickkers, Peter; van der Hoeven, Johannes G
2016-08-15
As the first objective of caring for patients is to do no harm, patient safety is a priority in delivering clinical care. An essential component of safe care in a clinical department is its safety climate. Safety climate correlates with safety-specific behaviour, injury rates, and accidents. Safety climate in healthcare can be assessed by the Safety Attitudes Questionnaire (SAQ), which provides insight by scoring six dimensions: Teamwork Climate, Job Satisfaction, Safety Climate, Stress Recognition, Working Conditions and Perceptions of Management. The objective of this study was to assess the psychometric properties of the Dutch language version of the SAQ in a variety of clinical departments in Dutch hospitals. The Dutch version (SAQ-NL) of the SAQ was back translated, and analyzed for semantic characteristics and content. From October 2010 to November 2015 SAQ-NL surveys were carried out in 17 departments in two university and seven large non-university teaching hospitals in the Netherlands, prior to a Crew Resource Management human factors intervention. Statistical analyses were used to examine response patterns, mean scores, correlations, internal consistency reliability and model fit. Cronbach's α's and inter-item correlations were calculated to examine internal consistency reliability. One thousand three hundred fourteen completed questionnaires were returned from 2113 administered to health care workers, resulting in a response rate of 62 %. Confirmatory Factor Analysis revealed the 6-factor structure fit the data adequately. Response patterns were similar for professional positions, departments, physicians and nurses, and university and non-university teaching hospitals. The SAQ-NL showed strong internal consistency (α = .87). Exploratory analysis revealed differences in scores on the SAQ dimensions when comparing different professional positions, when comparing physicians to nurses and when comparing university to non-university hospitals. The SAQ-NL demonstrated good psychometric properties and is therefore a useful instrument to measure patient safety climate in Dutch clinical work settings. As removal of one item resulted in an increased reliability of the Working Conditions dimension, revision or deletion of this item should be considered. The results from this study provide researchers and practitioners with insight into safety climate in a variety of departments and functional positions in Dutch hospitals.
Rosen, Daniel C; Nakash, Ora; Alegría, Margarita
2016-03-01
Advances in information technology within clinical practice have rapidly expanded over recent years. Despite the documented benefits of using electronic health records, which often necessitate computer use during the clinical encounter, little is known about the impact of computer use during the mental health visit and its effect on the quality of the therapeutic alliance. We investigated the association between computer use and quality of the working alliance and continuance in care in 104 naturalistic mental health intake sessions. Data were collected from 8 safety-net outpatient clinics in the Northeast offering mental health services to a diverse client population. All intakes were video recorded. Use of a computer during the intake session was ascertained directly from the recording of the session (n = 22; 22.15% of intakes). Working alliance was assessed from the session videotapes by independent reliable coders, using the Working Alliance Inventory, Observer Form-bond scale. Therapist computer use was significantly associated with the quality of the observer-rated therapeutic alliance (Coefficient = -6.29, SE = 2.2, p < .01; Cohen's effect size of d = -0.76), and with the client's continuance in care (Odds ratio = .11, CI = 0.03-0.38; p < .001). The quality of the observer-rated working alliance and the client's continuance in care were significantly lower in intakes in which the therapist used a computer during the session. The findings sound a cautionary note about advancing computer use within the mental health intake and demonstrate the need for future research to identify the specific behaviors that promote or hinder a strong working alliance within the context of psychotherapy in the technological era.
Reliability techniques for computer executive programs
NASA Technical Reports Server (NTRS)
1972-01-01
Computer techniques for increasing the stability and reliability of executive and supervisory systems were studied. Program segmentation characteristics are discussed along with a validation system which is designed to retain the natural top down outlook in coding. An analysis of redundancy techniques and roll back procedures is included.
Leszczynski, Dariusz; Xu, Zhengping
2010-01-27
There is ongoing discussion about whether mobile phone radiation causes any health effects. The International Commission on Non-Ionizing Radiation Protection, the International Committee on Electromagnetic Safety and the World Health Organization assure that there is no proven health risk and that the present safety limits protect all mobile phone users. However, based on the available scientific evidence, the situation is not as clear. The majority of the evidence comes from in vitro laboratory studies and is of very limited use for determining health risk. Animal toxicology studies are inadequate because it is not possible to "overdose" microwave radiation, as is done with chemical agents, due to the simultaneous induction of heating side effects. There is a lack of human volunteer studies that would, in an unbiased way, demonstrate whether the human body responds at all to mobile phone radiation. Finally, the epidemiological evidence is insufficient due to, among other factors, selection and misclassification bias and the low sensitivity of this approach in detecting health risk within the population. This indicates that the presently available scientific evidence is insufficient to prove the reliability of the current safety standards. Therefore, we recommend exercising precaution when dealing with mobile phones and, whenever possible and feasible, limiting body exposure to this radiation. Continued research on mobile phone radiation effects is needed in order to improve the basis and the reliability of the safety standards.
NASA Technical Reports Server (NTRS)
Simmons, D. B.
1975-01-01
The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.
On the reliability of computed chaotic solutions of non-linear differential equations
NASA Astrophysics Data System (ADS)
Liao, Shijun
2009-08-01
A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtful when t > Tc. This provides a strategy for detecting the reliable solution from a given computed result. In this way, the computational phenomena, such as computational chaos (CC), computational periodicity (CP) and computational prediction uncertainty, which are mainly based on long-term properties of computed time-series, can be completely avoided. Using this concept, the famous conclusion 'accurate long-term prediction of chaos is impossible' should be replaced by a more precise conclusion that 'accurate prediction of chaos beyond the critical predictable time Tc is impossible'. This concept also provides a timescale to determine whether or not a particular time is long enough for a given non-linear dynamic system. In addition, the influence of data inaccuracy and of various numerical schemes on the critical predictable time is investigated in detail using symbolic computation software as a tool. A reliable chaotic solution of the Lorenz equation over the rather large interval 0 <= t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and of the computed data at each time step, which is mathematically necessary to obtain such a reliable chaotic solution over such a long time, is so high that it is physically unattainable due to the Heisenberg uncertainty principle in quantum physics. This leads to a so-called 'precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even macroscopic phenomena might be essentially stochastic and thus more economically described by probability.
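The notion of a critical predictable time can be illustrated numerically: integrate the Lorenz system twice from minutely different initial conditions and record when the two trajectories separate beyond a tolerance. This is only a conceptual sketch in double precision with a standard integrator, not the high-precision computation described in the paper; the perturbation size and tolerance are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span, t_eval = (0.0, 60.0), np.linspace(0.0, 60.0, 6001)
sol_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(lorenz, t_span, [1.0 + 1e-10, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Crude estimate of the critical predictable time: first time at which the two
# runs differ by more than a chosen tolerance (here 1e-3 in the Euclidean norm).
gap = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
tc = t_eval[np.argmax(gap > 1e-3)] if np.any(gap > 1e-3) else np.inf
print(f"estimated Tc ~ {tc:.1f} Lorenz time units")
```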
Ardalan, Ali; Sohrabizadeh, Sanaz
2016-02-25
Iran is among the countries suffering the highest numbers of earthquake casualties. Household preparedness, as one component of risk reduction efforts, is often supported in quake-prone areas. In Iran, the lack of a valid and reliable household preparedness tool was reported by previous disaster studies. This study aimed to fill this gap by developing a valid and reliable tool for assessing household preparedness in the event of an earthquake. The survey was conducted in three phases: a literature review and focus group discussions with eight key informants, validity measurement, and reliability measurement. Field investigation was completed with the participation of 450 households within three provinces of Iran. Content validity, construct validity using factor analysis, internal consistency using Cronbach's alpha coefficient, and test-retest reliability were examined to develop the tool. Based on the CVIs, ranging from 0.80 to 1.00, and exploratory factor analysis with factor loadings of more than 0.5, all items were valid. The Cronbach's alpha value (0.7) and test-retest examination using Spearman correlations indicated that the scale was also reliable. The final instrument consisted of six categories and 18 questions covering actions at the time of an earthquake, nonstructural safety, structural safety, hazard map, communications, drill, and safety skills. Using a Persian-language tool adjusted to socio-cultural determinants may yield more trustworthy information on earthquake preparedness. It is suggested that disaster managers and researchers apply this tool in their future household preparedness projects. Further research is needed to develop effective policies and plans for transforming preparedness knowledge into behavior.
[Examination of safety improvement by failure record analysis that uses reliability engineering].
Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo
2010-08-20
This study verified how effective the maintenance checks of medical systems, including the start-of-work check and the end-of-work check, were for preventive maintenance and safety improvement. Data on device failures in multiple facilities were collected, and the trouble-repair records were analyzed using reliability engineering techniques. An analysis was performed of data on the systems used in eight hospitals (8 general systems, 6 Angio systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems). The data collection period covered the nine months from April to December 2008. Seven items were analyzed, including: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) number of failures found by the morning check; and (5) failure occurrence time by modality. Introducing reliability engineering made it possible to understand the classification, incidence, and trends of breakdowns per device. Analysis, evaluation, and feedback on the failure history are useful for keeping downtime to a minimum and ensuring safety.
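A minimal sketch of how the first three indices could be computed from a repair log is shown below; the field names, operating hours, and sample records are hypothetical, not the hospital data used in the study.

```python
from dataclasses import dataclass

@dataclass
class Failure:
    downtime_h: float   # total time the unit was unavailable (MDT contribution)
    repair_h: float     # active repair time (MTTR contribution)

def reliability_indices(operating_hours: float, failures: list[Failure]):
    n = len(failures)
    mtbf = operating_hours / n                      # mean time between failures
    mttr = sum(f.repair_h for f in failures) / n    # mean time to repair
    mdt = sum(f.downtime_h for f in failures) / n   # mean down time
    return mtbf, mttr, mdt

# Hypothetical nine-month log for one imaging system (not the study's data).
log = [Failure(downtime_h=10.0, repair_h=4.0), Failure(downtime_h=2.5, repair_h=1.5)]
mtbf, mttr, mdt = reliability_indices(operating_hours=6480.0, failures=log)
print(f"MTBF={mtbf:.0f} h, MTTR={mttr:.1f} h, MDT={mdt:.1f} h")
```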
Sensor Selection and Optimization for Health Assessment of Aerospace Systems
NASA Technical Reports Server (NTRS)
Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy
2007-01-01
Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.
Sensor Selection and Optimization for Health Assessment of Aerospace Systems
NASA Technical Reports Server (NTRS)
Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy
2008-01-01
Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, defendable sensor suite to address system health assessment requirements.
NASA Technical Reports Server (NTRS)
Baldwin, Richard S.; Guzik, Monica; Skierski, Michael
2011-01-01
As NASA prepares for its next era of manned spaceflight missions, advanced energy storage technologies are being developed and evaluated to address future mission needs and technical requirements and to provide new mission-enabling technologies. Cell-level components for advanced lithium-ion batteries possessing higher energy, more reliable performance and enhanced, inherent safety characteristics are actively under development within the NASA infrastructure. A key component for safe and reliable cell performance is the cell separator, which separates the two energetic electrodes and functions to prevent the occurrence of an internal short-circuit while enabling ionic transport. Recently, a new generation of co-extruded separator films has been developed by ExxonMobil Chemical and introduced into their battery separator product portfolio. Several grades of this new separator material have been evaluated with respect to dynamic mechanical properties and safety-related performance attributes. This paper presents the results of these evaluations in comparison to a current state-of-the-practice separator material. The results are discussed with respect to potential opportunities to enhance the inherent safety characteristics and reliability of future, advanced lithium-ion cell chemistries.
Reliability considerations for the total strain range version of strainrange partitioning
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y. T.
1984-01-01
A proposed total strainrange version of strainrange partitioning (SRP) to enhance the manner in which SRP is applied to life prediction is considered, with emphasis on how advanced reliability technology can be applied to perform risk analysis and to derive safety check expressions. Uncertainties existing in the design factors associated with life prediction of a component which experiences the combined effects of creep and fatigue can be identified. Examples illustrate how reliability analyses of such a component can be performed when all design factors in the SRP model are random variables reflecting these uncertainties. The Rackwitz-Fiessler and Wu algorithms are used, and estimates of the safety index and the probability of failure are demonstrated for an SRP problem. Methods of analysis of creep-fatigue data, with emphasis on procedures for producing synoptic statistics, are presented. An attempt is made to demonstrate the importance of the contribution of the uncertainties associated with small sample sizes (fatigue data) to risk estimates. The procedure for deriving a safety check expression for possible use in a design criteria document is presented.
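For reference, the first-order safety index produced by algorithms such as Rackwitz-Fiessler, and its link to the probability of failure, take the standard form below, with u denoting the design factors transformed to independent standard normal variables and g the limit-state function; this is generic FORM notation, not symbols taken from the SRP model itself.

```latex
\beta = \min_{\,g(\mathbf{u}) = 0} \lVert \mathbf{u} \rVert, \qquad
P_f \approx \Phi(-\beta),
```

where Phi is the standard normal cumulative distribution function.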
Architecting Integrated System Health Management for Airworthiness
2013-09-01
aircraft safety and reliability through condition-based maintenance [Miller et al., 1991]. With the same motivation, Integrated System Health Management...diagnostics and prognostics algorithms. 2.2.2 Health and Usage Monitoring System (HUMS) in Helicopters Increased demand for improved operational safety ...offshore shuttle helicopters traversing the petrol installations in the North Sea, and increased demand for improved operational safety and reduced
Confinement of Radioactive Materials at Defense Nuclear Facilities
2004-10-01
The design of defense nuclear facilities includes systems whose reliable operation is vital to the protection of the public, workers, and the...final safety-class barrier to the release of hazardous materials with potentially serious public consequences. The Defense Nuclear Facilities Safety...the public at certain defense nuclear facilities . This change has resulted in downgrading of the functional safety classification of confinement
Sociotechnical attributes of safe and unsafe work systems.
Kleiner, Brian M; Hettinger, Lawrence J; DeJoy, David M; Huang, Yuang-Hsiang; Love, Peter E D
2015-01-01
Theoretical and practical approaches to safety based on sociotechnical systems principles place heavy emphasis on the intersections between social-organisational and technical-work process factors. Within this perspective, work system design emphasises factors such as the joint optimisation of social and technical processes, a focus on reliable human-system performance and safety metrics as design and analysis criteria, the maintenance of a realistic and consistent set of safety objectives and policies, and regular access to the expertise and input of workers. We discuss three current approaches to the analysis and design of complex sociotechnical systems: human-systems integration, macroergonomics and safety climate. Each approach emphasises key sociotechnical systems themes, and each prescribes a more holistic perspective on work systems than do traditional theories and methods. We contrast these perspectives with historical precedents such as system safety and traditional human factors and ergonomics, and describe potential future directions for their application in research and practice. The identification of factors that can reliably distinguish between safe and unsafe work systems is an important concern for ergonomists and other safety professionals. This paper presents a variety of sociotechnical systems perspectives on intersections between social-organisational and technology-work process factors as they impact work system analysis, design and operation.
NASA Technical Reports Server (NTRS)
Quintana, Rolando
2003-01-01
The goal of this research was to integrate a previously validated and reliable safety model, the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). The theory and principles of the CHTFPM were incorporated into a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies and to facilitate the handling of the enormous quantities of information involved in this type of study. The CHTFPM theory embodies the philosophy of looking at safety engineering from a new perspective: a proactive, rather than reactive, viewpoint. That is, corrective measures are taken before a problem occurs rather than after it has happened. The CHTFPM is therefore a predictive safety methodology, because it foresees or anticipates accidents, system failures and unacceptable risks so that corrective action can be taken to prevent them. Consequently, the safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.
Feasibility Study on Cutting HTPB Propellants with Abrasive Water Jet
NASA Astrophysics Data System (ADS)
Jiang, Dayong; Bai, Yun
2018-01-01
An abrasive water jet is used to carry out experimental research on cutting three-component HTPB propellants, providing technical support for the engineering treatment of waste rocket motors. Based on reliability theory and related research results, the safety and efficiency of cutting sensitive HTPB propellants with an abrasive water jet were experimentally studied. The results show that the safety reliability is not less than 99.52% at a 90% confidence level, so safety is adequately ensured. The cooling and anti-friction effect of the high-speed water jet is the decisive factor in suppressing detonation of the HTPB propellant. Compared with a pure water jet, cutting efficiency was increased by 5%-87%. The study shows that abrasive water jets are practical for cutting HTPB propellants.
Wallen, Erik S; Mulloy, Karen B
2006-10-01
Occupational diseases are a significant problem affecting public health. Safety training is an important method of preventing occupational illness. Training is increasingly being delivered by computer, although theories of learning from computer-based multimedia have been tested almost entirely on college students. This study was designed to determine whether these theories might also be applied to safety training applications for working adults. Participants viewed either computer-based multimedia respirator use training with concurrent narration, narration prior to the animation, or unrelated safety training. Participants then took a five-item transfer test which measured their ability to use their knowledge in new and creative ways. Participants in both computer-based multimedia training conditions did significantly better than the control group on the transfer test. The results of this pilot study suggest that design guidelines developed for younger learners may be effective for training workers in occupational safety and health, although more investigation is needed.
Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.
2013-01-01
While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated the ReHo’s TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497
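ReHo is conventionally computed as Kendall's coefficient of concordance (W) across the time series of a voxel and its neighbours; a minimal volume-based sketch (ignoring tie correction) is shown below, with a random array standing in for preprocessed R-fMRI data.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ts: np.ndarray) -> float:
    """Kendall's W for a (k_voxels, n_timepoints) block of time series (the ReHo value)."""
    k, n = ts.shape
    ranks = np.vstack([rankdata(ts[i]) for i in range(k)])  # rank each voxel's series over time
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (k ** 2 * (n ** 3 - n))

# Hypothetical 27-voxel neighbourhood (3x3x3 cube) with 200 time points.
rng = np.random.default_rng(0)
neighbourhood = rng.standard_normal((27, 200))
print(f"ReHo (Kendall's W) = {kendalls_w(neighbourhood):.3f}")
```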
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnier, Ch.; Mailhe, P.; Sontheimer, F.
2007-07-01
Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices used to achieve this goal are described by reviewing the three pillars of a fuel rod code: the database, the modelling, and the computer and numerical aspects. First, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. An overview of the main modelling aspects is then given, emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faydide, B.
1997-07-01
This paper presents the current and planned numerical developments for improving computing performance in Cathare applications that require real-time execution, such as simulators. Cathare is a thermal-hydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, covering physical models, numerical topics, and validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past using a simplified, fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project, which deals with the use of the most recent version of Cathare inside simulators. In this framework, the numerical developments relate to speeding up the calculation process using parallel processing and to improving code reliability over a large set of NPP transients.
Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks.
Song, Caixia
2017-12-12
Vehicular Ad Hoc Networks (VANETs) employ multichannel operation to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficiency and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station with different priorities according to their criticality for the vehicle's safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into consideration in its design. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is approximated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
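The symmetric rank-one update mentioned above builds a Hessian approximation of a probabilistic constraint without explicit second derivatives; in standard quasi-Newton notation (with s_k the step and y_k the gradient difference, which are generic symbols rather than the article's), it reads:

```latex
B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\mathsf{T}}}{(y_k - B_k s_k)^{\mathsf{T}} s_k},
\qquad s_k = u_{k+1} - u_k, \quad y_k = \nabla g(u_{k+1}) - \nabla g(u_k).
```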
Definition and trade-off study of reconfigurable airborne digital computer system organizations
NASA Technical Reports Server (NTRS)
Conn, R. B.
1974-01-01
A highly reliable, fault-tolerant, reconfigurable computer system for aircraft applications was developed. The development and application of reliability and fault-tolerance assessment techniques are described. Particular emphasis is placed on the needs of an all-digital, fly-by-wire control system appropriate for a passenger-carrying airplane.
7 CFR 1788.2 - General insurance requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... consistent with cost-effectiveness, reliability, safety, and expedition. It is recognized that Prudent... accomplish the desired result at the lowest reasonable cost consistent with cost-effectiveness, reliability... which is used or useful in the borrower's business and which shall be covered by insurance, unless each...
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Code of Federal Regulations, 2012 CFR
2012-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.M.; Boccio, J.L.; Karimian, S.
1986-01-01
In this paper, a trial application of reliability technology to the emergency diesel generator system at the Trojan Nuclear Power Plant is presented. An approach for formulating a reliability program plan for this system is being developed. The trial application has shown that a reliability program process, using risk- and reliability-based techniques, can be interwoven into current plant operational activities to help in controlling, analyzing, and predicting faults that can challenge safety systems. With the cooperation of the utility, Portland General Electric Co., this reliability program can eventually be implemented at Trojan to track its effectiveness.
NASA Technical Reports Server (NTRS)
Leveson, Nancy
1987-01-01
Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude requires: looking at what you do NOT want software to do along with what you want it to do; and assuming things will go wrong. New procedures and changes to entire software development process are necessary: special software safety analysis techniques are needed; and design techniques, especially eliminating complexity, can be very helpful.
Nuclear Warheads: The Reliable Replacement Warhead Program and the Life Extension Program
2007-07-16
The Defense Nuclear Facilities Safety Board was created by Congress 1988 “as an independent oversight organization within the Executive Branch charged... nuclear facilities .” U.S. Defense Nuclear Facilities Safety Board. “Who We Are,” at [http://www.dnfsb.gov/about/index.html]. beginning, addressed safety...approach, if successful, would “reduce or eliminate the need for ESD controls.”55 Kent Fortenberry, Technical Director of the Defense Nuclear Facilities Safety
[A Medical Devices Management Information System Supporting Full Life-Cycle Process Management].
Tang, Guoping; Hu, Liang
2015-07-01
Medical equipment is essential to carrying out medical work. How to ensure the safety and reliability of medical equipment in diagnosis while reducing procurement and maintenance costs is a topic of broad concern. In this paper, product lifecycle management (PLM) and enterprise resource planning (ERP) are drawn upon to establish a full life-cycle management information system. Through the integration and analysis of the relevant data from the various stages of the life cycle, the system can ensure the safety and reliability of medical equipment in operation and provide convincing data for meticulous management.
Reliability and safety, and the risk of construction damage in mining areas
NASA Astrophysics Data System (ADS)
Skrzypczak, Izabela; Kogut, Janusz P.; Kokoszka, Wanda; Oleniacz, Grzegorz
2018-04-01
This article concerns the reliability and safety of building structures in mining areas, with particular emphasis on the quantitative risk analysis of buildings. The issues of threat assessment and risk estimation in the design of facilities in mining exploitation areas are presented, indicating the difficulties and ambiguities associated with their quantification and quantitative analysis. The article presents the concept of quantitative risk assessment of the impact of mining exploitation in accordance with ISO 13824 [1]. The risk analysis is illustrated through an example of a structure located within an area affected by mining exploitation.
Autonomous Control of Space Nuclear Reactors
NASA Technical Reports Server (NTRS)
Merk, John
2013-01-01
Nuclear reactors to support future robotic and manned missions impose new and innovative technological requirements for their control and protection instrumentation. Long-duration surface missions necessitate reliable autonomous operation, and manned missions impose added requirements for failsafe reactor protection. There is a need for an advanced instrumentation and control system for space-nuclear reactors that addresses both aspects of autonomous operation and safety. The Reactor Instrumentation and Control System (RICS) consists of two functionally independent systems: the Reactor Protection System (RPS) and the Supervision and Control System (SCS). Through these two systems, the RICS both supervises and controls a nuclear reactor during normal operational states, as well as monitors the operation of the reactor and, upon sensing a system anomaly, automatically takes the appropriate actions to prevent an unsafe or potentially unsafe condition from occurring. The RPS encompasses all electrical and mechanical devices and circuitry, from sensors to actuation device output terminals. The SCS contains a comprehensive data acquisition system to measure continuously different groups of variables consisting of primary measurement elements, transmitters, or conditioning modules. These reactor control variables can be categorized into two groups: those directly related to the behavior of the core (known as nuclear variables) and those related to secondary systems (known as process variables). Reliable closed-loop reactor control is achieved by processing the acquired variables and actuating the appropriate device drivers to maintain the reactor in a safe operating state. The SCS must prevent a deviation from the reactor nominal conditions by managing limitation functions in order to avoid RPS actions. The RICS has four identical redundancies that comply with physical separation, electrical isolation, and functional independence. This architecture complies with the safety requirements of a nuclear reactor and provides high availability to the host system. The RICS is intended to interface with a host computer (the computer of the spacecraft where the reactor is mounted). The RICS leverages the safety features inherent in Earth-based reactors and also integrates the wide range neutron detector (WRND). A neutron detector provides the input that allows the RICS to do its job. The RICS is based on proven technology currently in use at a nuclear research facility. In its most basic form, the RICS is a ruggedized, compact data-acquisition and control system that could be adapted to support a wide variety of harsh environments. As such, the RICS could be a useful instrument outside the scope of a nuclear reactor, including military applications where failsafe data acquisition and control is required with stringent size, weight, and power constraints.
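As a generic illustration of how four redundant channels can be reconciled before a protective action is taken (a common m-out-of-n pattern, not the actual RICS logic, which is not published here), a two-out-of-four coincidence vote might look like the sketch below.

```python
def two_out_of_four_trip(channel_trips: list[bool]) -> bool:
    """Return True if at least 2 of the 4 redundant channels demand a reactor trip.

    This pattern tolerates one spurious (failed-safe) channel without tripping
    and one failed-dangerous channel without missing a genuine demand.
    """
    assert len(channel_trips) == 4, "expects exactly four redundant channels"
    return sum(channel_trips) >= 2

# Example: one spurious channel alone does not trip; two concurring channels do.
print(two_out_of_four_trip([True, False, False, False]))  # False
print(two_out_of_four_trip([True, True, False, False]))   # True
```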
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco
2005-03-01
GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-Dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences recently. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in the technological areas of advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" for reliably computing optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-Dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour-shape invariant characteristics. Any highly sophisticated application needing effective, robust capture and parameterization of geometric/colour invariant object attributes for reliable automated object learning and discrimination can benefit greatly from the performance of the GEOGINE progressive automated model generation computational kernel. The main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.
Kleinman, L; Leidy, N K; Crawley, J; Bonomi, A; Schoenfeld, P
2001-02-01
Although most health-related quality of life questionnaires are self-administered by means of paper and pencil, new technologies for automated computer administration are becoming more readily available. Novel methods of instrument administration must be assessed for score equivalence in addition to consistency in reliability and validity. The present study compared the psychometric characteristics (score equivalence and structure, internal consistency, and reproducibility reliability and construct validity) of the Quality of Life in Reflux And Dyspepsia (QOLRAD) questionnaire when self-administered by means of paper and pencil versus touch-screen computer. The influence of age, education, and prior experience with computers on score equivalence was also examined. This crossover trial randomized 134 patients with gastroesophageal reflux disease to 1 of 2 groups: paper-and-pencil questionnaire administration followed by computer administration or computer administration followed by use of paper and pencil. To minimize learning effects and respondent fatigue, administrations were scheduled 3 days apart. A random sample of 32 patients participated in a 1-week reproducibility evaluation of the computer-administered QOLRAD. QOLRAD scores were equivalent across the 2 methods of administration regardless of subject age, education, and prior computer use. Internal consistency levels were very high (alpha = 0.93-0.99). Interscale correlations were strong and generally consistent across methods (r = 0.7-0.87). Correlations between the QOLRAD and Short Form 36 (SF-36) were high, with no significant differences by method. Test-retest reliability of the computer-administered QOLRAD was also very high (ICC = 0.93-0.96). Results of the present study suggest that the QOLRAD is reliable and valid when self-administered by means of computer touch-screen or paper and pencil.
Autonomous system for launch vehicle range safety
NASA Astrophysics Data System (ADS)
Ferrell, Bob; Haley, Sam
2001-02-01
The Autonomous Flight Safety System (AFSS) is a launch vehicle subsystem whose ultimate goal is an autonomous capability to assure range safety (people and valuable resources), flight personnel safety, flight assets safety (recovery of valuable vehicles and cargo), and global coverage with a dramatic simplification of range infrastructure. The AFSS is capable of determining current vehicle position and predicting the impact point with respect to flight restriction zones. Additionally, it is able to discern whether or not the launch vehicle is an immediate threat to public safety, and initiate the appropriate range safety response. These features provide for a dramatic cost reduction in range operations and improved reliability of mission success. .
Vincent, Mary Anne; Sheriff, Susan; Mellott, Susan
2015-02-01
High-fidelity simulation has become a growing educational modality among institutions of higher learning ever since the Institute of Medicine recommended that it be used to improve patient safety in 2000. However, there is limited research on the effect of high-fidelity simulation on psychomotor clinical performance improvement of undergraduate nursing students being evaluated by experts using reliable and valid appraisal instruments. The purpose of this integrative review and meta-analysis is to explore what researchers have established about the impact of high-fidelity simulation on improving the psychomotor clinical performance of undergraduate nursing students. Only eight of the 1120 references met inclusion criteria. A meta-analysis using Hedges' g to compute the effect size and direction of impact yielded a range of -0.26 to +3.39. A positive effect was shown in seven of eight studies; however, there were five different research designs and six unique appraisal instruments used among these studies. More research is necessary to determine if high-fidelity simulation improves psychomotor clinical performance in undergraduate nursing students. Nursing programs from multiple sites having a standardized curriculum and using the same appraisal instruments with established reliability and validity are ideal for this work.
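For reference, the effect size pooled in the meta-analysis can be computed per study from group means and standard deviations as sketched below; the group statistics in the example are invented, not taken from the eight included studies.

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: Cohen's d with the small-sample bias correction factor J."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction
    return j * d

# Hypothetical simulation vs. control groups (not data from the reviewed studies).
print(f"g = {hedges_g(m1=82.0, s1=6.0, n1=30, m2=75.0, s2=7.0, n2=30):.2f}")
```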
Rear-end vision-based collision detection system for motorcyclists
NASA Astrophysics Data System (ADS)
Muzammel, Muhammad; Yusoff, Mohd Zuki; Meriaudeau, Fabrice
2017-05-01
In many countries, the motorcyclist fatality rate is much higher than that of other vehicle drivers. Among many other factors, motorcycle rear-end collisions are also contributing to these biker fatalities. To increase the safety of motorcyclists and minimize their road fatalities, this paper introduces a vision-based rear-end collision detection system. The binary road detection scheme contributes significantly to reduce the negative false detections and helps to achieve reliable results even though shadows and different lane markers are present on the road. The methodology is based on Harris corner detection and Hough transform. To validate this methodology, two types of dataset are used: (1) self-recorded datasets (obtained by placing a camera at the rear end of a motorcycle) and (2) online datasets (recorded by placing a camera at the front of a car). This method achieved 95.1% accuracy for the self-recorded dataset and gives reliable results for the rear-end vehicle detections under different road scenarios. This technique also performs better for the online car datasets. The proposed technique's high detection accuracy using a monocular vision camera coupled with its low computational complexity makes it a suitable candidate for a motorbike rear-end collision detection system.
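A simplified sketch of the two image-processing steps named above (Harris corners for candidate vehicle features, a probabilistic Hough transform for road/lane boundaries) is given below using OpenCV; the thresholds, file name, and overall pipeline are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

frame = cv2.imread("rear_view_frame.jpg")           # hypothetical rear-view frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Harris corner response: strong corners are candidate features of a following vehicle.
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(harris > 0.01 * harris.max())

# Probabilistic Hough transform on an edge map: dominant line segments approximate lane markers.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=10)

print(f"{len(corners)} corner candidates, {0 if lines is None else len(lines)} line segments")
```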
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined to offer an advanced tool for the assessment of realistic behaviour, failure and safety of transport structures. The approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. The results can serve as a rational basis for performance and sustainability assessment based on advanced nonlinear computer analysis of transport infrastructure structures such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. Inverse analysis using artificial neural networks and a virtual stochastic simulation approach is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated for different types of transport structures made from various materials using the above-mentioned methodology and tools.
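To make the randomization step concrete, the sketch below draws Latin Hypercube samples of two material parameters and estimates a failure probability against a simple analytical limit state; the parameter names, distributions, demand level, and limit state are placeholders, since the actual response comes from nonlinear finite element analyses.

```python
import numpy as np
from scipy.stats import qmc, norm

# Random material parameters: tensile strength f_t and fracture energy G_f (placeholders).
sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=10_000)                       # uniform samples on [0, 1)^2
f_t = norm.ppf(u[:, 0], loc=3.0, scale=0.3)        # MPa
g_f = norm.ppf(u[:, 1], loc=80.0, scale=10.0)      # N/m

# Placeholder limit state standing in for the nonlinear FE response:
# failure when the resistance measure drops below a demand of 1.0.
resistance = 0.25 * f_t + 0.005 * g_f
p_f = np.mean(resistance < 1.0)
print(f"estimated failure probability = {p_f:.4f}")
```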
Medeiros, Lydia C; Hillers, Virginia N; Chen, Gang; Bergmann, Verna; Kendall, Patricia; Schroeder, Mary
2004-11-01
The objective of this study was to design and develop food safety knowledge and attitude scales based on food-handling guidelines developed by a national panel of food safety experts. Knowledge (n=43) and attitude (n=49) questions were developed and pilot-tested with a variety of consumer groups. Final questions were selected based on item analysis and on validity and reliability statistical tests. Knowledge questions were tested in Washington State with participants in low-income nutrition education programs (pretest/posttest n=58, test/retest n=19) and college students (pretest/posttest n=34). Attitude questions were tested in Ohio with nutrition education program participants (n=30) and college students (non-nutrition majors n=138, nutrition majors n=57). Item analysis, paired sample t tests, Pearson's correlation coefficients, and Cronbach's alpha were used. Reliability and validity tests of individual items and the question sets were used to reduce the scales to 18 knowledge questions and 10 attitude questions. The knowledge and attitude scales covered topics ranked as important by a national panel of experts and met most validity and reliability standards. The 18-item knowledge questionnaire had instructional sensitivity (mean score increase of more than three points after instruction), internal reliability (Cronbach's alpha >.75), and produced similar results in test-retest without intervention (coefficient of stability=.81). Knowledge of correct procedures for hand washing and avoiding cross-contamination was widespread before instruction. Knowledge was limited regarding avoiding food preparation while ill, cooking hamburgers, high-risk foods, and whether cooked rice and potatoes could be stored at room temperature. The 10-item attitude scale had an appropriate range of responses (item difficulty) and produced similar results in test-retest ( P =.01). Internal consistency ranged from alpha=.63 to .89. Students anticipating a career where food safety is valued had higher attitude scale scores than participants of extension education programs. Uses for the knowledge questionnaire include assessment of subject matter knowledge before instruction and knowledge gain after instruction. The attitude scale assesses an outcome variable that may predict food safety behavior.
Inter-rater reliability of an observation-based ergonomics assessment checklist for office workers.
Pereira, Michelle Jessica; Straker, Leon Melville; Comans, Tracy Anne; Johnston, Venerina
2016-12-01
To establish the inter-rater reliability of an observation-based ergonomics assessment checklist for computer workers. A 37-item (38-item if a laptop was part of the workstation) comprehensive observational ergonomics assessment checklist comparable to government guidelines and up to date with empirical evidence was developed. Two trained practitioners assessed full-time office workers performing their usual computer-based work and evaluated the suitability of workstations used. Practitioners assessed each participant consecutively. The order of assessors was randomised, and the second assessor was blinded to the findings of the first. Unadjusted kappa coefficients between the raters were obtained for the overall checklist and subsections that were formed from question-items relevant to specific workstation equipment. Twenty-seven office workers were recruited. The inter-rater reliability between two trained practitioners achieved moderate to good reliability for all except one checklist component. This checklist has mostly moderate to good reliability between two trained practitioners. Practitioner Summary: This reliable ergonomics assessment checklist for computer workers was designed using accessible government guidelines and supplemented with up-to-date evidence. Employers in Queensland (Australia) can fulfil legislative requirements by using this reliable checklist to identify and subsequently address potential risk factors for work-related injury to provide a safe working environment.
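A minimal sketch of the unadjusted kappa statistic used to quantify agreement between two raters is shown below; the example ratings are invented, not checklist data.

```python
import numpy as np

def cohens_kappa(r1, r2) -> float:
    """Unweighted Cohen's kappa between two raters' categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)                                        # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical item-level judgements (1 = suitable, 0 = not suitable) by two practitioners.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```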
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
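A compact sketch of an MPP search in standard normal space using the classic Hasofer-Lind/Rackwitz-Fiessler (HL-RF) recursion, one common search algorithm, is given below; the limit state, starting point, and tolerances are arbitrary illustrations, not the paper's problems.

```python
import numpy as np
from scipy.stats import norm

def g(u):                       # illustrative limit state in u-space (failure when g <= 0)
    return 4.0 - u[0] - u[1] + 0.05 * (u[0] - u[1]) ** 2

def grad_g(u, h=1e-6):          # forward-difference gradient
    e = np.eye(len(u))
    return np.array([(g(u + h * e[i]) - g(u)) / h for i in range(len(u))])

u = np.zeros(2)                 # start the search at the mean point
for _ in range(50):             # HL-RF recursion for the most probable point (MPP)
    grad = grad_g(u)
    u_new = (grad @ u - g(u)) / (grad @ grad) * grad
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)        # minimum distance = safety index
print(f"MPP = {u}, beta = {beta:.3f}, Pf (FORM) ~ {norm.cdf(-beta):.4f}")
```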
Communication devices in the operating room.
Ruskin, Keith J
2006-12-01
Effective communication is essential to patient safety. Although radio pagers have been the cornerstone of medical communication, new devices such as cellular telephones, personal digital assistants (PDAs), and laptop or tablet computers can help anesthesiologists to get information quickly and reliably. Anesthesiologists can use these devices to speak with colleagues, access the medical record, or help a colleague in another location without having to leave a patient's side. Recent advances in communication technology offer anesthesiologists new ways to improve patient care. Anesthesiologists rely on a wide variety of information to make decisions, including vital signs, laboratory values, and entries in the medical record. Devices such as PDAs and computers with wireless networking can be used to access this information. Mobile telephones can be used to get help or ask for advice, and are more efficient than radio pagers. Voice over Internet protocol is a new technology that allows voice conversations to be routed over computer networks. It is widely believed that wireless devices can cause life-threatening interference with medical devices. The actual risk is very low, and is offset by a significant reduction in medical errors that results from more efficient communication. Using common technology like cellular telephones and wireless networks is a simple, cost-effective way to improve patient care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikkel, Daniel J.; Meisner, Robert
The Advanced Simulation and Computing Campaign, herein referred to as the ASC Program, is a core element of the science-based Stockpile Stewardship Program (SSP), which enables assessment, certification, and maintenance of the safety, security, and reliability of the U.S. nuclear stockpile without the need to resume nuclear testing. The use of advanced parallel computing has transitioned from proof-of-principle to become a critical element for assessing and certifying the stockpile. As the initiative phase of the ASC Program came to an end in the mid-2000s, the National Nuclear Security Administration redirected resources to other urgent priorities, and the resulting staff reductions in ASC occurred without the benefit of an analysis of the impact on modern stockpile stewardship, which depends on these new simulation capabilities. Consequently, in mid-2008 the ASC Program management commissioned a study to estimate the essential size and balance needed to sustain advanced simulation as a core component of stockpile stewardship. The ASC Program requires a minimum base staff size of 930 (which includes the number of staff necessary to maintain critical technical disciplines as well as to execute required programmatic tasks) to sustain its essential ongoing role in stockpile stewardship.
76 FR 3604 - Information Collection; Qualified Products List for Engine Driven Pumps
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... levels. 2. Reliability and endurance requirements. These requirements include a 100-hour endurance test... evaluated to meet specific requirements related to safety, effectiveness, efficiency, and reliability of the... of the collection of information, including the validity of the methodology and assumptions used; (3...
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled by balancing economy against reliability. To address the computational intractability caused by evaluating the loss of load probability (LOLP), many probabilistic methods use simplified LOLP formulations to improve computational efficiency, but two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model are explored and analyzed in a small test system and in the IEEE-RTS system. The analysis of these two tradeoffs can help in establishing new, efficient simplified LOLP formulations and new SR optimization models.
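For context, the LOLP quantity that makes these UC models computationally heavy is commonly evaluated from a capacity outage probability table built by convolving unit outage probabilities. The sketch below illustrates that calculation for a single hour; the unit data and load level are hypothetical, not taken from the paper.

```python
# Sketch of a standard LOLP evaluation for a committed unit set: build the
# capacity outage probability table by convolution, then sum the probability of
# outage states whose remaining capacity cannot cover the load.
def lolp(units, load_mw):
    """
    units:   list of (capacity_MW, forced_outage_rate) for committed units.
    load_mw: hourly demand in MW.
    Returns the loss-of-load probability for this hour.
    """
    # Capacity outage probability table: outage MW -> probability.
    table = {0.0: 1.0}
    for capacity, q in units:
        updated = {}
        for outage, prob in table.items():
            # Unit available with probability (1 - q).
            updated[outage] = updated.get(outage, 0.0) + prob * (1.0 - q)
            # Unit on forced outage with probability q.
            updated[outage + capacity] = updated.get(outage + capacity, 0.0) + prob * q
        table = updated

    total_capacity = sum(capacity for capacity, _ in units)
    # Loss of load occurs whenever available capacity falls below the load.
    return sum(p for outage, p in table.items() if total_capacity - outage < load_mw)

# Hypothetical committed units: (capacity in MW, forced outage rate).
units = [(200, 0.02), (150, 0.03), (100, 0.05), (100, 0.05)]
print(f"LOLP = {lolp(units, load_mw=420):.4e}")
```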
CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE (ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.
21 CFR 814.82 - Postapproval requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... periodic reporting on the safety, effectiveness, and reliability of the device for its intended use. FDA... a device and in the advertising of any restricted device of warnings, hazards, or precautions... extent required for the medical welfare of the individual, to determine the safety or effectiveness of...
46 CFR 62.30-10 - Testing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Testing. 62.30-10 Section 62.30-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... override safety trip control systems. This equipment must indicate when it is active. ...
46 CFR 62.30-10 - Testing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Testing. 62.30-10 Section 62.30-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... override safety trip control systems. This equipment must indicate when it is active. ...
46 CFR 62.30-10 - Testing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Testing. 62.30-10 Section 62.30-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... override safety trip control systems. This equipment must indicate when it is active. ...
46 CFR 62.30-10 - Testing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Testing. 62.30-10 Section 62.30-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... override safety trip control systems. This equipment must indicate when it is active. ...
46 CFR 62.30-5 - Independence.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Reliability and Safety Criteria, All Automated Vital Systems § 62.30-5 Independence. (a) Single non-concurrent failures in control, alarm, or instrumentation systems, and their logical consequences, must not prevent...)(2) and (b)(3) of this section, primary control, alternate control, safety control, and alarm and...