Sample records for ultra-reliable computer systems

  1. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
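
    The trade-off sketched in this abstract between adding spares and adding technically diverse systems can be illustrated with a small calculation: identical redundant copies are limited by common cause failures that defeat every copy at once, while diverse systems are assumed not to share such failures. The sketch below is not taken from the paper; the reliability and common cause figures are assumed purely for illustration.

```python
# Illustrative sketch (assumed numbers, not the paper's analysis): comparing
# identical redundancy, which common-cause failures can defeat, against a pair
# of technically diverse systems assumed to share no common-cause failures.
R_single = 0.95          # assumed reliability of one system over the mission
p_common_cause = 0.01    # assumed probability of a common-cause failure hitting all copies

# Three identical copies (spares treated ideally): limited by common-cause failures.
R_identical_triple = (1 - p_common_cause) * (1 - (1 - R_single) ** 3)

# Two technically diverse systems: assumed independent, no shared failure cause.
R_diverse_pair = 1 - (1 - R_single) ** 2

print(f"single system:    {R_single:.4f}")
print(f"identical triple: {R_identical_triple:.4f}")
print(f"diverse pair:     {R_diverse_pair:.4f}")
```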

  2. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
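
    The spares-based approach described here can be illustrated with a standard spares-provisioning calculation: if a unit has a constant failure rate, the number of failures over the mission is Poisson-distributed, and spares are added until the probability of running out falls below a target. The sketch below uses assumed rates and durations and is not the paper's mass or reliability model.

```python
# Illustrative sketch (assumed numbers, not the paper's model): sizing spares for one
# replaceable unit so that the probability of running out during the mission is small.
import math

failure_rate_per_hour = 1e-4      # assumed unit failure rate
mission_hours = 3 * 365 * 24      # assumed ~3-year mission without resupply
target_sufficiency = 0.999        # assumed probability that spares never run out

mean_failures = failure_rate_per_hour * mission_hours

def poisson_cdf(k, mu):
    """P(N <= k) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

spares = 0
while poisson_cdf(spares, mean_failures) < target_sufficiency:
    spares += 1

print(f"expected failures: {mean_failures:.2f}, spares needed: {spares}")
```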

  3. Ultra Reliability Workshop Introduction

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.

    2006-01-01

    This plan is the accumulation of substantial work by a large number of individuals. The Ultra-Reliability team consists of representatives from each center who have agreed to champion the program and be the focal point for their center. A number of individuals from NASA, government agencies (including the military), universities, industry and non-governmental organizations also contributed significantly to this effort. Most of their names may be found on the Ultra-Reliability PBMA website.

  4. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.

  5. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing a credible reliability assessment a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from systems based on replicated redundant hardware, and the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
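
    As a much-reduced illustration of the Markov-style models discussed in this abstract, the sketch below solves a three-state continuous-time Markov model of a triple-modular-redundant unit with an imperfect fault-handling coverage parameter. The structure, rates, and coverage value are assumptions chosen for illustration; they are not taken from the paper.

```python
# Minimal sketch (assumed rates, not from the paper): a continuous-time Markov
# reliability model of a triple-modular-redundant (TMR) unit with an imperfect
# fault-handling "coverage" parameter c.
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # assumed per-module failure rate (per hour)
c = 0.99     # assumed coverage: probability a fault is detected and masked
t = 1000.0   # mission time in hours

# States: 0 = all three modules good, 1 = one covered failure, 2 = system failed.
Q = np.array([
    [-3 * lam, 3 * lam * c, 3 * lam * (1 - c)],
    [0.0,      -2 * lam,    2 * lam],
    [0.0,       0.0,        0.0],
])

p0 = np.array([1.0, 0.0, 0.0])   # start with all modules working
pt = p0 @ expm(Q * t)            # state probabilities at time t
print(f"P(system failed by t = {t:.0f} h) = {pt[2]:.3e}")
```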

  6. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  7. Reliability of Computer Systems ODRA 1305 and R-32

    DTIC Science & Technology

    1983-03-25

    RELIABILITY OF COMPUTER SYSTEMS ODRA 1305 AND R-32. By: Wit Drewniak. English pages: 12. Source: Informatyka, Vol. 14, No. 7, 1979, pp. 5-8. Country of...JS EMC computers installed in ZETO, Katowice", Informatyka, No. 7-8/78, deals with various reliability classes* within the family of the machines of

  8. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.

  9. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III is general enough for use in evaluation of other systems as well.

  10. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  11. Computer calculation of device, circuit, equipment, and system reliability.

    NASA Technical Reports Server (NTRS)

    Crosby, D. R.

    1972-01-01

    A grouping into four classes is proposed for all reliability computations that are related to electronic equipment. Examples are presented of reliability computations in three of these four classes. Each of the three specific reliability tasks described was originally undertaken to satisfy an engineering need for reliability data. The form and interpretation of the print-out of the specific reliability computations is presented. The justification for the costs of these computations is indicated. The skills of the personnel used to conduct the analysis, the interfaces between the personnel, and the timing of the projects are discussed.

  12. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-04-24

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  13. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  14. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess the system reliability of engineering structures with a reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on the road load response of an automotive system tested on a four-post test rig.
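
    The variance-reduction idea behind the approach in this abstract can be illustrated with a static analogue: importance sampling re-weights samples drawn from a shifted distribution by a density ratio, much as Girsanov's transformation re-weights sample paths of a stochastic differential equation. The sketch below is only this simplified analogue with assumed numbers; it does not reproduce the paper's path-level transformation.

```python
# Highly simplified sketch of the measure-change idea (a static analogue only;
# the paper applies Girsanov's transformation to stochastic differential
# equations, which is not reproduced here). Estimate P(X > a) for X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
a = 4.0          # failure threshold; the true probability is about 3.2e-5
n = 10_000

# Direct Monte Carlo: very few samples ever reach the failure region.
x = rng.standard_normal(n)
p_direct = np.mean(x > a)

# Importance sampling: draw from N(a, 1), re-weight by the density ratio
# (the discrete analogue of the Radon-Nikodym derivative in Girsanov's theorem).
y = rng.normal(loc=a, scale=1.0, size=n)
weights = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - a)**2)
p_is = np.mean((y > a) * weights)

print(f"direct MC estimate:      {p_direct:.2e}")
print(f"importance sampling est: {p_is:.2e}")
```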

  15. Ultra-short heart rate variability recording reliability: The effect of controlled paced breathing.

    PubMed

    Melo, Hiago M; Martins, Thiago C; Nascimento, Lucas M; Hoeller, Alexandre A; Walz, Roger; Takase, Emílio

    2018-06-04

    Recent studies have reported that Heart Rate Variability (HRV) indices remain reliable even during recordings shorter than 5 min, suggesting the ultra-short recording method as a valuable tool for autonomic assessment. However, the minimum time-epoch needed to obtain a reliable record for all HRV domains (time, frequency, and Poincare geometric measures), as well as the effect of respiratory rate on the reliability of these indices, remains unknown. Twenty volunteers had their HRV recorded in a seated position during spontaneous and controlled respiratory rhythms. HRV intervals of 1, 2, and 3 min were correlated with the gold standard period (6-min duration) and the mean values of all indices were compared in the two respiratory rhythm conditions. rMSSD and SD1 were more reliable for recordings with ultra-short duration at all time intervals (r values from 0.764 to 0.950, p < 0.05) in the spontaneous breathing condition, whereas the other indices require longer recording times to obtain reliable values. The controlled breathing rhythm evokes stronger r values for time domain indices (r values from 0.83 to 0.99, p < 0.05 for rMSSD), but impairs the replicability of mean values across most time intervals. Although the use of standardized breathing increases the correlation coefficients, all HRV indices showed an increase in mean values (t values from 3.79 to 14.94, p < 0.001) except RR and HF, which presented a decrease (t = 4.14 and 5.96, p < 0.0001). Our results indicate that a proper ultra-short-term recording method can provide a quick and reliable source of cardiac autonomic nervous system assessment. © 2018 Wiley Periodicals, Inc.
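
    The core quantity behind the rMSSD results in this abstract, and the way ultra-short segments are compared against a longer gold standard, can be sketched as follows. The RR-interval data below are synthetic and the segment lengths are assumptions for illustration; this is not the study's data or analysis pipeline.

```python
# Illustrative sketch with synthetic data (not the study's recordings): compute
# rMSSD from RR intervals and correlate an ultra-short ~1-minute segment with a
# longer "gold standard" segment across a group of synthetic subjects.
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return np.sqrt(np.mean(diffs ** 2))

rng = np.random.default_rng(1)
short_values, full_values = [], []
for _ in range(20):                                       # 20 synthetic "volunteers"
    rr = 1000 + np.cumsum(rng.normal(0, 5, size=360))     # ~6 min of ~1 s beats
    rr = np.clip(rr, 600, 1400)
    full_values.append(rmssd(rr))                         # full ~6-minute segment
    short_values.append(rmssd(rr[:60]))                   # first ~1 minute

r = np.corrcoef(short_values, full_values)[0, 1]
print(f"Pearson r between 1-min and 6-min rMSSD: {r:.3f}")
```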

  16. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations, has applied consecutive k-out-of-n system models. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. Also, we write R-Project code based on our proposed method to compute the reliability of linear and circular systems which have a great number of components.
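
    For readers unfamiliar with this system class, the sketch below evaluates the reliability of a linear consecutive k-out-of-n:F system with independent, identical components using a standard dynamic-programming recursion. It is written in Python for illustration and is not the new method or the R-Project code described in the paper.

```python
# Sketch of a standard dynamic-programming evaluation of a *linear*
# consecutive k-out-of-n:F system with i.i.d. components (illustration only;
# the paper proposes its own, different method and provides R code for it).
def linear_consecutive_knf_reliability(n, k, p):
    """System works unless some run of k consecutive components has failed.

    Track the probability of each length (0..k-1) of the current trailing run
    of failed components, given the system has not failed yet.
    """
    q = 1.0 - p                       # component failure probability
    state = [1.0] + [0.0] * (k - 1)   # start: no trailing failures
    for _ in range(n):
        total = sum(state)            # probability the system is still working
        # Next component works (run resets to 0) or fails (run grows by one;
        # a run reaching length k is a system failure and is dropped).
        state = [p * total] + [q * state[j] for j in range(k - 1)]
    return sum(state)

# Example: 10 components; the system fails if any 3 consecutive components fail.
print(linear_consecutive_knf_reliability(n=10, k=3, p=0.9))
```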

  17. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.

  18. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.

  19. Fog-computing concept usage as means to enhance information and control system reliability

    NASA Astrophysics Data System (ADS)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-05-01

    This paper focuses on the reliability issue of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog-computing is to shift computations to the fog-layer of the network, and thus to decrease the workload of the communication environment and data processing components. As for ICS, workload can also be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for one that uses elements of the fog-computing concept. The paper contains some models, selected simulation results, and conclusions about the prospects of fog-computing as a means to enhance ICS reliability.

  20. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2012-01-01

    This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of them, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.

  1. Reliability history of the Apollo guidance computer

    NASA Technical Reports Server (NTRS)

    Hall, E. C.

    1972-01-01

    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  2. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (Inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.

  3. High-reliability computing for the smarter planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul; Manuzzato, Andrea

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  4. Enhancing thermal reliability of fiber-optic sensors for bio-inspired applications at ultra-high temperatures

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon; Kim, Heon-Young; Kim, Dae-Hyun

    2014-07-01

    The rapid growth of bio-(inspired) sensors has led to an improvement in modern healthcare and human-robot systems in recent years. Higher levels of reliability and better flexibility, essential features of these sensors, are very much required in many application fields (e.g. applications at ultra-high temperatures). Fiber-optic sensors, and fiber Bragg grating (FBG) sensors in particular, are being widely studied as suitable sensors for improved structural health monitoring (SHM) due to their many merits. To enhance the thermal reliability of FBG sensors, thermal sensitivity, generally expressed as α_f + ξ_f and considered a constant, should be investigated more precisely. For this purpose, the governing equation of FBG sensors is modified using differential derivatives between the wavelength shift and the temperature change in this study. Through a thermal test ranging from RT to 900 °C, the thermal sensitivity of FBG sensors is successfully examined and this guarantees thermal reliability of FBG sensors at ultra-high temperatures. In detail, α_f + ξ_f has a non-linear dependence on temperature and varies from 6.0 × 10^-6 °C^-1 (20 °C) to 10.6 × 10^-6 °C^-1 (650 °C). Also, FBGs should be carefully used for applications at ultra-high temperatures due to signal disappearance near 900 °C.
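
    For context, the standard FBG temperature response assumed as background here relates the Bragg wavelength shift to the thermal expansion coefficient α_f and the thermo-optic coefficient ξ_f; the study replaces the constant sensitivity with a temperature-dependent derivative form, whose exact expression is not reproduced below.

```latex
% Standard FBG temperature relation (background assumption, not copied from the paper):
\[
  \frac{\Delta\lambda_B}{\lambda_B} \;=\; (\alpha_f + \xi_f)\,\Delta T ,
  \qquad\text{with the temperature-dependent form}\qquad
  \alpha_f + \xi_f \;\approx\; \frac{1}{\lambda_B}\,\frac{\mathrm{d}\lambda_B}{\mathrm{d}T}\bigg|_{T} ,
\]
% so that the sensitivity can rise from about 6.0e-6 per degC near 20 degC to about
% 10.6e-6 per degC near 650 degC, as reported in the abstract, instead of being
% treated as a constant.
```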

  5. UltraForm Finishing (UFF) a 5-axis computer controlled precision optical component grinding and polishing system

    NASA Astrophysics Data System (ADS)

    Bechtold, Michael; Mohring, David; Fess, Edward

    2007-05-01

    OptiPro Systems has developed a new finishing process for the manufacturing of precision optical components. UltraForm Finishing (UFF) has evolved from a tire-shaped tool with polishing material on its periphery to its newest design, which incorporates a precision rubber wheel wrapped with a band of polishing material passing over it. Through our research we have developed a user friendly graphical interface giving the optician a deterministic path for finishing precision optical components. Complex UFF algorithms combine the removal function and desired depth of removal into a motion controlled tool path which minimizes surface roughness and form errors. The UFF process includes 5 axes of computer controlled motion (3 linear and 2 rotary), which provide the flexibility for finishing a variety of shapes including spheres, aspheres, and freeform optics. The long arm extension, along with a range of diameters for the "UltraWheel", provides a unique solution for the finishing of steep concave shapes such as ogives and domes. The UltraForm process utilizes fixed and loose abrasives in combination with our proprietary "UltraBelts" made of a range of materials such as polyurethane, felt, resin, diamond and others.

  6. System reliability approaches for advanced propulsion system structures

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Mahadevan, S.

    1991-01-01

    This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.

  7. Automatic documentation system extension to multi-manufacturers' computers and to measure, improve, and predict software reliability

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1975-01-01

    The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.

  8. Reliability Evaluation of Computer Systems

    DTIC Science & Technology

    1979-04-01

    detection mechanisms. The model provided values for the system availability, mean time before failure (MTBF), and the proportion of time that the system...Stanford University Computer Science 311 (also Electrical Engineering 482), Advanced Computer Organization. Graduate course in computer architecture

  9. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  10. Intraday and Interday Reliability of Ultra-Short-Term Heart Rate Variability in Rugby Union Players.

    PubMed

    Nakamura, Fábio Y; Pereira, Lucas A; Esco, Michael R; Flatt, Andrew A; Moraes, José E; Cal Abad, Cesar C; Loturco, Irineu

    2017-02-01

    Nakamura, FY, Pereira, LA, Esco, MR, Flatt, AA, Moraes, JE, Cal Abad, CC, and Loturco, I. Intraday and interday reliability of ultra-short-term heart rate variability in rugby union players. J Strength Cond Res 31(2): 548-551, 2017-The aim of this study was to examine the intraday and interday reliability of ultra-short-term vagal-related heart rate variability (HRV) in elite rugby union players. Forty players from the Brazilian National Rugby Team volunteered to participate in this study. The natural log of the root mean square of successive RR interval differences (lnRMSSD) assessments were performed on 4 different days. The HRV was assessed twice (intraday reliability) on the first day and once per day on the following 3 days (interday reliability). The RR interval recordings were obtained from 2-minute recordings using a portable heart rate monitor. The relative reliability of intraday and interday lnRMSSD measures was analyzed using the intraclass correlation coefficient (ICC). The typical error of measurement (absolute reliability) of intraday and interday lnRMSSD assessments was analyzed using the coefficient of variation (CV). Both intraday (ICC = 0.96; CV = 3.99%) and interday (ICC = 0.90; CV = 7.65%) measures were highly reliable. The ultra-short-term lnRMSSD is a consistent measure for evaluating elite rugby union players, in both intraday and interday settings. This study provides further validity to using this shortened method in practical field conditions with highly trained team sports athletes.

  11. Computational Modeling Develops Ultra-Hard Steel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  12. A forward view on reliable computers for flight control

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Wensley, J. H.

    1976-01-01

    The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.

  13. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability, but these models are restricted to particular methodologies and to a limited number of parameters, even though a variety of techniques and methodologies may be used for reliability prediction. There is a need to focus on which parameters are considered when estimating reliability, since the reliability of a system may increase or decrease depending on the parameters selected; the factors that most heavily affect system reliability must therefore be identified. Reusability is now widely used across many areas of research and is the basis of Component-Based Systems (CBS). Cost, time, and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness, and several possibilities are available for applying soft computing techniques to medicine-related problems: clinical medicine has used fuzzy logic and neural network methodologies extensively, while basic medical science has used neural-network and genetic-algorithm approaches most frequently and preferably, and medical scientists have shown considerable interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology, and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality while saving time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents the working of soft computing

  14. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  15. Integrated computational study of ultra-high heat flux cooling using cryogenic micro-solid nitrogen spray

    NASA Astrophysics Data System (ADS)

    Ishimoto, Jun; Oh, U.; Tan, Daisuke

    2012-10-01

    A new type of ultra-high heat flux cooling system using the atomized spray of cryogenic micro-solid nitrogen (SN2) particles produced by a superadiabatic two-fluid nozzle was developed and numerically investigated for application to next-generation supercomputer processor thermal management. The fundamental characteristics of heat transfer and cooling performance of micro-solid nitrogen particulate spray impinging on a heated substrate were numerically investigated and experimentally measured by a new type of integrated computational-experimental technique. The employed Computational Fluid Dynamics (CFD) analysis based on the Euler-Lagrange model is focused on the cryogenic spray behavior of atomized particulate micro-solid nitrogen and also on its ultra-high heat flux cooling characteristics. Based on the numerically predicted performance, a new type of cryogenic spray cooling technique for application to an ultra-high heat power density device was developed. In the present integrated computation, it is clarified that the cryogenic micro-solid spray cooling characteristics are affected by several factors of the heat transfer process of micro-solid spray which impinges on the heated surface as well as by the atomization behavior of micro-solid particles. When micro-SN2 spray cooling was used, an ultra-high cooling heat flux level was achieved during operation, a better cooling performance than that with liquid nitrogen (LN2) spray cooling. As micro-SN2 cooling has the advantage of direct latent heat transport which avoids the film boiling state, ultra-short time scale heat transfer in a thin boundary layer is more readily achieved than in LN2 spray. The present numerical prediction of the micro-SN2 spray cooling heat flux profile can reasonably reproduce the measurement results of cooling wall heat flux profiles. The application of micro-solid spray as a refrigerant for next generation computer processors is anticipated, and its ultra-high heat flux technology is expected

  16. Ultra-wide Range Gamma Detector System for Search and Locate Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S. Jr.

    2005-10-26

    Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long range radiation search and close range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose from 100's of uR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long range operations. Long range sensitivity and target location, as well as close range sample selection performance are presented.

  17. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  18. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates space for competitive prices that reduce the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and is combined to get the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
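
    The layered combination described in this abstract can be sketched with a simple series/parallel calculation: layers in series, with the server layer spread over several physical servers. The structure and the reliability figures below are assumptions for illustration only, not the paper's assessment algorithm.

```python
# Minimal sketch (assumed structure and numbers): combining layer reliabilities in
# a three-level scheme -- application, virtualization, and server layers -- with
# the server layer replicated across several physical servers.
def parallel(reliabilities):
    """Reliability of redundant elements: fails only if every element fails."""
    prob_all_fail = 1.0
    for r in reliabilities:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

r_application    = 0.990                          # assumed
r_virtualization = 0.995                          # assumed
r_server_layer   = parallel([0.97, 0.97, 0.97])   # e.g. replicas on three servers

# Layers in series: the application is up only if every layer is up.
r_system = r_application * r_virtualization * r_server_layer
print(f"server layer: {r_server_layer:.6f}, system: {r_system:.6f}")
```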

  19. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  20. Reliability techniques for computer executive programs

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Computer techniques for increasing the stability and reliability of executive and supervisory systems were studied. Program segmentation characteristics are discussed along with a validation system which is designed to retain the natural top down outlook in coding. An analysis of redundancy techniques and roll back procedures is included.

  1. Reliable computation from contextual correlations

    NASA Astrophysics Data System (ADS)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo-two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.

  2. Ultra Compact Optical Pickup with Integrated Optical System

    NASA Astrophysics Data System (ADS)

    Nakata, Hideki; Nagata, Takayuki; Tomita, Hironori

    2006-08-01

    Smaller and thinner optical pickups are needed for portable audio-visual (AV) products and notebook personal computers (PCs). We have newly developed an ultra compact recordable optical pickup for Mini Disc (MD) that measures less than 4 mm from the disc surface to the bottom of the optical pickup, making the optical system markedly compact. We have integrated all the optical components into an objective lens actuator moving unit, while fully satisfying recording and playback performance requirements. In this paper, we propose an ultra compact optical pickup applicable to portable MD recorders.

  3. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

    Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use of ultra-low-power, mixed-signal, unconventional computational elements developed by Johns Hopkins University (JHU) for High Performance Scientific Computing (HPC), and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.

  4. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

    Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers

  5. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is employed in the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.

  6. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and associated computer code have been developed to compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
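
    A minimal sketch of the kind of bookkeeping described above, assuming a constant (exponential) failure rate and a hypothetical schedule of orbit-correction epochs standing in for the timing data the performance model would supply:

      import math

      failure_rate = 2.0e-5                        # hypothetical failure rate per hour
      correction_times = [500, 2500, 6000, 12000]  # hypothetical burn epochs, hours

      for k, t in enumerate(correction_times, start=1):
          reliability = math.exp(-failure_rate * t)   # P(no failure before burn k)
          print(f"P(accomplish correction {k} at t = {t} h) = {reliability:.4f}")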

  7. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    NASA Technical Reports Server (NTRS)

    Migneault, Gerard E.

    1987-01-01

    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  8. Coexistence of enhanced mobile broadband communications and ultra-reliable low-latency communications in mobile front-haul

    NASA Astrophysics Data System (ADS)

    Ying, Kai; Kowalski, John M.; Nogami, Toshizo; Yin, Zhanping; Sheng, Jia

    2018-01-01

    5G systems are expected to support the coexistence of multiple services such as ultra-reliable low-latency communications (URLLC) and enhanced mobile broadband (eMBB) communications. The target of eMBB communications is to meet high-throughput requirements, while URLLC is used for high-priority services. Due to their sporadic nature and low-latency requirement, URLLC transmissions may pre-empt the resources of eMBB transmissions. Our work analyzes the impact of URLLC on eMBB transmission in the mobile front-haul. Solutions are then proposed to guarantee the reliability/latency requirements of URLLC services while minimizing the impact on eMBB services.

  9. Use of Soft Computing Technologies for a Qualitative and Reliable Engine Control System for Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)

    2001-01-01

    The problem addressed in this paper is how Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance through development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principal goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks), some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC), which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for the design, development, implementation, and testing of a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by

  10. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
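
    As an illustration of the kind of basic redundancy equation such a repository holds (a generic textbook formula, not necessarily CARE's own), the reliability of a k-out-of-n arrangement of independent, identical units follows directly from the binomial distribution:

      from math import comb

      def k_of_n_reliability(k: int, n: int, r: float) -> float:
          """Probability that at least k of n independent units, each with reliability r, work."""
          return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

      print(k_of_n_reliability(2, 3, 0.95))   # e.g., triple modular redundancy with 2-of-3 voting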

  11. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing output has resulted in a shortage of efficient ultra-large-scale biological sequence alignment approaches that can cope with different sequence types. Distributed and parallel computing is a crucial technique for accelerating ultra-large (e.g., files larger than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on DNA and protein data sets larger than 1 GB showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.

  12. Validation of radiative transfer computation with Monte Carlo method for ultra-relativistic background flow

    NASA Astrophysics Data System (ADS)

    Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi

    2017-11-01

    We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field by using the Monte Carlo (MC) method in the context of gamma-ray burst (GRB) emission. For obtaining reliable simulation results in the coupled computation of MC radiation transport with relativistic hydrodynamics which can reproduce GRB emission, we validated radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulation results of the angular distribution and spectrum were compared among three different inertial frames and in good agreement with each other. If the time duration for updating the flow-field was sufficiently small to resolve a mean free path of a photon into ten steps, the results were thoroughly converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power-law in frequency whose index was positive in the range from 1 to 10 MeV. The number of photons in the high-energy side decreased with the smeared shock front because the photons were less scattered immediately behind the shock wave due to the small electron number density. The large optical depth near the shock front was needed for obtaining high-energy photons through bulk Compton scattering. Even one-dimensional structure of the shock wave could affect the results of radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. Therefore, a further investigation with a smaller number of cells is required for obtaining realistic high-energy photons with multi-dimensional computations.

  13. Design Strategy for a Formally Verified Reliable Computing Platform

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.

    1991-01-01

    This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.

  14. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
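
    A heavily simplified sketch of the spare-reliability idea (not the paper's algorithm): if the main and spare routing schemes use disjoint minimal paths and are treated as independent, the probability that at least one scheme meets the demand within the thresholds is one minus the probability that all of them fail. The per-scheme success probabilities below are hypothetical inputs that the paper's algorithm would actually compute.

      def spare_reliability(scheme_success_probs):
          """P(at least one routing scheme meets the demand), assuming independent schemes."""
          p_all_fail = 1.0
          for p in scheme_success_probs:
              p_all_fail *= (1.0 - p)
          return 1.0 - p_all_fail

      print(spare_reliability([0.92, 0.88]))   # hypothetical main and spare schemes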

  15. Fault-tolerant building-block computer study

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    Ultra-reliable core computers are required for improving the reliability of complex military systems. Such computers can provide reliable fault diagnosis, failure circumvention, and, in some cases, serve as an automated repairman for their host systems. A small set of building-block circuits which can be implemented as single very-large-scale integration (VLSI) devices, and which can be used with off-the-shelf microprocessors and memories to build self-checking computer modules (SCCMs), is described. Each SCCM is a microcomputer which is capable of detecting its own faults during normal operation and is designed to communicate with other identical modules over one or more MIL-STD-1553A buses. Several SCCMs can be connected into a network with backup spares to provide fault-tolerant operation, i.e., automated recovery from faults. Alternative fault-tolerant SCCM configurations are discussed along with the cost and reliability associated with their implementation.

  16. Temporal reliability of ultra-high field resting-state MRI for single-subject sensorimotor and language mapping.

    PubMed

    Branco, Paulo; Seixas, Daniela; Castro, São Luís

    2018-03-01

    Resting-state fMRI is a well-suited technique to map functional networks in the brain because, unlike task-based approaches, it requires little collaboration from subjects. This is especially relevant in clinical settings where a number of subjects cannot comply with task demands. Previous studies using conventional scanner fields have shown that resting-state fMRI is able to map functional networks in single subjects, albeit with moderate temporal reliability. Ultra-high field (7T) imaging provides higher signal-to-noise ratio and better spatial resolution and is thus well suited to assess the temporal reliability of mapping results, and to determine if resting-state fMRI can be applied in clinical decision making including preoperative planning. We used resting-state fMRI at ultra-high field to examine whether the sensorimotor and language networks are reliable over time - same session and one week after. Resting-state networks were identified for all subjects and sessions with good accuracy. Both networks were well delimited within classical regions of interest. Mapping was temporally reliable at short and medium time-scales as demonstrated by high values of overlap in the same session and one week after for both networks. Results were stable independently of data quality metrics and physiological variables. Taken together, these findings provide strong support for the suitability of ultra-high field resting-state fMRI mapping at the single-subject level. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Impact of coverage on the reliability of a fault tolerant computer

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1975-01-01

    A mathematical reliability model is established for a reconfigurable fault-tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
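
    The sensitivity of system reliability to coverage can be illustrated with a classic duplex model (an assumed textbook form, not the paper's exact model): the system survives if both units work, or if exactly one unit fails and that failure is covered (detected, isolated, and recovered from) with probability c.

      def duplex_reliability(r_unit: float, coverage: float) -> float:
          # both units up, or one failed unit whose failure was successfully covered
          return r_unit**2 + 2.0 * coverage * r_unit * (1.0 - r_unit)

      for c in (0.90, 0.99, 0.999):
          print(f"coverage {c}: system reliability {duplex_reliability(0.999, c):.6f}")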

  18. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.

  19. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
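
    A minimal, non-adaptive version of the underlying importance-sampling estimator is sketched below (the AIS method refines this by adapting the sampling density toward the failure domain); the limit state g and the shifted normal sampling density are hypothetical choices for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)

      def g(x):
          return 3.0 - x            # hypothetical limit state: failure when g(x) < 0

      n = 100_000
      shift = 3.0                   # centre the sampling density near the limit state
      samples = rng.normal(loc=shift, scale=1.0, size=n)   # q = N(3, 1)
      weights = np.exp(-0.5 * samples**2) / np.exp(-0.5 * (samples - shift)**2)  # p/q with p = N(0, 1)
      p_fail = np.mean((g(samples) < 0) * weights)
      print(p_fail)                 # exact answer is 1 - Phi(3), about 1.35e-3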

  20. The use of automatic programming techniques for fault tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  1. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Presented here are the numerous factors that potentially have a degrading effect on system reliability, and the ways in which these factors, peculiar to highly reliable fault tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  2. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  3. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for the same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides the user through (1) selection of a set of failure data, (2) execution of a mathematical model, and (3) analysis of results from the model. Written in C language.
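
    A small example of the kind of failure-data model such a tool executes (a Goel-Okumoto fit is chosen here purely for illustration; CASRE offers several models), using hypothetical cumulative failure counts:

      import numpy as np
      from scipy.optimize import curve_fit

      def goel_okumoto(t, a, b):
          # expected cumulative failures by time t: a * (1 - exp(-b t))
          return a * (1.0 - np.exp(-b * t))

      weeks = np.array([10, 20, 30, 40, 50, 60], dtype=float)          # hypothetical test weeks
      cum_failures = np.array([12, 21, 27, 31, 34, 36], dtype=float)   # hypothetical counts

      (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(40.0, 0.05))
      print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")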

  4. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only about 10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).

  5. Reliability/safety analysis of a fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goddman, H. A.

    1980-01-01

    An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.

  6. Design and Analysis of a Flexible, Reliable Deep Space Life Support System

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    This report describes a flexible, reliable, deep space life support system design approach that uses either storage or recycling or both together. The design goal is to provide the needed life support performance with the required ultra reliability for the minimum Equivalent System Mass (ESM). Recycling life support systems used with multiple redundancy can have sufficient reliability for deep space missions but they usually do not save mass compared to mixed storage and recycling systems. The best deep space life support system design uses water recycling with sufficient water storage to prevent loss of crew if recycling fails. Since the amount of water needed for crew survival is a small part of the total water requirement, the required amount of stored water is significantly less than the total to be consumed. Water recycling with water, oxygen, and carbon dioxide removal material storage can achieve the high reliability of full storage systems with only half the mass of full storage and with less mass than the highly redundant recycling systems needed to achieve acceptable reliability. Improved recycling systems with lower mass and higher reliability could perform better than systems using storage.

  7. Ultra-Scale Computing for Emergency Evacuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng

    2010-01-01

    Emergency evacuations are carried out in anticipation of a disaster such as hurricane landfall or flooding, and in response to a disaster that strikes without a warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulations demand computational capacity beyond the desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.

  8. TOPICAL REVIEW: Ultra-thin film encapsulation processes for micro-electro-mechanical devices and systems

    NASA Astrophysics Data System (ADS)

    Stoldt, Conrad R.; Bright, Victor M.

    2006-05-01

    A range of physical properties can be achieved in micro-electro-mechanical systems (MEMS) through their encapsulation with solid-state, ultra-thin coatings. This paper reviews the application of single source chemical vapour deposition and atomic layer deposition (ALD) in the growth of submicron films on polycrystalline silicon microstructures for the improvement of microscale reliability and performance. In particular, microstructure encapsulation with silicon carbide, tungsten, alumina and alumina-zinc oxide alloy ultra-thin films is highlighted, and the mechanical, electrical, tribological and chemical impact of these overlayers is detailed. The potential use of solid-state, ultra-thin coatings in commercial microsystems is explored using radio frequency MEMS as a case study for the ALD alloy alumina-zinc oxide thin film.

  9. System and method for magnetic current density imaging at ultra low magnetic fields

    DOEpatents

    Espy, Michelle A.; George, John Stevens; Kraus, Robert Henry; Magnelind, Per; Matlashov, Andrei Nikolaevich; Tucker, Don; Turovets, Sergei; Volegov, Petr Lvovich

    2016-02-09

    Preferred systems can include an electrical impedance tomography apparatus electrically connectable to an object; an ultra low field magnetic resonance imaging apparatus including a plurality of field directions and disposable about the object; a controller connected to the ultra low field magnetic resonance imaging apparatus and configured to implement a sequencing of one or more ultra low magnetic fields substantially along one or more of the plurality of field directions; and a display connected to the controller, and wherein the controller is further configured to reconstruct a displayable image of an electrical current density in the object. Preferred methods, apparatuses, and computer program products are also disclosed.

  10. Reliability of ultra-thin insulation coatings for long-term electrophysiological recordings

    NASA Astrophysics Data System (ADS)

    Hooker, S. A.

    2006-03-01

    Improved measurement of neural signals is needed for research into Alzheimer's, Parkinson's, epilepsy, strokes, and spinal cord injuries. At the heart of such instruments are microelectrodes that measure electrical signals in the body. Such electrodes must be small, stable, biocompatible, and robust. However, it is also important that they be easily implanted without causing substantial damage to surrounding tissue. Tissue damage can lead to the generation of immune responses that can interfere with the electrical measurement, preventing long-term recording. Recent advances in microfabrication and nanotechnology afford the opportunity to dramatically reduce the physical dimensions of recording electrodes, thereby minimizing insertion damage. However, one potential cause for concern is the reliability of the insulating coatings, applied to these ultra-fine-diameter wires to precisely control impedance. Such coatings are often polymeric and are applied everywhere but the sharpened tips of the wires, resulting in nominal impedances between 0.5 MOhms and 2.0 MOhms. However, during operation, the polymer degrades, changing the exposed area and the impedance. In this work, ultra-thin ceramic coatings were deposited as an alternative to polymer coatings. Processing conditions were varied to determine the effect of microstructure on measurement stability during two-electrode measurements in a standard buffer solution. Coatings were applied to seven different metals to determine any differences in performance due to the surface characteristics of the underlying wire. Sintering temperature and wire type had significant effects on coating degradation. Dielectric breakdown was also observed at relatively low voltages, indicating that test conditions must be carefully controlled to maximize reliability.

  11. Quantifying Digital Ulcers in Systemic Sclerosis: Reliability of Computer-Assisted Planimetry in Measuring Lesion Size.

    PubMed

    Simpson, V; Hughes, M; Wilkinson, J; Herrick, A L; Dinsdale, G

    2018-03-01

    Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm² and 17.6 ± 19.3 mm², respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. © 2017, American College of Rheumatology.

  12. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
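
    A minimal sketch of the kind of Markov model such a tool would formulate, here for a hypothetical duplex processor with per-unit failure rate lam and repair rate mu; reliability at time t is the probability of not yet having entered the absorbing failed state.

      import numpy as np
      from scipy.linalg import expm

      lam, mu, t = 1e-4, 1e-2, 1000.0   # hypothetical rates (per hour) and mission time
      Q = np.array([
          [-2 * lam,      2 * lam,  0.0],   # state 0: both units up
          [      mu, -(mu + lam),   lam],   # state 1: one unit up, one in repair
          [     0.0,          0.0,  0.0],   # state 2: system failed (absorbing)
      ])
      p0 = np.array([1.0, 0.0, 0.0])
      p_t = p0 @ expm(Q * t)                # transient state probabilities at time t
      print("reliability R(t) =", p_t[0] + p_t[1])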

  13. Multi-hop routing mechanism for reliable sensor computing.

    PubMed

    Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min

    2009-01-01

    Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for a large number of sensors and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the best reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and reduce energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.

  14. A novel and reliable computational intelligence system for breast cancer detection.

    PubMed

    Zadeh Shirazi, Amin; Seyyed Mahdavi Chabok, Seyyed Javad; Mohammadi, Zahra

    2018-05-01

    Cancer is the second most important cause of morbidity and mortality among women, and the most frequent type is breast cancer. This paper suggests a hybrid computational intelligence model based on unsupervised and supervised learning techniques, i.e., self-organizing map (SOM) and complex-valued neural network (CVNN), for reliable detection of breast cancer. The dataset used in this paper consists of 822 patients with five features (patient's breast mass shape, margin, density, patient's age, and Breast Imaging Reporting and Data System assessment). The proposed model, used here for the first time, can be described in two stages. In the first stage, considering the input features, the SOM technique was used to cluster the patients with the most similarity. Then, in the second stage, for each cluster, the patients' features were applied to a complex-valued neural network to classify breast cancer severity (benign or malignant). The obtained results for each patient were compared to the medical diagnosis results using receiver operating characteristic analyses and a confusion matrix. In the testing phase, health and disease detection ratios were 94 and 95%, respectively. Accordingly, the superiority of the proposed model was demonstrated, and it can be used for reliable and robust detection of breast cancer.

  15. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  16. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
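
    One standard way to compute a growth rate from failure data is the Crow-AMSAA (power-law) model, used here as an assumed illustration with hypothetical failure times rather than the report's own model or data; a shape parameter beta below 1 indicates reliability growth (a decreasing failure rate).

      import math

      failure_times = [40, 110, 300, 650, 1200, 2100]   # hypothetical cumulative test hours at each failure
      T = 2500.0                                        # total test time (time-truncated)
      n = len(failure_times)

      beta = n / sum(math.log(T / t) for t in failure_times)   # maximum-likelihood shape parameter
      scale = n / T**beta
      print(f"beta = {beta:.2f} (growth if < 1), scale = {scale:.5f}")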

  17. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  18. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  19. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  20. A low-cost, ultra-fast and ultra-low noise preamplifier for silicon avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Gasmi, Khaled

    2018-02-01

    An ultra-fast and ultra-low noise preamplifier for amplifying the fast and weak electrical signals generated by silicon avalanche photodiodes has been designed and developed. It is characterized by its simplicity, compactness, reliability and low cost of construction. A very wide bandwidth of 300 MHz, a very good linearity from 1 kHz to 280 MHz, an ultra-low noise level at the input of only 1.7 nV/√Hz and a very good stability are its key features. The compact size (70 mm × 90 mm) and light weight (45 g), as well as its excellent characteristics, make this preamplifier very competitive compared to any commercial preamplifier. The preamplifier, which is a main part of the detection system of a homemade laser remote sensing system, has been successfully tested. In addition, it is versatile and can be used in any optical detection system requiring high speed and very low noise electronics.

  1. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
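
    The deterministic bookkeeping behind such a procedure, combining bottom-event probabilities through the fault tree's gates, is easy to sketch; the two-branch tree and the probabilities below are hypothetical, and the paper's contribution is the probabilistic treatment and adaptive importance sampling built on top of this.

      def gate_or(probs):
          # top event occurs if any input event occurs (independent events assumed)
          p_none = 1.0
          for q in probs:
              p_none *= (1.0 - q)
          return 1.0 - p_none

      def gate_and(probs):
          # top event occurs only if all input events occur
          p_all = 1.0
          for q in probs:
              p_all *= q
          return p_all

      branch_a = gate_or([1e-3, 5e-4])    # either component failure fails branch A
      branch_b = gate_and([2e-2, 3e-2])   # branch B requires both failures
      print("P(top event) =", gate_or([branch_a, branch_b]))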

  2. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  3. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  4. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.

  5. Achieving reliability - The evolution of redundancy in American manned spacecraft computers

    NASA Technical Reports Server (NTRS)

    Tomayko, J. E.

    1985-01-01

    The Shuttle is the first launch system deployed by NASA with full redundancy in the on-board computer systems. Fault-tolerance, i.e., restoring to a backup with fewer capabilities, was the method selected for Apollo. The Gemini capsule was the first to carry a computer, which also served as backup for Titan launch vehicle guidance. Failure of the Gemini computer resulted in manual control of the spacecraft. The Apollo system served vehicle flight control and navigation functions. The redundant computer on Skylab provided attitude control only in support of solar telescope pointing. The STS digital, fly-by-wire avionics system requires 100 percent reliability. The Orbiter carries five general purpose computers, four being fully redundant and the fifth serving solely as an ascent-descent tool. The computers are synchronized at input and output points at a rate of about six times a second. The system is projected to cause a loss of an Orbiter only four times in a billion flights.

  6. The 747 primary flight control systems reliability and maintenance study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and cost will provide a baseline for use in trade studies of future flight control system design.

  7. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for system modeling, the lambda-tau method is utilized to formulate mathematical expressions for the failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with those of the existing technique. The components of the robotic system follow an exponential distribution, i.e., have constant failure rates. Sensitivity analysis is also performed, and the impact on system mean time between failures (MTBF) is addressed by varying the other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
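
    One building block of the lambda-tau treatment is arithmetic on triangular fuzzy numbers; for an OR combination of component failures (a series arrangement) the fuzzy failure rates add component-wise. A tiny sketch with hypothetical spreads:

      def add_tfn(*tfns):
          """Component-wise sum of triangular fuzzy numbers given as (low, mode, high)."""
          return tuple(sum(vals) for vals in zip(*tfns))

      motor      = (1.0e-4, 1.5e-4, 2.0e-4)   # hypothetical failure rates per hour
      gripper    = (0.5e-4, 0.8e-4, 1.2e-4)
      controller = (0.2e-4, 0.3e-4, 0.5e-4)

      print("fuzzy system failure rate:", add_tfn(motor, gripper, controller))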

  8. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  9. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.

  10. Osteochondritis dissecans of the humeral capitellum: reliability of four classification systems using radiographs and computed tomography.

    PubMed

    Claessen, Femke M A P; van den Ende, Kimberly I M; Doornberg, Job N; Guitton, Thierry G; Eygendaal, Denise; van den Bekerom, Michel P J

    2015-10-01

    The radiographic appearance of osteochondritis dissecans (OCD) of the humeral capitellum varies according to the stage of the lesion. It is important to evaluate the stage of OCD lesion carefully to guide treatment. We compared the interobserver reliability of currently used classification systems for OCD of the humeral capitellum to identify the most reliable classification system. Thirty-two musculoskeletal radiologists and orthopaedic surgeons specialized in elbow surgery from several countries evaluated anteroposterior and lateral radiographs and corresponding computed tomography (CT) scans of 22 patients to classify the stage of OCD of the humeral capitellum according to the classification systems developed by (1) Minami, (2) Berndt and Harty, (3) Ferkel and Sgaglione, and (4) Anderson on a Web-based study platform including a Digital Imaging and Communications in Medicine viewer. Magnetic resonance imaging was not evaluated as part of this study. We measured agreement among observers using the Siegel and Castellan multirater κ. All OCD classification systems, except for Berndt and Harty, which had poor agreement among observers (κ = 0.20), had fair interobserver agreement: κ was 0.27 for the Minami, 0.23 for Anderson, and 0.22 for Ferkel and Sgaglione classifications. The Minami Classification was significantly more reliable than the other classifications (P < .001). The Minami Classification was the most reliable for classifying different stages of OCD of the humeral capitellum. However, it is unclear whether radiographic evidence of OCD of the humeral capitellum, as categorized by the Minami Classification, guides treatment in clinical practice as a result of this fair agreement. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
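
    For readers who want to reproduce this kind of multirater agreement figure, a compact computation of Fleiss' kappa (used here as a stand-in for the Siegel and Castellan formulation) on hypothetical ratings looks like this; rows are cases and columns count how many raters chose each stage.

      import numpy as np

      counts = np.array([        # hypothetical: 5 cases, 4 raters, 3 stages
          [4, 0, 0],
          [2, 2, 0],
          [0, 3, 1],
          [1, 1, 2],
          [0, 0, 4],
      ], dtype=float)

      n_raters = counts.sum(axis=1)[0]
      p_case = ((counts**2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
      p_obs = p_case.mean()                          # observed agreement
      p_cat = counts.sum(axis=0) / counts.sum()      # category proportions
      p_exp = (p_cat**2).sum()                       # chance agreement
      kappa = (p_obs - p_exp) / (1.0 - p_exp)
      print(f"multirater kappa = {kappa:.2f}")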

  11. Common Cause Failures and Ultra Reliability

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.
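
    A hedged arithmetic illustration of the point about on-line redundancy: the standard beta-factor model (not taken from this record) treats a fraction beta of each unit's failure rate as common-cause. The Python sketch below shows how even a small beta erodes the benefit of a redundant pair; the failure rate and mission time are invented.

        # Illustrative beta-factor arithmetic: two identical active-redundant units with constant
        # failure rate lam (per hour) over mission time t (hours). A fraction beta of the failure
        # rate is common-cause and defeats both units at once. All numbers are made up.
        import math

        def redundant_pair_reliability(lam, t, beta=0.0):
            r_indep = math.exp(-(1.0 - beta) * lam * t)    # survival against independent failures
            r_common = math.exp(-beta * lam * t)           # survival against common-cause events
            return r_common * (1.0 - (1.0 - r_indep) ** 2)

        lam, t = 1e-4, 8760.0            # invented unit failure rate and a one-year mission
        for beta in (0.0, 0.05, 0.10):
            print(f"beta={beta:.2f}  R_system={redundant_pair_reliability(lam, t, beta):.4f}")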

  12. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    PubMed

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  13. Reliable semiclassical computations in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dine, Michael (Department of Physics, Stanford University, Stanford, California 94305-4060); Festuccia, Guido

    We revisit the question of whether or not one can perform reliable semiclassical QCD computations at zero temperature. We study correlation functions with no perturbative contributions, and organize the problem by means of the operator product expansion, establishing a precise criterion for the validity of a semiclassical calculation. For N_f > N, a systematic computation is possible; for N_f < N, ... computations in the chiral limit.

  14. Reliability and concurrent validity of the computer workstation checklist.

    PubMed

    Baker, Nancy A; Livengood, Heather; Jacobs, Karen

    2013-01-01

    Self-report checklists are used to assess computer workstation setup, typically by workers not trained in ergonomic assessment or checklist interpretation. Though many checklists exist, few have been evaluated for reliability and validity. This study examined the reliability and validity of the Computer Workstation Checklist (CWC) for identifying mismatches between workers' self-reported workstation problems and those found by expert assessment. The CWC was completed at baseline and at 1 month to establish reliability. Validity was determined by comparing CWC baseline data to an onsite workstation evaluation conducted by an expert in computer workstation assessment. Reliability ranged from fair to near perfect (prevalence-adjusted bias-adjusted kappa, 0.38-0.93); items with the strongest agreement were related to the input device, monitor, computer table, and document holder. The CWC had greater specificity (11 of 16 items) than sensitivity (3 of 16 items). The positive predictive value was greater than the negative predictive value for all questions. The CWC has strong reliability. The sensitivity and specificity results suggest that workers often indicated no problems with workstation setup when problems existed. The evidence suggests that while the CWC may not be valid when used alone, it may be a suitable adjunct to an ergonomic assessment completed by professionals.
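
    The Python sketch below reproduces the kinds of statistics the study reports, using an invented 2x2 table rather than the CWC data: the prevalence-adjusted bias-adjusted kappa (PABAK), which reduces to twice the observed agreement minus one, and sensitivity, specificity, and predictive values against an expert evaluation.

        # Toy sketch only; the counts are invented, not the CWC study data.
        def pabak(agree, total):
            """Prevalence-adjusted bias-adjusted kappa: 2 * observed agreement - 1."""
            return 2.0 * (agree / total) - 1.0

        def diagnostics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV, NPV from a 2x2 table vs an expert reference."""
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            return sens, spec, ppv, npv

        print(f"PABAK = {pabak(agree=38, total=45):.2f}")
        sens, spec, ppv, npv = diagnostics(tp=6, fp=3, fn=10, tn=26)
        print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")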

  15. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical interaction models with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system can also use available cloud resources. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report on the experience of migrating to the new system.

  16. Reliability testing of ultra-low noise InGaAs quad photoreceivers

    NASA Astrophysics Data System (ADS)

    Joshi, Abhay M.; Datta, Shubhashish; Prasad, Narasimha; Sivertz, Michael

    2018-02-01

    We have developed ultra-low noise quadrant InGaAs photoreceivers for multiple applications ranging from Laser Interferometric Gravitational Wave Detection to 3D Wind Profiling. Devices with diameters of 0.5 mm, 1 mm, and 2 mm were processed, with the nominal capacitance of a single quadrant of a 1 mm quad photodiode being 2.5 pF. The 1 mm diameter InGaAs quad photoreceivers, using low-noise, bipolar-input OpAmp circuitry, exhibit an equivalent input noise per quadrant of <1.7 pA/√Hz in the 2 to 20 MHz frequency range. The InGaAs quad photoreceivers have undergone the following reliability tests: 30 MeV proton radiation up to a Total Ionizing Dose (TID) of 50 krad, mechanical shock, and sinusoidal vibration.

  17. Formal design and verification of a reliable computing platform for real-time control. Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.

    1990-01-01

    A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
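
    A minimal Python sketch of the replication-and-voting idea follows: a bitwise 2-of-3 majority voter over replica outputs, together with the textbook reliability of triple modular redundancy with a perfect voter, R_TMR = 3R^2 - 2R^3. The numbers are illustrative, and the sketch does not model the report's frame-based recovery or its formally verified properties.

        # Minimal sketch of replicated processing with majority voting (illustrative only).
        def majority3(a, b, c):
            """Bitwise 2-of-3 vote over three integer replica outputs."""
            return (a & b) | (a & c) | (b & c)

        def tmr_reliability(r):
            """Probability that at least 2 of 3 replicas work, assuming a perfect voter."""
            return 3 * r**2 - 2 * r**3

        print(hex(majority3(0xFF00, 0xFF01, 0xFD00)))   # 0xff00: single-replica bit errors are outvoted
        for r in (0.90, 0.99, 0.999):
            print(f"R_replica={r}  R_TMR={tmr_reliability(r):.6f}")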

  18. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  19. A Conceptual Design for a Reliable Optical Bus (ROBUS)

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Malekpour, Mahyar; Torres, Wilfredo

    2002-01-01

    The Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) is a new family of fault-tolerant architectures under development at NASA Langley Research Center (LaRC). The SPIDER is a general-purpose computational platform suitable for use in ultra-reliable embedded control applications. The design scales from a small configuration supporting a single aircraft function to a large distributed configuration capable of supporting several functions simultaneously. SPIDER consists of a collection of simplex processing elements communicating via a Reliable Optical Bus (ROBUS). The ROBUS is an ultra-reliable, time-division multiple access broadcast bus with strictly enforced write access (no babbling idiots) providing basic fault-tolerant services using formally verified fault-tolerance protocols including Interactive Consistency (Byzantine Agreement), Internal Clock Synchronization, and Distributed Diagnosis. The conceptual design of the ROBUS is presented in this paper including requirements, topology, protocols, and the block-level design. Verification activities, including the use of formal methods, are also discussed.

  20. Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    White, Mark; MacNeal, Kristen; Cooper, Mark

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but it increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger scaled devices. Higher performance, and lower operating and stand-by power characteristics of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet low power consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.

  1. ORCHID - a computer simulation of the reliability of an NDE inspection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moles, M.D.C.

    1987-03-01

    CANDU pressurized heavy water reactors contain several hundred horizontally-mounted zirconium alloy pressure tubes. Following a pressure tube failure, a pressure tube inspection system called CIGARette was rapidly designed, manufactured and put into operation. Defects called hydride blisters were found to be the cause of the failure, and were detected using a combination of eddy current and ultrasonic scans. A number of improvements were made to CIGARette during the inspection period. The ORCHID computer program models the operation of the delivery system and the eddy current and ultrasonic systems by imitating the on-reactor decision-making procedure. ORCHID predicts that during the early stage of development, less than one blistered tube in three would be detected, while less than one in two would be detected in the middle development stage. However, ORCHID predicts that during the late development stage, the probability of detection will be over 90%, primarily due to the inclusion of axial ultrasonic scans (a procedural modification). Rotational and axial slip could severely reduce the probability of detection. Comparison of CIGARette's inspection data with ORCHID's predictions indicates that the latter are compatible with the actual inspection results, though the numbers are small and the data uncertain. It should be emphasized that the CIGARette system has been essentially replaced with the much more reliable CIGAR system.

  2. Algorithmic mechanisms for reliable crowdsourcing computation under collusion.

    PubMed

    Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel

    2015-01-01

    We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.
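
    The Python sketch below is a toy Monte Carlo illustration of the master-worker setting only; it is not the paper's game-theoretic analysis, and the worker count, cheating probability, and majority-acceptance rule are invented assumptions. It simply shows how the probability that the master obtains the correct result degrades as colluding cheaters (all returning the same bogus answer) become more likely.

        # Toy simulation of a master assigning one task to n workers. Each worker independently
        # cheats with probability q_cheat; colluding cheaters return the same bogus answer, and
        # the master accepts the answer returned by the largest group. Parameters are invented.
        import random
        from collections import Counter

        def p_master_correct(n_workers, q_cheat, trials=100_000, rng=random.Random(1)):
            correct = 0
            for _ in range(trials):
                answers = ["good" if rng.random() > q_cheat else "bogus" for _ in range(n_workers)]
                winner, _ = Counter(answers).most_common(1)[0]
                correct += (winner == "good")
            return correct / trials

        for q in (0.1, 0.3, 0.5):
            print(f"q_cheat={q:.1f}  P(master gets correct result)={p_master_correct(5, q):.3f}")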

  3. Algorithmic Mechanisms for Reliable Crowdsourcing Computation under Collusion

    PubMed Central

    Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Pareja, Daniel

    2015-01-01

    We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers’ decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game. PMID:25793524

  4. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
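
    As a hand-written example of the kind of Markov reliability model such a tool generates automatically, the Python sketch below evaluates a three-state model (duplex operation, simplex operation after a covered failure, system failed) by matrix exponentiation of an invented generator matrix. The failure rate and coverage value are placeholders, and the real ARM output would be consumed by dedicated model-evaluation programs rather than SciPy.

        # Hand-specified three-state Markov reliability model (illustrative rates only).
        import numpy as np
        from scipy.linalg import expm

        lam = 1e-4      # per-processor failure rate (1/hour), invented
        c = 0.95        # coverage: probability a first failure is detected and reconfigured

        # Generator matrix Q, states = [duplex, simplex, failed]; each row sums to zero.
        Q = np.array([
            [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],   # duplex: covered vs uncovered failure
            [0.0,      -lam,          lam],                 # simplex: the next failure is fatal
            [0.0,       0.0,          0.0],                 # failed: absorbing state
        ])

        p0 = np.array([1.0, 0.0, 0.0])
        for t in (10.0, 100.0, 1000.0):                     # mission times in hours
            p_t = p0 @ expm(Q * t)
            print(f"t={t:7.1f} h  P(operational)={p_t[0] + p_t[1]:.6f}  P(failed)={p_t[2]:.2e}")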

  5. Energy efficient hybrid computing systems using spin devices

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.

  6. Reliability Evaluation of Computer Systems.

    DTIC Science & Technology

    1981-01-01

    algorithms in hardware is not restricted by the designs of particular circuits. Applications could be made in new computer architectures; one candidate ... pp. 137-148, IEEE, Chicago, Illinois, September 1963. (With J.F. Wakerly) "Design of Low-Cost General-Purpose Self-Diagnosing Computers," Proc. ... Proc., IEEE Int'l Solid-State Circuits Conference, Philadelphia, Pennsylvania, February 16-18, 1977. (With J.F. Wakerly) "Microcomputers in the ...

  7. Fly-by-Wire Systems Enable Safer, More Efficient Flight

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Using the ultra-reliable Apollo Guidance Computer that enabled the Apollo Moon missions, Dryden Flight Research Center engineers, in partnership with industry leaders such as Cambridge, Massachusetts-based Draper Laboratory, demonstrated that digital computers could be used to fly aircraft. Digital fly-by-wire systems have since been incorporated into large airliners, military jets, revolutionary new aircraft, and even cars and submarines.

  8. The art of fault-tolerant system reliability modeling

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1990-01-01

    A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead the use of several recently developed computer programs--SURE, ASSIST, STEM, PAWS--which automate the generation and solution of these models is described.

  9. Comparative performance analysis for computer aided lung nodule detection and segmentation on ultra-low-dose vs. standard-dose CT

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Rogalla, Patrik; Opfer, Roland; Ekin, Ahmet; Romano, Valentina; Bülow, Thomas

    2006-03-01

    The performance of computer aided lung nodule detection (CAD) and computer aided nodule volumetry is compared between standard-dose (70-100 mAs) and ultra-low-dose CT images (5-10 mAs). A direct quantitative performance comparison was possible, since for each patient both an ultra-low-dose and a standard-dose CT scan were acquired within the same examination session. The data sets were recorded with a multi-slice CT scanner at the Charité university hospital Berlin with 1 mm slice thickness. Our computer aided nodule detection and segmentation algorithms were deployed on both ultra-low-dose and standard-dose CT data without any dose-specific fine-tuning or preprocessing. As a reference standard, 292 nodules from 20 patients were visually identified, each nodule in both the ultra-low-dose and standard-dose data sets. The CAD performance was analyzed by means of multiple FROC curves for different lower thresholds of the nodule diameter. For nodules with a volume-equivalent diameter equal to or larger than 4 mm (149 nodule pairs), we observed a detection rate of 88% at a median false positive rate of 2 per patient in standard-dose images, and a detection rate of 86% in ultra-low-dose images, also at 2 FPs per patient. Including even smaller nodules equal to or larger than 2 mm (272 nodule pairs), we observed a detection rate of 86% in standard-dose images and 84% in ultra-low-dose images, both at a rate of 5 FPs per patient. Moreover, we observed a correlation of 94% between the volume-equivalent nodule diameter as automatically measured on ultra-low-dose versus standard-dose images, indicating that ultra-low-dose CT is also feasible for growth-rate assessment in follow-up examinations. The comparable performance of lung nodule CAD on ultra-low-dose and standard-dose images is of particular interest with respect to lung cancer screening of asymptomatic patients.

  10. Co-detection: ultra-reliable nanoparticle-based electrical detection of biomolecules in the presence of large background interference.

    PubMed

    Liu, Yang; Gu, Ming; Alocilja, Evangelyn C; Chakrabartty, Shantanu

    2010-11-15

    An ultra-reliable technique for detecting trace quantities of biomolecules is reported. The technique, called "co-detection", exploits the non-linear redundancy amongst synthetically patterned biomolecular logic circuits for deciphering the presence or absence of target biomolecules in a sample. In this paper, we verify the "co-detection" principle on gold-nanoparticle-based conductimetric soft-logic circuits which use a silver-enhancement technique for signal amplification. Using co-detection, we have been able to demonstrate a great improvement in the reliability of detecting mouse IgG at concentration levels that are 10^5 times lower than the concentration of rabbit IgG, which serves as background interference. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  12. Ultra Safe And Secure Blasting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, M M

    2009-07-27

    The Ultra is a blasting system that is designed for special applications where the risk and consequences of unauthorized demolition or blasting are so great that the use of an extraordinarily safe and secure blasting system is justified. Such a blasting system would be connected and logically welded together through digital code-linking as part of the blasting system set-up and initialization process. The Ultra's security is so robust that it will defeat even the people who designed and built the components in any attempt at unauthorized detonation. Anyone attempting to gain unauthorized control of the system by substituting components or tapping into communications lines will be thwarted by their inability to provide encrypted authentication. Authentication occurs through the use of codes that are generated by the system during initialization code-linking, and the codes remain unknown to anyone, including the authorized operator. Once code-linked, a closed system has been created. The system requires all components to be connected as they were during initialization, as well as a unique code entered by the operator, for function and blasting.

  13. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. The program uses results from a commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate the reliability of a component in the presence of inherent surface- and/or volume-type flaws. It computes a measure of reliability using a finite-element mathematical model that is applicable to multiple materials, in the sense that the model is a function of the statistical characterizations of many ceramic materials. The reliability analysis uses element stress, temperature, area, and volume outputs obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
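
    The Python sketch below is a greatly simplified, illustrative version of the kind of calculation CARES performs: a fast-fracture failure probability from element stresses and volumes using a two-parameter Weibull volume-flaw model. The element data and Weibull parameters are invented, and the actual program additionally handles surface flaws, multiaxial stress criteria, and material-specific statistical characterizations.

        # Simplified two-parameter Weibull volume-flaw failure probability (illustrative only).
        import math

        def weibull_failure_probability(elements, m, sigma_0):
            """elements: list of (stress_MPa, volume_mm3); m: Weibull modulus;
            sigma_0: characteristic strength of unit volume (MPa * mm^(3/m))."""
            risk = sum(vol * (max(stress, 0.0) / sigma_0) ** m for stress, vol in elements)
            return 1.0 - math.exp(-risk)

        # Invented element stresses and volumes, as if taken from a finite-element run.
        elements = [(180.0, 2.0), (220.0, 1.5), (150.0, 3.0), (250.0, 0.8)]
        print(f"P_f = {weibull_failure_probability(elements, m=10.0, sigma_0=400.0):.3e}")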

  14. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of the complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  15. Optimization of life support systems and their systems reliability

    NASA Technical Reports Server (NTRS)

    Fan, L. T.; Hwang, C. L.; Erickson, L. E.

    1971-01-01

    The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem that has been considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of the system reliability of life support systems and subsystems; (7) modeling, simulation and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.

  16. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  17. 125Mbps ultra-wideband system evaluation for cortical implant devices.

    PubMed

    Luo, Yi; Winstead, Chris; Chiang, Patrick

    2012-01-01

    This paper evaluates the performance of a 125 Mbps Impulse Radio Ultra-Wideband (IR-UWB) system for cortical implant devices using a low-Q inductive coil link operating in the near-field domain. We examine design tradeoffs between transmitted signal amplitude, reliability, noise, and clock jitter. The IR-UWB system is modeled using measured parameters from a reported UWB transceiver implemented in 90 nm CMOS technology. Non-optimized inductive coupling coils with low Q values for near-field data transmission are modeled in order to build a full channel from the transmitter (Tx) to the receiver (Rx). On-off keying (OOK) modulation is used together with a low-complexity convolutional error correcting code. The simulation results show that even though the low-Q coils decrease the amplitude of the received pulses, the UWB system can still achieve acceptable performance when error correction is used. These results predict that UWB is a good candidate for delivering high data rates in cortical implant devices.

  18. On the reliability of computed chaotic solutions of non-linear differential equations

    NASA Astrophysics Data System (ADS)

    Liao, Shijun

    2009-08-01

    A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtful when t > Tc. This provides a strategy for identifying the reliable part of a given computed result. In this way, computational phenomena such as computational chaos (CC), computational periodicity (CP) and computational prediction uncertainty, which are mainly based on long-term properties of computed time-series, can be completely avoided. Using this concept, the famous conclusion 'accurate long-term prediction of chaos is impossible' should be replaced by the more precise conclusion that 'accurate prediction of chaos beyond the critical predictable time Tc is impossible'. The concept thus also provides a timescale for determining whether or not a particular time is long enough for a given non-linear dynamic system. Besides, the influence of data inaccuracy and of various numerical schemes on the critical predictable time is investigated in detail by using symbolic computation software as a tool. A reliable chaotic solution of the Lorenz equation over a rather large interval 0 <= t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and of the computed data at each time step, which is mathematically necessary to obtain such a reliable chaotic solution over such a long time, is so high that it is physically unattainable due to the Heisenberg uncertainty principle in quantum physics. This, however, gives rise to a so-called 'precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even macroscopic phenomena might be essentially stochastic and thus could be described more economically by probability.
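
    As a rough illustration of the predictability-horizon idea (not Liao's high-precision "clean" computation), the Python sketch below integrates the Lorenz system from two initial conditions differing by 1e-10 and records the time at which the trajectories separate appreciably; that separation time plays the role of a practical predictable-time estimate for this perturbation level and solver tolerance.

        # Illustrative estimate of a predictability horizon for the Lorenz system.
        import numpy as np
        from scipy.integrate import solve_ivp

        def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t_end = 60.0
        t_eval = np.linspace(0.0, t_end, 6001)
        sol_a = solve_ivp(lorenz, (0.0, t_end), [1.0, 1.0, 1.0],
                          t_eval=t_eval, rtol=1e-10, atol=1e-12)
        sol_b = solve_ivp(lorenz, (0.0, t_end), [1.0 + 1e-10, 1.0, 1.0],
                          t_eval=t_eval, rtol=1e-10, atol=1e-12)

        separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
        diverged = np.where(separation > 1.0)[0]
        t_c = t_eval[diverged[0]] if diverged.size else t_end
        print(f"trajectories separate by > 1 at t ≈ {t_c:.1f} Lorenz time units")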

  19. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  20. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.

  1. Verification Methodology of Fault-tolerant, Fail-safe Computers Applied to MAGLEV Control Computer Systems

    DOT National Transportation Integrated Search

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...

  2. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on its quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time, PASS's development history, and other indicators of the reliability of the system's development. The achieved reliability of the system is also compared to its predicted reliability.

  3. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
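
    The Python sketch below is a simplified, occupancy-style search in the spirit of choosing a blocks/threads configuration under per-node resource constraints; it is not the paper's TLPOM and ignores ILP and energy entirely. The streaming-multiprocessor limits are typical of a Kepler-class GPU, and the kernel's register and shared-memory usage are invented.

        # Simplified resource-constrained search for a threads-per-block choice (illustrative only).
        def best_block_size(regs_per_thread, smem_per_block,
                            max_threads_sm=2048, max_blocks_sm=16,
                            regs_per_sm=65536, smem_per_sm=49152, warp=32):
            best = (0, 0)   # (resident threads per SM, threads per block)
            for threads in range(warp, 1025, warp):
                blocks = min(max_blocks_sm,
                             max_threads_sm // threads,
                             regs_per_sm // (regs_per_thread * threads),
                             smem_per_sm // smem_per_block if smem_per_block else max_blocks_sm)
                best = max(best, (blocks * threads, threads))
            return best

        resident, threads = best_block_size(regs_per_thread=40, smem_per_block=8192)
        print(f"choose {threads} threads/block -> {resident} resident threads per SM")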

  4. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  5. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software for driving large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  6. Technical description of space ultra reliable modular computer (SUMC), model 2 B

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design features of the SUMC-2B computer, also called the IBM-HTC, are described. It is a general-purpose digital computer implemented with flexible hardware elements and microprogramming to enable low-cost customization for a wide range of applications. It executes the S/360 standard instruction set to maintain problem-state compatibility. Memory technology, extended instruction sets, and I/O channel variations are among the available options.

  7. Design Considerations for a Water Treatment System Utilizing Ultra-Violet Light Emitting Diodes

    DTIC Science & Technology

    2014-03-27

    Design Considerations for a Water Treatment System Utilizing Ultra-Violet Light Emitting Diodes (AFIT-ENV-14-M-58). Distribution unlimited.

  8. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases the system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  9. Virtual Colonoscopy Screening With Ultra Low-Dose CT and Less-Stressful Bowel Preparation: A Computer Simulation Study

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wang, Su; Li, Lihong; Fan, Yi; Lu, Hongbing; Liang, Zhengrong

    2008-10-01

    Computed tomography colonography (CTC), or CT-based virtual colonoscopy (VC), is an emerging tool for the detection of colonic polyps. Compared to conventional fiber-optic colonoscopy, VC has demonstrated the potential to become a mass screening modality in terms of safety, cost, and patient compliance. However, current CTC delivers excessive X-ray radiation to the patient during data acquisition. This radiation is a major concern for the screening application of CTC. In this work, we performed a simulation study to demonstrate a possible ultra-low-dose CT technique for VC. The ultra-low-dose abdominal CT images were simulated by adding noise to the sinograms of patient CTC images acquired with normal-dose scans at the 100 mAs level. The simulated noisy sinogram or projection data were first processed by a Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) restoration method and then reconstructed by a filtered backprojection algorithm to form the ultra-low-dose CT images. The patient-specific virtual colon lumen was constructed and navigated by a VC system after electronic colon cleansing of the orally tagged residual stool and fluid. With the KL-PWLS noise reduction, the colon lumen can be successfully constructed and colonic polyps can be detected at an ultra-low-dose level below 50 mAs. Polyps can be detected more easily with the KL-PWLS noise reduction than with conventional noise filters such as the Hanning filter. These promising results indicate the feasibility of an ultra-low-dose CTC pipeline for colon screening with less-stressful bowel preparation by fecal tagging with oral contrast.
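
    To illustrate the restoration step, the Python sketch below applies a one-dimensional penalized weighted least-squares (PWLS) smoother to a noisy profile whose noise variance grows with the signal, loosely mimicking low-dose sinogram noise. It is a toy stand-in: the method in the record operates on full sinograms in the Karhunen-Loeve domain, and the signal, weights, and penalty strength here are invented.

        # 1-D toy PWLS restoration: minimize (y - x)' W (y - x) + beta * ||D x||^2,
        # with W = diag(1/variance) and D the first-difference operator.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        clean = 100.0 + 40.0 * np.exp(-((np.arange(n) - 100) / 15.0) ** 2)   # smooth "profile"
        noisy = clean + rng.normal(scale=np.sqrt(clean))                      # variance ~ signal

        w = 1.0 / np.maximum(noisy, 1.0)          # plug-in weights estimated from the data
        W = np.diag(w)
        D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
        beta = 0.3                                # penalty strength, chosen by eye for this toy
        x_hat = np.linalg.solve(W + beta * (D.T @ D), w * noisy)   # closed-form PWLS solution

        print(f"RMSE noisy = {np.sqrt(np.mean((noisy - clean) ** 2)):.2f}")
        print(f"RMSE PWLS  = {np.sqrt(np.mean((x_hat - clean) ** 2)):.2f}")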

  10. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that determines the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
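
    The Python sketch below solves the same kind of problem with a standard greedy heuristic (best reliability gain per unit cost) rather than the paper's pair-wise comparative progression technique, which is not reproduced here; the component reliabilities, costs, and budget are invented. It illustrates the series-system-of-parallel-subsystems reliability model that redundancy allocation typically assumes.

        # Greedy redundancy allocation for a series system of parallel-redundant subsystems.
        def system_reliability(r, n):
            prob = 1.0
            for ri, ni in zip(r, n):
                prob *= 1.0 - (1.0 - ri) ** ni
            return prob

        def greedy_allocation(r, cost, budget):
            n = [1] * len(r)                      # start with one of each component
            spent = sum(cost)
            while True:
                base = system_reliability(r, n)
                candidates = []
                for i, ci in enumerate(cost):
                    if spent + ci <= budget:
                        n[i] += 1
                        candidates.append(((system_reliability(r, n) - base) / ci, i))
                        n[i] -= 1
                if not candidates:
                    return n, system_reliability(r, n), spent
                _, best = max(candidates)         # spare with the best gain per unit cost
                n[best] += 1
                spent += cost[best]

        r = [0.90, 0.95, 0.85]       # invented component reliabilities
        cost = [4.0, 6.0, 3.0]       # invented component costs
        print(greedy_allocation(r, cost, budget=30.0))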

  11. Preliminary Analysis of LORAN-C System Reliability for Civil Aviation.

    DTIC Science & Technology

    1981-09-01

    overview of the analysis technique. Section 3 describes the computerized LORAN-C coverage model which is used extensively in the reliability analysis ... Xth Plenary Assembly, Geneva, 1963, published by the International Telecommunications Union. Braff, R., Computer Program to Calculate a Markov Chain Reliability Model, unpublished work, MITRE Corporation.

  12. Expert system for UNIX system reliability and availability enhancement

    NASA Astrophysics Data System (ADS)

    Xu, Catherine Q.

    1993-02-01

    Highly reliable and available systems are critical to the airline industry. However, most off-the-shelf computer operating systems and hardware do not have built-in fault tolerant mechanisms, the UNIX workstation is one example. In this research effort, we have developed a rule-based Expert System (ES) to monitor, command, and control a UNIX workstation system with hot-standby redundancy. The ES on each workstation acts as an on-line system administrator to diagnose, report, correct, and prevent certain types of hardware and software failures. If a primary station is approaching failure, the ES coordinates the switch-over to a hot-standby secondary workstation. The goal is to discover and solve certain fatal problems early enough to prevent complete system failure from occurring and therefore to enhance system reliability and availability. Test results show that the ES can diagnose all targeted faulty scenarios and take desired actions in a consistent manner regardless of the sequence of the faults. The ES can perform designated system administration tasks about ten times faster than an experienced human operator. Compared with a single workstation system, our hot-standby redundancy system downtime is predicted to be reduced by more than 50 percent by using the ES to command and control the system.

  13. Expert System for UNIX System Reliability and Availability Enhancement

    NASA Technical Reports Server (NTRS)

    Xu, Catherine Q.

    1993-01-01

    Highly reliable and available systems are critical to the airline industry. However, most off-the-shelf computer operating systems and hardware do not have built-in fault tolerant mechanisms, the UNIX workstation is one example. In this research effort, we have developed a rule-based Expert System (ES) to monitor, command, and control a UNIX workstation system with hot-standby redundancy. The ES on each workstation acts as an on-line system administrator to diagnose, report, correct, and prevent certain types of hardware and software failures. If a primary station is approaching failure, the ES coordinates the switch-over to a hot-standby secondary workstation. The goal is to discover and solve certain fatal problems early enough to prevent complete system failure from occurring and therefore to enhance system reliability and availability. Test results show that the ES can diagnose all targeted faulty scenarios and take desired actions in a consistent manner regardless of the sequence of the faults. The ES can perform designated system administration tasks about ten times faster than an experienced human operator. Compared with a single workstation system, our hot-standby redundancy system downtime is predicted to be reduced by more than 50 percent by using the ES to command and control the system.

  14. Design of nodes for embedded and ultra low-power wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Xu, Jun; You, Bo; Cui, Juan; Ma, Jing; Li, Xin

    2008-10-01

    Sensor networks integrate sensor technology, MEMS (Micro-Electro-Mechanical Systems) technology, embedded computing, wireless communication technology, and distributed information management technology. They are of great value in places that are very difficult for humans to reach. Power consumption and size are the most important considerations when nodes are designed for a distributed WSN (wireless sensor network). Consequently, it is of great importance to decrease the size of a node, reduce its power consumption, and extend its life in the network. WSN nodes have been designed using the JN5121-Z01-M01 module produced by Jennic and IEEE 802.15.4/ZigBee technology. New features include support for CPU sleep modes and a long-term ultra-low-power sleep mode for the entire node. In the low-power configuration the node resembles existing small low-power nodes. An embedded temperature sensor node has been developed to verify and explore our architecture. The experimental results indicate that the WSN offers high reliability, good stability, and ultra-low power consumption.

  15. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
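
    The Python sketch below shows the core of the interaction-entropy estimate as described in the record: the entropic term is obtained from the fluctuations of the protein-ligand interaction energy along a trajectory, -T*dS = kT ln<exp(beta*(E_int - <E_int>))>. The "trajectory" here is synthetic Gaussian data rather than a real MD simulation, and the Gaussian closed form is included only as a sanity check.

        # Interaction-entropy estimate from a series of interaction energies (synthetic data).
        import numpy as np

        def interaction_entropy(e_int_kcal, temperature=300.0):
            k_b = 0.0019872041                       # Boltzmann constant, kcal/(mol K)
            kt = k_b * temperature
            fluct = e_int_kcal - e_int_kcal.mean()   # instantaneous fluctuation of E_int
            return kt * np.log(np.mean(np.exp(fluct / kt)))   # -T * dS, in kcal/mol

        rng = np.random.default_rng(42)
        e_int = rng.normal(loc=-45.0, scale=2.5, size=20000)  # synthetic E_int samples (kcal/mol)
        print(f"-T*dS ≈ {interaction_entropy(e_int):.2f} kcal/mol")

        # For Gaussian fluctuations this reduces to sigma^2 / (2 kT), a handy sanity check:
        print(f"Gaussian estimate ≈ {2.5**2 / (2 * 0.0019872041 * 300.0):.2f} kcal/mol")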

  16. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  17. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
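
    A toy Monte Carlo version of the reliability-based design idea follows: sample the uncertain plant parameters, estimate the probability of violating the design requirements for each candidate gain, and pick the gain with the smallest estimated violation probability. The plant, requirement bounds, and parameter distributions are invented, and the paper's hybrid sampling/asymptotic machinery and failure-domain enlargement are not reproduced.

        # Toy reliability-based gain selection under parametric uncertainty (illustrative only).
        import numpy as np

        rng = np.random.default_rng(7)
        N = 20000
        a = rng.normal(1.0, 0.3, N)        # uncertain plant pole parameter
        b = rng.normal(2.0, 0.4, N)        # uncertain input gain

        def violation_probability(k):
            """Plant x' = -a x + b u with u = -k x; closed-loop pole is -(a + b k).
            Invented requirements: decay faster than 2 rad/s but bandwidth below 20 rad/s."""
            pole = a + b * k
            violated = (pole < 2.0) | (pole > 20.0)
            return violated.mean()

        gains = np.linspace(0.0, 10.0, 101)
        probs = [violation_probability(k) for k in gains]
        k_best = gains[int(np.argmin(probs))]
        print(f"best gain ≈ {k_best:.2f}, estimated P(requirement violation) ≈ {min(probs):.4f}")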

  18. Multiferroic nanomagnetic logic: Hybrid spintronics-straintronic paradigm for ultra-low energy computing

    NASA Astrophysics Data System (ADS)

    Salehi Fashami, Mohammad

    Excessive energy dissipation in CMOS devices during switching is the primary threat to continued downscaling of computing devices in accordance with Moore's law. In the quest for alternatives to traditional transistor based electronics, nanomagnet-based computing [1, 2] is emerging as an attractive alternative since: (i) nanomagnets are intrinsically more energy-efficient than transistors due to the correlated switching of spins [3], and (ii) unlike transistors, magnets have no leakage and hence have no standby power dissipation. However, large energy dissipation in the clocking circuit appears to be a barrier to the realization of ultra low power logic devices with such nanomagnets. To alleviate this issue, we propose the use of a hybrid spintronics-straintronics or straintronic nanomagnetic logic (SML) paradigm. This uses a piezoelectric layer elastically coupled to an elliptically shaped magnetostrictive nanomagnetic layer for both logic [4-6] and memory [7-8] and other information processing [9-10] applications that could potentially be 2-3 orders of magnitude more energy efficient than current CMOS based devices. This dissertation focuses on studying the feasibility, performance and reliability of such nanomagnetic logic circuits by simulating the nanoscale magnetization dynamics of dipole coupled nanomagnets clocked by stress. Specifically, the topics addressed are: 1. Theoretical study of multiferroic nanomagnetic arrays laid out in specific geometric patterns to implement a "logic wire" for unidirectional information propagation and a universal logic gate [4-6]. 2. Monte Carlo simulations of the magnetization trajectories in a simple system of dipole coupled nanomagnets and NAND gate described by the Landau-Lifshitz-Gilbert (LLG) equations simulated in the presence of random thermal noise to understand the dynamics switching error [11, 12] in such devices. 3. Arriving at a lower bound for energy dissipation as a function of switching error [13] for a

  19. Hawaii Electric System Reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loose, Verne William; Silva Monroy, Cesar Augusto

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  20. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  1. Formal design and verification of a reliable computing platform for real-time control. Phase 2: Results

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.

    1992-01-01

    The design and formal verification of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, is presented. The RCP uses N-Modular Redundancy (NMR) style redundancy to mask faults and internal majority voting to flush the effects of transient faults. The system is formally specified and verified using the Ehdm verification system. A major goal of this work is to provide the system with significant capability to withstand the effects of High Intensity Radiated Fields (HIRF).

  2. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    PubMed

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K

    2011-04-01

    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual
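
    For readers unfamiliar with the agreement statistic quoted above, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) from a subjects-by-readers matrix; the ICC variant, the synthetic reader data and the noise level are assumptions for illustration and are not taken from either study.

      import numpy as np

      def icc_2_1(x):
          """ICC(2,1): two-way random effects, absolute agreement, single measurement.
          x is an (n_subjects, k_readers) array of repeated measurements."""
          n, k = x.shape
          grand = x.mean()
          ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subjects
          ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-readers
          ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
          ms_r = ss_rows / (n - 1)
          ms_c = ss_cols / (k - 1)
          ms_e = ss_err / ((n - 1) * (k - 1))
          return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

      rng = np.random.default_rng(1)
      true_rotation = rng.normal(8.0, 3.0, size=(30, 1))           # 30 motion segments (degrees)
      readings = true_rotation + rng.normal(0, 0.3, size=(30, 3))  # 3 readers, small reading error
      print(f"ICC(2,1) = {icc_2_1(readings):.3f}")                 # approaches 1 as reading error shrinks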

  3. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
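
    The quoted 0.993 can be reproduced with a one-line tail-probability calculation if the lifetime distribution is treated as normal with the stated mean and standard deviation; the record does not name the distribution family, so normality is an assumption here.

      from math import erf, sqrt

      mean, sigma, goal = 7.7, 1.1, 5.0              # years, from the record
      z = (goal - mean) / sigma                      # standardized design goal
      p_survive = 0.5 * (1.0 - erf(z / sqrt(2.0)))   # P(lifetime > goal) = 1 - Phi(z)
      print(f"z = {z:.2f}, P(lifetime > {goal:g} yr) = {p_survive:.3f}")   # ~0.993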

  4. System life and reliability modeling for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Brikmanis, C. K.

    1986-01-01

    A computer program which simulates life and reliability of helicopter transmissions is presented. The helicopter transmissions may be composed of spiral bevel gear units and planetary gear units - alone, in series or in parallel. The spiral bevel gear units may have either single or dual input pinions, which are identical. The planetary gear units may be stepped or unstepped and the number of planet gears carried by the planet arm may be varied. The reliability analysis used in the program is based on the Weibull distribution lives of the transmission components. The computer calculates the system lives and dynamic capacities of the transmission components and the transmission. The system life is defined as the life of the component or transmission at an output torque at which the probability of survival is 90 percent. The dynamic capacity of a component or transmission is defined as the output torque which can be applied for one million output shaft cycles for a probability of survival of 90 percent. A complete summary of the life and dynamic capacity results is produced by the program.
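
    A key step in this kind of program is rolling Weibull-distributed component lives up into a system life at 90 percent survival probability. The sketch below shows the standard strict-series form of that calculation; the component lives, Weibull slopes and simple bisection solver are illustrative assumptions rather than the program's actual data or internals.

      import numpy as np

      def system_l10(l10, slopes):
          """L10 life of a strict-series system of Weibull components.
          Component i: R_i(L) = 0.9 ** (L / l10[i]) ** slopes[i], i.e. a two-parameter
          Weibull written in terms of its 90%-survival life. The system survives only
          if every component survives, so solve prod_i R_i(L) = 0.9 for L."""
          l10 = np.asarray(l10, float)
          slopes = np.asarray(slopes, float)

          def r_sys(life):
              return np.prod(0.9 ** (life / l10) ** slopes)

          lo, hi = 0.0, l10.min()          # system L10 cannot exceed the weakest component's
          for _ in range(200):             # simple bisection on r_sys(L) = 0.9
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if r_sys(mid) > 0.9 else (lo, mid)
          return 0.5 * (lo + hi)

      # hypothetical spiral bevel mesh, two bearings, one planetary stage (lives in hours)
      print(f"system L10 = {system_l10([9000, 15000, 15000, 12000], [2.5, 1.5, 1.5, 2.5]):.0f} h")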

  5. Laser System Reliability

    DTIC Science & Technology

    1977-03-01

    system acquisition cycle since they provide necessary inputs to comparative analyses, cost/benefit trade-offs, and system simulations. In addition, the...Management Program from above performs the function of analyzing the system trade-offs with respect to reliability to determine a reliability goal...one encounters the problem of comparing present dollars with future dollars. In this analysis, we are trading off costs expended initially (or at

  6. Hawaii electric system reliability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  7. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
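
    SRFYDO itself is not sketched here, but the core idea of combining component test data into a system-level Bayesian reliability estimate can be illustrated in a few lines. The sketch below assumes a plain series structure, hypothetical pass/fail test counts and Beta(1, 1) priors, and propagates posterior samples up to the system level; SRFYDO's actual model (aging, lifecycle covariates, flight data) is considerably richer.

      import numpy as np

      rng = np.random.default_rng(2)

      # hypothetical pass/fail test data for three components in a series system
      tests = {"igniter": (48, 50), "valve": (29, 30), "controller": (97, 100)}  # (successes, trials)

      # Beta(1, 1) prior updated by binomial data gives a Beta(s + 1, n - s + 1) posterior
      samples = []
      for successes, trials in tests.values():
          samples.append(rng.beta(successes + 1, trials - successes + 1, size=100_000))

      system = np.prod(samples, axis=0)   # series system: all components must work
      lo, med, hi = np.percentile(system, [5, 50, 95])
      print(f"system reliability ~ {med:.3f}  (90% credible interval {lo:.3f} - {hi:.3f})")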

  8. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
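
    The reliability benefit of the triplex-to-duplex-to-simplex reconfiguration described above can be illustrated with a small Markov-chain calculation. The sketch below assumes a single per-channel failure rate, perfect fault detection and coverage, and no common-cause failures; none of these figures come from the ARCS study itself.

      import numpy as np

      lam = 1e-4                       # per-channel failure rate (1/h), assumed
      dt, horizon = 0.1, 10_000.0      # time step and mission length (hours)

      # states: 0 = triplex, 1 = duplex, 2 = simplex, 3 = failed (perfect coverage assumed)
      p = np.array([1.0, 0.0, 0.0, 0.0])
      rates = np.array([3 * lam, 2 * lam, 1 * lam])   # exit rate from states 0, 1, 2

      for _ in range(int(horizon / dt)):
          flow = rates * p[:3] * dt    # probability mass moving one state "down" per step
          p[:3] -= flow
          p[1:] += flow

      simplex_only = np.exp(-lam * horizon)           # single non-redundant channel, for comparison
      print(f"P(reconfigurable system operational) = {1 - p[3]:.4f}")
      print(f"P(simplex channel operational)       = {simplex_only:.4f}")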

  9. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    NASA Technical Reports Server (NTRS)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  10. CCARES: A computer algorithm for the reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1993-01-01

    Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.

  11. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Weatherbee, J. E.; Taylor, D. S.

    1972-01-01

    A deterministic digital simulation model is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Use of the model as a tool in configuring a minimum computer system for a typical mission is demonstrated. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources, i.e., the configuration derived is a minimal one. Other considerations such as increased reliability through the use of standby spares would be taken into account in the definition of a practical system for a given mission.

  12. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A Central Control Element (CCE) module which controls the Automatically Reconfigurable Modular System (ARMS) and allows both redundant processing and multi-computing in the same computer with real time mode switching, is discussed. The same hardware is used for either reliability enhancement, speed enhancement, or for a combination of both.

  13. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  14. Reliability of a computer-based system for measuring visual performance skills.

    PubMed

    Erickson, Graham B; Citek, Karl; Cove, Michelle; Wilczek, Jennifer; Linster, Carolyn; Bjarnason, Brendon; Langemo, Nathan

    2011-09-01

    Athletes have demonstrated better visual abilities than nonathletes. A vision assessment for an athlete should include methods to evaluate the quality of visual performance skills in the most appropriate, accurate, and repeatable manner. This study determines the reliability of the visual performance measures assessed with a computer-based system, known as the Nike Sensory Station. One hundred twenty-five subjects (56 men, 69 women), age 18 to 30, completed Phase I of the study. Subjects attended 2 sessions, separated by at least 1 week, in which identical protocols were followed. Subjects completed the following assessments: Visual Clarity, Contrast Sensitivity, Depth Perception, Near-Far Quickness, Target Capture, Perception Span, Eye-Hand Coordination, Go/No Go, and Reaction Time. An additional 36 subjects (20 men, 16 women), age 22 to 35, completed Phase II of the study involving modifications to the equipment, instructions, and protocols from Phase I. Results show no significant change in performance over time on assessments of Visual Clarity, Contrast Sensitivity, Depth Perception, Target Capture, Perception Span, and Reaction Time. Performance did improve over time for Near-Far Quickness, Eye-Hand Coordination, and Go/No Go. The results of this study show that many of the Nike Sensory Station assessments show repeatability and no learning effect over time. The measures that did improve across sessions show an expected learning effect caused by the motor response characteristics being measured. Copyright © 2011 American Optometric Association. Published by Elsevier Inc. All rights reserved.

  15. Reliable Acquisition of RAM Dumps from Intel-Based Apple Mac Computers over FireWire

    NASA Astrophysics Data System (ADS)

    Gladyshev, Pavel; Almansoori, Afrah

    RAM content acquisition is an important step in live forensic analysis of computer systems. FireWire offers an attractive way to acquire RAM content of Apple Mac computers equipped with a FireWire connection. However, the existing techniques for doing so require substantial knowledge of the target computer configuration and cannot be used reliably on a previously unknown computer in a crime scene. This paper proposes a novel method for acquiring RAM content of Apple Mac computers over FireWire, which automatically discovers necessary information about the target computer and can be used in the crime scene setting. As an application of the developed method, the techniques for recovery of AOL Instant Messenger (AIM) conversation fragments from RAM dumps are also discussed in this paper.

  16. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.

  17. Reliability of a computer and Internet survey (Computer User Profile) used by adults with and without traumatic brain injury (TBI).

    PubMed

    Kilov, Andrea M; Togher, Leanne; Power, Emma

    2015-01-01

    To determine test-retest reliability of the 'Computer User Profile' (CUP) in people with and without TBI. The CUP was administered on two occasions to people with and without TBI. The CUP investigated the nature and frequency of participants' computer and Internet use. Intra-class correlation coefficients and kappa coefficients were computed to measure the reliability of individual CUP items. Descriptive statistics were used to summarize the content of responses. Sixteen adults with TBI and 40 adults without TBI were included in the study. All participants were reliable in reporting demographic information, frequency of social communication and leisure activities and computer/Internet habits and usage. Adults with TBI were reliable in 77% of their responses to survey items. Adults without TBI were reliable in 88% of their responses to survey items. The CUP was practical and valuable in capturing information about social, leisure, communication and computer/Internet habits of people with and without TBI. Adults without TBI scored more items with satisfactory reliability overall in their surveys. Future studies may include larger samples and could also include an exploration of how people with/without TBI use other digital communication technologies. This may provide further information on determining technology readiness for people with TBI in therapy programmes.

  18. MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems.

    PubMed

    Pannuto, Pat; Lee, Yoonmyung; Kuo, Ye-Sheng; Foo, ZhiYoong; Kempke, Benjamin; Kim, Gyouho; Dreslinski, Ronald G; Blaauw, David; Dutta, Prabal

    2015-06-01

    As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized, yet reusable, components with an interconnect that permits tiny, ultra-low power systems. In contrast to today's interconnects which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead. We present MBus, a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two "shoot-through" rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient's power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts. To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm³ MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus's feature set.

  19. [An ultra-low power, wearable, long-term ECG monitoring system with mass storage].

    PubMed

    Liu, Na; Chen, Yingmin; Zhang, Wenzan; Luo, Zhangyuan; Jin, Xun; Ying, Weihai

    2012-01-01

    In this paper, we describe an ultra-low power, wearable ECG system capable of long-term monitoring and mass storage. The system is based on the Microchip PIC18F27J13, chosen for its high level of integration and low power consumption. Communication with the micro-SD card is achieved through the SPI bus. Through USB, the device can be connected to a computer for replay and disease diagnosis. Given its low power consumption, lithium cells can support continuous ECG acquisition and storage for up to 15 days. The wearable electrodes avoid the pain and possible risks of implantation, and the small size of the system makes long-term wear practical for patients, meeting the needs of long-term dynamic monitoring and mass storage.

  20. Feasibility of an ultra-low power digital signal processor platform as a basis for a fully implantable brain-computer interface system.

    PubMed

    Wang, Po T; Gandasetiawan, Keulanna; McCrimmon, Colin M; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H

    2016-08-01

    A fully implantable brain-computer interface (BCI) can be a practical tool to restore independence to those affected by spinal cord injury. We envision that such a BCI system will invasively acquire brain signals (e.g. electrocorticogram) and translate them into control commands for external prostheses. The feasibility of such a system was tested by implementing its benchtop analogue, centered around a commercial, ultra-low power (ULP) digital signal processor (DSP, TMS320C5517, Texas Instruments). A suite of signal processing and BCI algorithms, including (de)multiplexing, Fast Fourier Transform, power spectral density, principal component analysis, linear discriminant analysis, Bayes rule, and finite state machine was implemented and tested in the DSP. The system's signal acquisition fidelity was tested and characterized by acquiring harmonic signals from a function generator. In addition, the BCI decoding performance was tested, first with signals from a function generator, and subsequently using human electroencephalogram (EEG) during an eyes opening and closing task. On average, the system spent 322 ms to process and analyze 2 s of data. Crosstalk (< -65 dB) and harmonic distortion (~1%) were minimal. Timing jitter averaged 49 μs per 1000 ms. The online BCI decoding accuracies were 100% for both function generator and EEG data. These results show that a complex BCI algorithm can be executed on an ULP DSP without compromising performance. This suggests that the proposed hardware platform may be used as a basis for future, fully implantable BCI systems.
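
    The decoding chain summarized above (FFT, power spectral density, dimensionality reduction, classification) can be mimicked on synthetic data in a few lines. The sketch below is not the authors' DSP implementation: it uses an assumed 256 Hz sampling rate, 2 s windows as in the record, an alpha-band power feature, and a plain calibrated threshold standing in for the PCA/LDA/Bayes stage.

      import numpy as np

      fs, dur = 256, 2.0                  # sampling rate (assumed) and 2 s windows (from the record)
      rng = np.random.default_rng(3)

      def synth_eeg(eyes_closed, n=int(fs * dur)):
          """Synthetic single-channel EEG: broadband noise plus a 10 Hz alpha rhythm when eyes are closed."""
          t = np.arange(n) / fs
          x = rng.normal(0.0, 1.0, n)
          if eyes_closed:
              x += 2.0 * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
          return x

      def alpha_power(x):
          """Band power in 8-12 Hz from a periodogram-style PSD estimate."""
          psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
          freqs = np.fft.rfftfreq(len(x), d=1 / fs)
          return psd[(freqs >= 8) & (freqs <= 12)].sum()

      # calibrate a decision threshold from labelled windows, then decode unseen windows
      train_powers = [(alpha_power(synth_eeg(c)), c) for c in [False, True] * 20]
      thresh = np.mean([p for p, _ in train_powers])
      hits = sum((alpha_power(synth_eeg(c)) > thresh) == c for c in [False, True] * 50)
      print(f"decoding accuracy on synthetic data: {hits / 100:.2%}")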

  1. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects in order to implement both the autonomy of and the cooperation between nodes are developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  2. Versatile, low-cost, computer-controlled, sample positioning system for vacuum applications

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1991-01-01

    A versatile, low-cost, easy to implement, microprocessor-based motorized positioning system (MPS) suitable for accurate sample manipulation in a Secondary Ion Mass Spectrometry (SIMS) system, and for other ultra-high vacuum (UHV) applications, was designed and built at NASA LeRC. The system can be operated manually or under computer control. In the latter case, local as well as remote operation is possible via the IEEE-488 bus. The position of the sample can be controlled in three orthogonal linear coordinates and one angular coordinate.

  3. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new software models of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability models to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool, with better models, would greatly add value in assessing GSFC projects.

  4. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  5. Theory of reliable systems. [systems analysis and design

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1973-01-01

    The analysis and design of reliable systems are discussed. The attributes of system reliability studied are fault tolerance, diagnosability, and reconfigurability. Objectives of the study include: to determine properties of system structure that are conducive to a particular attribute; to determine methods for obtaining reliable realizations of a given system; and to determine how properties of system behavior relate to the complexity of fault tolerant realizations. A list of 34 references is included.

  6. Field Evaluation of Ultra-High Pressure Water Systems for Runway Rubber Removal

    DTIC Science & Technology

    2014-04-01

    ERDC/GSL TR-14-11, Field Evaluation of Ultra-High Pressure Water Systems for Runway Rubber Removal, Geotechnical and Str..., Aaron B. Pullen, Applied Research Associates, Inc., 421 Oak Avenue...collaboration with Applied Research Associates, Inc. (ARA). Several types of commercial UHPW water blasting systems were tested on an ungrooved portland cement

  7. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.
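
    The search algorithms themselves are out of scope here, but the objective pair they explore is easy to state. The sketch below enumerates redundancy allocations for a small, made-up three-subsystem series system and lists the non-dominated (reliability, cost) points, i.e. the kind of Pareto front that NSGA-II or the proposed hybrid would be expected to approximate on a much larger problem.

      import itertools
      import numpy as np

      # hypothetical 3-subsystem series system: component reliability and unit cost per subsystem
      r = np.array([0.90, 0.85, 0.95])
      c = np.array([4.0, 6.0, 3.0])

      def evaluate(alloc):
          """alloc[i] = number of parallel components in subsystem i."""
          alloc = np.asarray(alloc)
          reliability = np.prod(1.0 - (1.0 - r) ** alloc)   # series chain of parallel groups
          cost = float((c * alloc).sum())
          return reliability, cost

      # enumerate small allocations and keep only the non-dominated (Pareto) points
      points = [(evaluate(a), a) for a in itertools.product(range(1, 5), repeat=3)]
      pareto = [((rel, cost), a) for (rel, cost), a in points
                if not any(r2 >= rel and c2 <= cost and (r2, c2) != (rel, cost)
                           for (r2, c2), _ in points)]
      for (rel, cost), a in sorted(pareto, key=lambda p: p[0][1]):
          print(f"allocation {a}: reliability {rel:.4f}, cost {cost:.1f}")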

  8. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  9. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  10. An integrated approach to system design, reliability, and diagnosis

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Iverson, David L.

    1990-01-01

    The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.

  11. An integrated approach to system design, reliability, and diagnosis

    NASA Astrophysics Data System (ADS)

    Patterson-Hine, F. A.; Iverson, David L.

    1990-12-01

    The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.

  12. Reliability analysis of redundant systems. [a method to compute transition probabilities

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y.

    1974-01-01

    A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of system geometry, load direction, and degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation in the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.

  13. MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems

    PubMed Central

    Pannuto, Pat; Lee, Yoonmyung; Kuo, Ye-Sheng; Foo, ZhiYoong; Kempke, Benjamin; Kim, Gyouho; Dreslinski, Ronald G.; Blaauw, David; Dutta, Prabal

    2015-01-01

    As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized—yet reusable—components with an interconnect that permits tiny, ultra-low power systems. In contrast to today’s interconnects which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead. We present MBus, a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two “shoot-through” rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient’s power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts. To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm3 MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus’s feature set. PMID:26855555

  14. Ultra-Low-Dropout Linear Regulator

    NASA Technical Reports Server (NTRS)

    Thornton, Trevor; Lepkowski, William; Wilk, Seth

    2011-01-01

    A radiation-tolerant, ultra-low-dropout linear regulator can operate between -150 and 150 C. Prototype components were demonstrated to be performing well after a total ionizing dose of 1 Mrad (Si). Unlike existing components, the linear regulator developed during this activity is unconditionally stable over all operating regimes without the need for an external compensation capacitor. The absence of an external capacitor reduces overall system mass/volume, increases reliability, and lowers cost. Linear regulators generate a precisely controlled voltage for electronic circuits regardless of fluctuations in the load current that the circuit draws from the regulator.

  15. Bulk electric system reliability evaluation incorporating wind power and demand side management

    NASA Astrophysics Data System (ADS)

    Huang, Dange

    Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation, particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. It has become a practical and viable technique for large system reliability assessment due to the growth in computing power, and it is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research incorporates load forecast uncertainty in bulk electric system reliability assessment and examines the effects on system, load point and well-being indices and on reliability index probability distributions. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed
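
    As a flavour of what a sequential Monte Carlo adequacy run involves, the sketch below simulates chronological hourly up/down transitions for an entirely hypothetical generating system against a constant load and counts loss-of-load hours; a real study of the kind described above adds chronological load curves, load forecast uncertainty, wind models and many more sample years.

      import numpy as np

      rng = np.random.default_rng(4)

      n_units, unit_cap = 12, 50.0       # twelve identical 50 MW units (hypothetical)
      lam, mu = 1 / 1500.0, 1 / 50.0     # hourly failure and repair transition probabilities
      load = 450.0                       # constant demand (MW); real studies use a load curve
      hours, years = 8760, 50            # sample years of chronological simulation

      lole_hours = 0
      for _ in range(years):
          up = np.ones(n_units, dtype=bool)
          for _ in range(hours):
              fail = rng.random(n_units) < lam
              repair = rng.random(n_units) < mu
              up = np.where(up, ~fail, repair)       # two-state unit model, hourly transitions
              if up.sum() * unit_cap < load:
                  lole_hours += 1

      print(f"estimated LOLE ~ {lole_hours / years:.1f} h/yr")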

  16. An assessment of the real-time application capabilities of the SIFT computer system

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  17. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

    This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME), Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  18. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

    Improving load shedding on board the space station is one of the goals of aerospace power system automation. Several constraints must be considered to accelerate the optimum load-shedding functions. These constraints include the congestion margin determined by weighted contingency probabilities, component/system reliability indices, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is determined based on the priority, value and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle the expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. It assists in selecting which feeder load to shed, along with the location, value and priority of that load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads and a network.

  19. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
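
    The MTTR/MTTF argument can be made concrete with a back-of-envelope availability calculation. The node figures below are invented for illustration; the point is only that a cheaper, less reliable node, once duplicated (assuming independent failures and ideal failover), can beat the availability of a far more expensive one.

      def availability(mttf_h, mttr_h):
          return mttf_h / (mttf_h + mttr_h)

      # hypothetical nodes: an expensive high-MTTF node vs. a cheap low-MTTF node used as a pair
      single_good = availability(mttf_h=50_000, mttr_h=24)
      single_cheap = availability(mttf_h=5_000, mttr_h=24)
      dual_cheap = 1 - (1 - single_cheap) ** 2       # dual modular redundancy, independent failures

      for name, a in [("high-MTTF node", single_good),
                      ("cheap node", single_cheap),
                      ("cheap node, duplicated", dual_cheap)]:
          print(f"{name:>24}: availability {a:.6f}  (~{(1 - a) * 8760:.1f} h downtime/yr)")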

  20. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  1. Definition and trade-off study of reconfigurable airborne digital computer system organizations

    NASA Technical Reports Server (NTRS)

    Conn, R. B.

    1974-01-01

    A highly reliable, fault-tolerant, reconfigurable computer system for aircraft applications was developed. The development and application of reliability and fault-tolerance assessment techniques are described. Particular emphasis is placed on the needs of an all-digital, fly-by-wire control system appropriate for a passenger-carrying airplane.

  2. The computational challenges of Earth-system science.

    PubMed

    O'Neill, Alan; Steenman-Clark, Lois

    2002-06-15

    The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.

  3. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  4. Fatigue criterion to system design, life and reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.

    1985-01-01

    A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed lives of the elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing life and component or structural renewal rates to be predicted with reasonable statistical certainty.
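
    For context (an editorial note; the record does not quote its equations), in a Weibull/Lundberg-Palmgren framework of the kind cited, each elemental stress volume or component carries a Weibull survival probability, and the element lives combine into a system life at the same survival level through the strict-series relation

        S_i(N) \;=\; \exp\!\left[-\left(\frac{N}{\eta_i}\right)^{e}\right],
        \qquad
        S_{\mathrm{sys}} \;=\; \prod_i S_i
        \;\;\Longrightarrow\;\;
        L_{\mathrm{sys}}^{-e} \;=\; \sum_i L_i^{-e},

    where e is the Weibull slope, \eta_i the characteristic life of element i, and L_i its life at the chosen survival probability.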

  5. Effects of computing time delay on real-time control systems

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed with which control algorithms are executed, because computing time delay degrades control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both self-tuning predicted (STP) control and proportional-integral-derivative (PID) control are applied to the problem of tracking robot trajectories, and the effects of computing time delay on their control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
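
    The delay problem can be illustrated with a small editorial simulation (not the paper's STP/PID robot-trajectory study): a proportional controller whose output reaches a first-order plant d samples late, with tracking error accumulating as the delay grows.

        # Editorial sketch: effect of a computing delay of d samples on a proportional
        # controller tracking a unit step with the first-order plant x[k+1] = a*x[k] + b*u[k-d].

        def tracking_sse(delay_samples: int, steps: int = 200) -> float:
            a, b, kp, ref = 0.9, 0.1, 4.0, 1.0
            x = 0.0
            pending = [0.0] * delay_samples      # control outputs still "being computed"
            sse = 0.0
            for _ in range(steps):
                pending.append(kp * (ref - x))   # control law uses the current measurement
                u_applied = pending.pop(0)       # but its output reaches the plant d samples late
                x = a * x + b * u_applied
                sse += (ref - x) ** 2
            return sse

        for d in (0, 1, 2, 4):                   # longer delays degrade tracking and can
            print(f"d = {d}: SSE = {tracking_sse(d):.1f}")   # eventually destabilize the loop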

  6. Superior model for fault tolerance computation in designing nano-sized circuit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my

    2014-10-24

    As CMOS technology scales into the nanometre regime, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, by using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
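
    The PGM idea can be illustrated with a short editorial sketch (not the Matlab tool described in the record): each gate flips its nominal output with probability eps, and signal reliability is propagated gate by gate through an inverter chain.

        # Editorial illustration of a probabilistic gate model (PGM) recursion: each
        # inverter in a chain flips its nominal output with probability eps; signal
        # reliability is the probability the line carries the logically correct value.

        def pgm_inverter_chain(eps: float, depth: int, p_in: float = 1.0) -> float:
            p = p_in
            for _ in range(depth):
                # Output is correct if (input correct and gate fault-free) or
                # (input wrong and the gate error happens to flip it back).
                p = p * (1.0 - eps) + (1.0 - p) * eps
            return p

        for depth in (1, 5, 10, 50):
            print(f"depth {depth:3d}: reliability = {pgm_inverter_chain(0.01, depth):.4f}")
        # Reliability decays toward 0.5 as depth grows, consistent with the record's
        # observation that the measure depends on circuit structure as well as gate error.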

  7. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX

  8. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, denoted T_c) of a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c of the VPLM is generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of T_c are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the T_c of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the value predicted by the theoretical formula.
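
    One plausible way to estimate such a reliable computation time is sketched below (an editorial illustration, not necessarily the authors' procedure; it assumes the mpmath library for an extended-precision reference trajectory).

        # Editorial sketch: estimate a reliable computation time T_c of the logistic map
        # x_{n+1} = r*x_n*(1 - x_n) by iterating in double precision alongside a
        # 100-digit mpmath reference and counting steps until the two trajectories differ.
        from mpmath import mp, mpf

        def reliable_steps(r: float, x0: float, tol: float = 1e-3, nmax: int = 2000) -> int:
            mp.dps = 100                      # 100-digit reference computation
            x_double, x_ref = x0, mpf(x0)
            for n in range(1, nmax + 1):
                x_double = r * x_double * (1.0 - x_double)
                x_ref = mpf(r) * x_ref * (1 - x_ref)
                if abs(x_double - float(x_ref)) > tol:
                    return n                  # double precision is no longer reliable here
            return nmax

        print(reliable_steps(4.0, 0.2))       # typically a few tens of steps at r = 4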

  9. Interpretive Reliability of Six Computer-Based Test Interpretation Programs for the Minnesota Multiphasic Personality Inventory-2.

    PubMed

    Deskovitz, Mark A; Weed, Nathan C; McLaughlan, Joseph K; Williams, John E

    2016-04-01

    The reliability of six Minnesota Multiphasic Personality Inventory-Second edition (MMPI-2) computer-based test interpretation (CBTI) programs was evaluated across a set of 20 commonly appearing MMPI-2 profile codetypes in clinical settings. Evaluation of CBTI reliability comprised examination of (a) interrater reliability, the degree to which raters arrive at similar inferences based on the same CBTI profile and (b) interprogram reliability, the level of agreement across different CBTI systems. Profile inferences drawn by four raters were operationalized using q-sort methodology. Results revealed no significant differences overall with regard to interrater and interprogram reliability. Some specific CBTI/profile combinations (e.g., the CBTI by Automated Assessment Associates on a within normal limits profile) and specific profiles (e.g., the 4/9 profile displayed greater interprogram reliability than the 2/4 profile) were interpreted with variable consensus (α range = .21-.95). In practice, users should consider that certain MMPI-2 profiles are interpreted more or less consensually and that some CBTIs show variable reliability depending on the profile. © The Author(s) 2015.

  10. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  11. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  12. An integrated approach to system design, reliability, and diagnosis

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Iverson, David L.

    1990-01-01

    The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms.

  13. Techniques for modeling the reliability of fault-tolerant systems with the Markov state-space approach

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1995-01-01

    This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
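
    As an editorial illustration of the Markov state-space approach the tutorial describes (not an excerpt from SURE, ASSIST, STEM, or PAWS), the sketch below integrates the Chapman-Kolmogorov equations of a non-repairable triplex (TMR) processor model and compares the result with the closed-form unreliability.

        # Editorial example: non-repairable triad (TMR) Markov model with per-processor
        # failure rate lam.  States: 3 working -> 2 working -> system failed.
        import math

        def tmr_unreliability(lam: float, t: float, steps: int = 10_000) -> float:
            p3, p2, pf = 1.0, 0.0, 0.0
            dt = t / steps
            for _ in range(steps):                 # forward-Euler integration of the
                d3 = -3 * lam * p3                 # Chapman-Kolmogorov equations
                d2 = 3 * lam * p3 - 2 * lam * p2
                df = 2 * lam * p2
                p3, p2, pf = p3 + d3 * dt, p2 + d2 * dt, pf + df * dt
            return pf

        lam, t = 1e-4, 10.0                        # failures per hour, 10-hour mission
        numeric = tmr_unreliability(lam, t)
        closed_form = 1 - 3 * math.exp(-2 * lam * t) + 2 * math.exp(-3 * lam * t)
        print(f"numeric {numeric:.3e}  vs  closed form {closed_form:.3e}")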

  14. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    Processing and analysis of Earth remote sensing (ERS) data on board an ultra-small spacecraft is a relevant task, given the significant energy cost of transmitting data to the ground and the low performance of onboard computers. This raises the issue of effective and reliable storage of the overall information flow obtained from the onboard data-collection systems, including ERS data, in a specialized database. The paper considers the peculiarities of operating a database management system with a multilevel memory structure. For data storage, a format has been developed that describes the physical structure of the database and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values do not have to be stored separately. The paper presents the architecture of a relational database management system designed to be embedded in the onboard software of an ultra-small spacecraft. A database for storing various information, including ERS data, can be built with this database management system for subsequent processing. The suggested database management system architecture places low demands on the computing power and memory resources available on board the ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.

  15. Reliable High Performance Peta- and Exa-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and

  16. Ultra High Bypass Integrated System Test

    NASA Image and Video Library

    2015-09-14

    NASA’s Environmentally Responsible Aviation Project, in collaboration with the Federal Aviation Administration (FAA) and Pratt & Whitney, completed testing of an Ultra High Bypass Ratio Turbofan Model in the 9’ x 15’ Low Speed Wind Tunnel at NASA Glenn Research Center. The fan model is representative of the next generation of efficient and quiet Ultra High Bypass Ratio Turbofan Engine designs.

  17. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards. AGENCY: Federal Energy Regulatory ... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its ... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ¶ 61,213 (2011). The term ``Wide-Area ...

  18. Reliability and Cost Impacts for Attritable Systems

    DTIC Science & Technology

    2017-03-23

    ... and cost risk metrics to convey the value of reliability and reparability trades. Investigation of the benefit of trading system reparability shows a marked increase in cost risk. ... illustrates the benefit that reliability engineering can have on total cost. ... 2.3.1 Contexts of System Reliability: Hogge (2012) identifies two distinct ...

  19. Reliable actuators for twin rotor MIMO system

    NASA Astrophysics Data System (ADS)

    Rao, Vidya S.; V. I, George; Kamath, Surekha; Shreesha, C.

    2017-11-01

    The Twin Rotor MIMO System (TRMS) is a benchmark system used to test flight control algorithms. One of the perturbations on the TRMS which is likely to affect the control system is actuator failure. Therefore, there is a need for a reliable control system, which includes an H-infinity controller along with redundant actuators. Reliable control refers to the design of a control system that tolerates failures of a certain set of actuators or sensors while retaining desired control system properties. The output of the reliable controller has to be transferred to the redundant actuator effectively to keep the TRMS reliable even under an actual actuator failure.

  20. A data management system to enable urgent natural disaster computing

    NASA Astrophysics Data System (ADS)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision-making process. Getting the data to the required resources is a critical requirement for enabling the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to change, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline will render the computation less useful, resulting in a cost that can have severe

  1. Provable Transient Recovery for Frame-Based, Fault-Tolerant Computing Systems

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Butler, Ricky W.

    1992-01-01

    We present a formal verification of the transient fault recovery aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system architecture for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization accommodates a wide variety of voting schemes for purging the effects of transients.
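
    The voting idea can be illustrated with a toy sketch (an editorial addition; the RCP's voter is formally specified and verified, unlike this code): a majority vote masks a single faulty channel, and writing the voted value back purges the transient's effect.

        # Editorial toy example of NMR-style majority voting and transient purging.
        from collections import Counter

        def majority(values):
            """Return the value reported by a strict majority of redundant channels."""
            value, count = Counter(values).most_common(1)[0]
            return value if count > len(values) // 2 else None   # None = no majority

        channels = [42, 42, 7]          # one channel hit by a transient fault
        voted = majority(channels)
        if voted is not None:
            channels = [voted] * len(channels)   # overwrite state with the voted value
        print(voted, channels)          # 42 [42, 42, 42] -- transient effect purged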

  2. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX

  3. A highly reliable, autonomous data communication subsystem for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Masotto, Thomas; Alger, Linda

    1990-01-01

    The need to meet the stringent performance and reliability requirements of advanced avionics systems has frequently led to implementations which are tailored to a specific application and are therefore difficult to modify or extend. Furthermore, many integrated flight critical systems are input/output intensive. By using a design methodology which customizes the input/output mechanism for each new application, the cost of implementing new systems becomes prohibitively expensive. One solution to this dilemma is to design computer systems and input/output subsystems which are general purpose, but which can be easily configured to support the needs of a specific application. The Advanced Information Processing System (AIPS), currently under development, has these characteristics. The design and implementation of the prototype I/O communication system for AIPS is described. AIPS addresses reliability issues related to data communications by the use of reconfigurable I/O networks. When a fault or damage event occurs, communication is restored to functioning parts of the network and the failed or damaged components are isolated. Performance issues are addressed by using a parallelized computer architecture which decouples Input/Output (I/O) redundancy management and I/O processing from the computational stream of an application. The autonomous nature of the system derives from the highly automated and independent manner in which I/O transactions are conducted for the application as well as from the fact that the hardware redundancy management is entirely transparent to the application.

  4. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
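
    One of the techniques covered by such course material can be illustrated with a short, hedged sketch (not taken from the proceedings): a conjugate Gamma prior on a constant failure rate, updated with a Poisson-distributed failure count.

        # Editorial illustration: Gamma prior on a constant failure rate lambda, updated
        # with n failures observed over cumulative exposure time T (Poisson likelihood).

        def gamma_poisson_update(alpha0: float, beta0: float, n_failures: int, t_hours: float):
            """Posterior is Gamma(alpha0 + n, beta0 + T); also return its mean failure rate."""
            alpha, beta = alpha0 + n_failures, beta0 + t_hours
            return alpha, beta, alpha / beta

        # Weakly informative prior (mean 1e-3 failures/hour), then 2 failures in 10,000 h.
        alpha, beta, mean_rate = gamma_poisson_update(1.0, 1000.0, 2, 10_000.0)
        print(f"posterior Gamma({alpha:.0f}, {beta:.0f}), mean rate = {mean_rate:.2e} /h")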

  5. Formal design and verification of a reliable computing platform for real-time control (phase 3 results)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael

    1994-01-01

    In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.

  6. Ultra-thin carbon-fiber paper fabrication and carbon-fiber distribution homogeneity evaluation method

    NASA Astrophysics Data System (ADS)

    Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.

    2018-01-01

    A preparation technology for ultra-thin carbon-fiber paper is reported. Carbon-fiber distribution homogeneity has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber across two or more binary (two-value) images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the more uniform the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.

  7. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  8. Digital avionics design and reliability analyzer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate-level will be utilized. The primary benefit of emulation to reliability analysis is the fact that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. There is a trade study which leads to the decision to specify a two-machine system, including an emulation computer connected to a general-purpose computer. There is also an evaluation of potential computers to serve as the emulation computer.

  9. Development and ultra-structure of an ultra-thin silicone epidermis of bioengineered alternative tissue.

    PubMed

    Wessels, Quenton; Pretorius, Etheresia

    2015-08-01

    Burn wound care today has a primary objective of temporary or permanent wound closure. Commercially available engineered alternative tissues have become a valuable adjunct to the treatment of burn injuries. Their constituents can be biological, alloplastic or a combination of both. Here the authors describe the aspects of the development of a siloxane epidermis for a collagen-glycosaminoglycan and for nylon-based artificial skin replacement products. A method to fabricate an ultra-thin epidermal equivalent is described. Pores, to allow the escape of wound exudate, were punched and a tri-filament nylon mesh or collagen scaffold was imbedded and silicone polymerisation followed at 120°C for 5 minutes. The ultra-structure of these bilaminates was assessed through scanning electron microscopy. An ultra-thin biomedical grade siloxane film was reliably created through precision coating on a pre-treated polyethylene terephthalate carrier. © 2013 The Authors. International Wound Journal © 2013 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  10. Mechanical system reliability for long life space systems

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1994-01-01

    The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.
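
    As an editorial illustration of the kind of first-order reliability calculation that such limit states feed into (a textbook strength-minus-load case, not taken from the report):

        # Editorial sketch: first-order reliability for the limit state g = R - S with
        # independent normal strength R and load S.
        from math import erf, sqrt

        def normal_cdf(x: float) -> float:
            return 0.5 * (1.0 + erf(x / sqrt(2.0)))

        def failure_probability(mu_r, sigma_r, mu_s, sigma_s):
            beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)   # reliability index
            return beta, normal_cdf(-beta)

        beta, pf = failure_probability(mu_r=500.0, sigma_r=40.0, mu_s=350.0, sigma_s=30.0)
        print(f"reliability index beta = {beta:.2f}, P_failure = {pf:.2e}")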

  11. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of a mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
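
    A minimal editorial sketch of the solution technique named in the record (not the authors' multi-subsystem model): the Chapman-Kolmogorov equations of a single repairable subsystem with exponential failure and repair rates, integrated with fourth-order Runge-Kutta and compared against the closed-form long-run availability.

        # Editorial sketch: birth-death model of one repairable subsystem with failure
        # rate lam and repair rate mu, integrated with classical fourth-order Runge-Kutta.
        def derivs(p, lam, mu):
            p_up, p_dn = p
            return (-lam * p_up + mu * p_dn, lam * p_up - mu * p_dn)

        def rk4_availability(lam, mu, t_end, steps=10_000):
            p = (1.0, 0.0)
            h = t_end / steps
            for _ in range(steps):
                k1 = derivs(p, lam, mu)
                k2 = derivs(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k1)), lam, mu)
                k3 = derivs(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k2)), lam, mu)
                k4 = derivs(tuple(pi + h * ki for pi, ki in zip(p, k3)), lam, mu)
                p = tuple(pi + (h / 6.0) * (a + 2 * b + 2 * c + d)
                          for pi, a, b, c, d in zip(p, k1, k2, k3, k4))
            return p[0]                                  # probability of being operational

        lam, mu = 0.01, 0.5                              # per-hour failure and repair rates
        print(f"A(1000 h) = {rk4_availability(lam, mu, 1000.0):.4f}, "
              f"steady state = {mu / (lam + mu):.4f}")   # both ~0.9804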

  12. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.

  13. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  14. Electric system restructuring and system reliability

    NASA Astrophysics Data System (ADS)

    Horiuchi, Catherine Miller

    In 1996 the California legislature passed AB 1890, explicitly defining economic benefits and detailing specific mechanisms for initiating a partial restructuring of the state's electric system. Critics have since sought re-regulation and proponents have asked for patience as the new institutions and markets take shape. Other states' electric system restructuring activities have been tempered by real and perceived problems in the California model. This study examines the reduced regulatory controls and new constraints introduced in California's limited restructuring model using utility and regulatory agency records from the 1990's to investigate effects of new institutions and practices on system reliability for the state's five largest public and private utilities. Logit and negative binomial regressions indicate negative impact from the California model of restructuring on system reliability as measured by customer interruptions. Time series analysis of outage data could not predict the wholesale power market collapse and the subsequent rolling blackouts in early 2001; inclusion of near-outage reliability disturbances---load shedding and energy emergencies---provided a measure of forewarning. Analysis of system disruptions, generation capacity and demand, and the role of purchased power challenge conventional wisdom on the causality of California's power problems. The quantitative analysis was supplemented by a targeted survey of electric system restructuring participants. Findings suggest each utility and the organization controlling the state's electric grid provided protection from power outages comparable to pre-restructuring operations through 2000; however, this reliability has come at an inflated cost, resulting in reduced system purchases and decreased marginal protection. The historic margin of operating safety has fully eroded, increasing mandatory load shedding and emergency declarations for voluntary and mandatory conservation. Proposed remedies focused

  15. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  16. B-52 stability augmentation system reliability

    NASA Technical Reports Server (NTRS)

    Bowling, T. C.; Key, L. W.

    1976-01-01

    The B-52 SAS (Stability Augmentation System) was developed and retrofitted to nearly 300 aircraft. It actively controls B-52 structural bending, provides improved yaw and pitch damping through sensors and electronic control channels, and puts complete reliance on hydraulic control power for rudder and elevators. The system has experienced over 300,000 flight hours and has exhibited service reliability comparable to the results of the reliability test program. Development experience points out numerous lessons with potential application in the mechanization and development of advanced technology control systems of high reliability.

  17. Characterization of Polyimide Foams for Ultra-Lightweight Space Structures

    NASA Technical Reports Server (NTRS)

    Meador, Michael (Technical Monitor); Hillman, Keithan; Veazie, David R.

    2003-01-01

    Ultra-lightweight materials have played a significant role in nearly every area of human activity ranging from magnetic tapes and artificial organs to atmospheric balloons and space inflatables. The application range of ultra-lightweight materials in past decades has expanded dramatically due to their unsurpassed efficiency in terms of low weight and high compliance properties. A new generation of ultra-lightweight materials involving advanced polymeric materials, such as TEEK (TM) polyimide foams, is beginning to emerge to produce novel performance from ultra-lightweight systems for space applications. As a result, they require that special conditions be fulfilled to ensure adequate structural performance, shape retention, and thermal stability. It is therefore important and essential to develop methodologies for predicting the complex properties of ultra-lightweight foams. To support NASA programs such as the Reusable Launch Vehicle (RLV), Clark Atlanta University, along with SORDAL, Inc., has initiated projects for commercial process development of polyimide foams for the proposed cryogenic tank integrated structure (see figure 1). Fabrication and characterization of high temperature, advanced aerospace-grade polyimide foams and filled foam sandwich composites for specified lifetimes in NASA space applications, as well as quantifying the lifetime of components, are immensely attractive goals. In order to improve the development, durability, safety, and life cycle performance of ultra-lightweight polymeric foams, test methods for the properties are constant concerns in terms of timeliness, reliability, and cost. A major challenge is to identify the mechanisms of failures (i.e., core failure, interfacial debonding, and crack development) that are reflected in the measured properties. The long-term goal of this research is to develop the tools and capabilities necessary to successfully engineer ultra-lightweight polymeric foams. The desire is to reduce density

  18. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  19. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, its computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this approach multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  20. 75 FR 72664 - System Personnel Training Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ...Under section 215 of the Federal Power Act, the Commission approves two Personnel Performance, Training and Qualifications (PER) Reliability Standards, PER-004-2 (Reliability Coordination--Staffing) and PER-005-1 (System Personnel Training), submitted to the Commission for approval by the North American Electric Reliability Corporation, the Electric Reliability Organization certified by the Commission. The approved Reliability Standards require reliability coordinators, balancing authorities, and transmission operators to establish a training program for their system operators, verify each of their system operators' capability to perform tasks, and provide emergency operations training to every system operator. The Commission also approves NERC's proposal to retire two existing PER Reliability Standards that are replaced by the standards approved in this Final Rule.

  1. Ultra-low magnetic damping in metallic and half-metallic systems

    NASA Astrophysics Data System (ADS)

    Shaw, Justin

    The phenomenology of magnetic damping is of critical importance to devices which seek to exploit the electronic spin degree of freedom since damping strongly affects the energy required and speed at which a device can operate. However, theory has struggled to quantitatively predict the damping, even in common ferromagnetic materials. This presents a challenge for a broad range of applications in magnonics, spintronics and spin-orbitronics that depend on the ability to precisely control the damping of a material. I will discuss our recent work to precisely measure the intrinsic damping in several metallic and half-metallic material systems and compare experiment with several theoretical models. This investigation uncovered a metallic material composed of Co and Fe that exhibits ultra-low values of damping that approach values found in thin film YIG. Such ultra-low damping is unexpected in a metal since magnon-electron scattering dominates the damping in conductors. However, this system possesses a distinctive feature in the bandstructure that minimizes the density of states at the Fermi energy n(EF). These findings provide the theoretical framework by which such ultra-low damping can be achieved in metallic ferromagnets and may enable a new class of experiments where ultra-low damping can be combined with a charge current. Half-metallic Heusler compounds by definition have a bandgap in one of the spin channels at the Fermi energy. This feature can also lead to exceptionally low values of the damping parameter. Our results show a strong correlation of the damping with the order parameter in Co2MnGe. Finally, I will provide an overview of the recent advances in achieving low damping in thin film Heusler compounds.

  2. Ultra-processed products are becoming dominant in the global food system.

    PubMed

    Monteiro, C A; Moubarac, J-C; Cannon, G; Ng, S W; Popkin, B

    2013-11-01

    The relationship between the global food system and the worldwide rapid increase of obesity and related diseases is not yet well understood. A reason is that the full impact of industrialized food processing on dietary patterns, including the environments of eating and drinking, remains overlooked and underestimated. Many forms of food processing are beneficial. But what is identified and defined here as ultra-processing, a type of process that has become increasingly dominant, at first in high-income countries, and now in middle-income countries, creates attractive, hyper-palatable, cheap, ready-to-consume food products that are characteristically energy-dense, fatty, sugary or salty and generally obesogenic. In this study, the scale of change in purchase and sales of ultra-processed products is examined and the context and implications are discussed. Data come from 79 high- and middle-income countries, with special attention to Canada and Brazil. Results show that ultra-processed products dominate the food supplies of high-income countries, and that their consumption is now rapidly increasing in middle-income countries. It is proposed here that the main driving force now shaping the global food system is transnational food manufacturing, retailing and fast food service corporations whose businesses are based on very profitable, heavily promoted ultra-processed products, many in snack form. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  3. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    NASA Astrophysics Data System (ADS)

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-01

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.

  4. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation.

    PubMed

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-26

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.

  5. Photovoltaic power system reliability considerations

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.

    1980-01-01

    This paper describes an example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems. This particular application was for a solar cell power system demonstration project in Tangaye, Upper Volta, Africa. The techniques involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of a fail-safe and planned spare parts engineering philosophy.

  6. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion (CSD) of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
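
    For readers unfamiliar with the agreement statistics reported above, the following editorial sketch computes ICC(1,1) and ICC(3,1) from an n-subjects by k-recordings score matrix using the standard Shrout-Fleiss mean-square formulation; the data shown are hypothetical, not from the study.

        # Editorial sketch: intraclass correlation coefficients from a subjects x recordings matrix.
        def icc(scores):
            n, k = len(scores), len(scores[0])
            grand = sum(map(sum, scores)) / (n * k)
            row_means = [sum(r) / k for r in scores]
            col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
            ss_total = sum((x - grand) ** 2 for r in scores for x in r)
            ss_rows = k * sum((m - grand) ** 2 for m in row_means)     # between subjects
            ss_cols = n * sum((m - grand) ** 2 for m in col_means)     # between recordings
            ss_err = ss_total - ss_rows - ss_cols
            ms_r = ss_rows / (n - 1)
            ms_e = ss_err / ((n - 1) * (k - 1))
            ms_w = (ss_cols + ss_err) / (n * (k - 1))                  # within-subject MS
            icc11 = (ms_r - ms_w) / (ms_r + (k - 1) * ms_w)
            icc31 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
            return icc11, icc31

        # Hypothetical data: 5 infants scored in two recordings (columns) on one variable.
        data = [[4.1, 4.3], [2.0, 2.2], [3.5, 3.4], [5.2, 5.0], [1.1, 1.3]]
        print("ICC(1,1) = %.3f, ICC(3,1) = %.3f" % icc(data))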

  7. Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.
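
    For orientation (an editorial note; the program's exact formulations are documented with CARES/LIFE itself), analyses of this kind rest on a two-parameter Weibull description of fast-fracture strength combined with a power-law subcritical crack growth relation, of the form

        P_f \;=\; 1 - \exp\!\left[-\int_V \left(\frac{\sigma(\mathbf{x})}{\sigma_0}\right)^{m} \mathrm{d}V\right],
        \qquad
        \frac{\mathrm{d}a}{\mathrm{d}t} \;=\; A\,K_I^{\,N},

    where m and \sigma_0 are the Weibull modulus and scale parameter, \sigma(\mathbf{x}) is the local tensile stress, and A and N are the subcritical crack growth parameters estimated from the fatigue data described in the record.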

  8. Reliability Considerations for Ultra-Low Power Space Applications

    NASA Technical Reports Server (NTRS)

    White, Mark; Johnston, Allan

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region and ULP devices are sought after. Technology trends, ULP microelectronics, scaling and performance tradeoffs, reliability considerations, and spacecraft environments will be presented from a ULP perspective for space applications.

  9. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are then analyzed. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  10. SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The two-phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high (greater than 10,000 rpm) rotational speeds. The work includes a feasibility-of-concept research effort aimed at development and test results that will ultimately result in the ability to reliably drill "faster and deeper," possibly with rigs having a smaller footprint to be more mobile. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration rock cutting with substantially lower inputs of energy and loads. The project draws on TerraTek results submitted to NASA's "Drilling on Mars" program. The objective of that program was to demonstrate miniaturization of a robust and mobile drilling system that expends small amounts of energy. TerraTek successfully tested ultra-high speed (approximately 40,000 rpm) small kerf diamond coring. Adaptation to the oilfield will require innovative bit designs for full hole drilling or continuous coring and the eventual development of downhole ultra-high speed drives. For domestic operations involving hard rock and deep oil and gas plays, improvements in penetration rates are an opportunity to reduce well costs and make viable certain field developments. An estimate of North American hard rock drilling costs is in excess of $1,200 MM. Thus potential savings of $200 MM to $600 MM are possible if drilling rates are doubled [assuming bit life is reasonable]. The net result for operators is improved profit margin as well as an improved position on reserves. The significance of the "ultra-high rotary speed drilling system" is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining

  12. Periodically Self Restoring Redundant Systems for VLSI Based Highly Reliable Design,

    DTIC Science & Technology

    1984-01-01

    fault tolerance technique for realizing highly reliable computer systems for critical control applications. However, VLSI technology has imposed a... operating correctly; failed... critical real time control applications. n modules are discarded from the vote... the classical "static" voted redundancy... redundant modules are failure free... number of interconnections. This results in... However, for applications requiring high modular complexity because

  13. User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems

    NASA Astrophysics Data System (ADS)

    Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue

    In this paper we investigate the reliability of general-type shared protection systems, i.e., M-for-N (M:N), which can typically be applied to various telecommunication network devices. We focus on the reliability that is perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M spare units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives the closed-form solution for the availability, and a recursive computing algorithm for the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design of not only telecommunication network devices but also other general shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.
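
    The paper's closed-form results are not reproduced here, but the idea of user-perceived unavailability in an M-for-N scheme can be illustrated with a simple birth-death Markov model. The sketch below assumes exponential unit failures (rate lam) and independent repairs (rate mu), and treats an arbitrary end user as unserved whenever more units have failed than there are spares; the state space and rates are illustrative assumptions, not the paper's exact model.

        def user_unavailability(n_users, m_spares, lam, mu):
            """Steady-state fraction of time an arbitrary user is unserved in an M:N scheme.

            Birth-death chain on j = number of failed units (0..n+m); while j <= m every
            user stays in service, otherwise j - m of the n users are down.
            """
            n_total = n_users + m_spares
            # Unnormalized steady-state probabilities via detailed balance.
            pi = [1.0]
            for j in range(n_total):
                in_service = n_users if j <= m_spares else n_total - j
                birth = in_service * lam          # failure rate out of state j
                death = (j + 1) * mu              # repair rate out of state j + 1
                pi.append(pi[-1] * birth / death)
            z = sum(pi)
            pi = [p / z for p in pi]
            return sum(p * max(0, j - m_spares) / n_users for j, p in enumerate(pi))

        # Hypothetical example: 10 users protected by 2 shared spares, MTBF = 1000 h, MTTR = 10 h
        print(user_unavailability(10, 2, 1.0 / 1000, 1.0 / 10))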

  14. Photovoltaic power system reliability considerations

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.

    1980-01-01

    An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.

  15. FDTD computation of human eye exposure to ultra-wideband electromagnetic pulses.

    PubMed

    Simicevic, Neven

    2008-03-21

    With an increase in the application of ultra-wideband (UWB) electromagnetic pulses in the communications industry, radar, biotechnology and medicine, comes an interest in UWB exposure safety standards. Despite an increase in scientific research on the bioeffects of exposure to non-ionizing UWB pulses, characterization of those effects is far from complete. A numerical computational approach, such as a finite-difference time domain (FDTD) method, is required to visualize and understand the complexity of broadband electromagnetic interactions. The FDTD method has almost no limits in the description of the geometrical and dispersive properties of the simulated material; it is numerically robust and appropriate for current computer technology. In this paper, a complete calculation of exposure of the human eye to UWB electromagnetic pulses in the frequency range of 3.1-10.6, 22-29 and 57-64 GHz is performed. Computation in this frequency range required a geometrical resolution of the eye of 0.1 mm and an arbitrary precision in the description of its dielectric properties in terms of the Debye model. New results show that the interaction of UWB pulses with the eye tissues exhibits the same properties as the interaction of the continuous electromagnetic waves (CWs) with the frequencies from the pulse's frequency spectrum. It is also shown that under the same exposure conditions the exposure to UWB pulses is from one to many orders of magnitude safer than the exposure to CW.

  16. Diagnostic reliability of MMPI-2 computer-based test interpretations.

    PubMed

    Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E

    2014-09-01

    Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can sometimes be shown to vary widely across different CBTI programs even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTI programs vary with respect to diagnostic suggestions? It was found that diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  17. Determination of Turboprop Reduction Gearbox System Fatigue Life and Reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Lewicki, David G.; Savage, Michael; Vlcek, Brian L.

    2007-01-01

    Two computational models to determine the fatigue life and reliability of a commercial turboprop gearbox are compared with each other and with field data. These models are (1) Monte Carlo simulation of randomly selected lives of individual bearings and gears comprising the system and (2) two-parameter Weibull distribution function for bearings and gears comprising the system using strict-series system reliability to combine the calculated individual component lives in the gearbox. The Monte Carlo simulation included the virtual testing of 744,450 gearboxes. Two sets of field data were obtained from 64 gearboxes that were first-run to removal for cause, were refurbished and placed back in service, and then were second-run until removal for cause. A series of equations was empirically developed from the Monte Carlo simulation to determine the statistical variation in predicted life and Weibull slope as a function of the number of gearboxes failed. The resultant L10 life from the field data was 5,627 hr. From strict-series system reliability, the predicted L10 life was 774 hr. From the Monte Carlo simulation, the median value for the L10 gearbox lives equaled 757 hr. Half of the gearbox L10 lives will be less than this value and the other half more. The resultant L10 life of the second-run (refurbished) gearboxes was 1,334 hr. The apparent load-life exponent p for the roller bearings is 5.2. Were the bearing lives to be recalculated with a load-life exponent p equal to 5.2, the predicted L10 life of the gearbox would be equal to the actual life obtained in the field. The component failure distribution of the gearbox from the Monte Carlo simulation was nearly identical to that using the strict-series system reliability analysis, proving the compatibility of these methods.
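
    The strict-series calculation combines the two-parameter Weibull distributions of the individual bearings and gears by multiplying their survival probabilities; the system L10 life is then the time at which the combined reliability falls to 0.90. The sketch below, using purely hypothetical Weibull slopes and characteristic lives, shows the mechanics of that calculation; it is not the certified gearbox model from this report.

        import math

        def series_reliability(t, components):
            """Strict-series system reliability: product of Weibull survival functions.
            components = list of (beta, eta), with eta the characteristic life in hours."""
            return math.prod(math.exp(-((t / eta) ** beta)) for beta, eta in components)

        def l10_life(components, t_hi=1.0e7):
            """Bisect for the time at which system reliability falls to 0.90."""
            lo, hi = 0.0, t_hi
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                if series_reliability(mid, components) > 0.90:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        # Hypothetical component set: (Weibull slope, characteristic life in hours)
        gearbox = [(1.5, 60000.0), (1.5, 80000.0), (2.5, 120000.0), (2.5, 150000.0)]
        print(f"System L10 life ~ {l10_life(gearbox):.0f} h")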

  18. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    PubMed

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    Myoelectric-controlled prosthetic hands require machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Experiments were conducted where sEMG was recorded from the muscles of the forearm while subjects performed hand gestures and then was classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion and thumb flexion. This work has shown that reliable myoelectric-based human computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
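
    Per-gesture sensitivity and specificity follow from the multi-class confusion matrix in the usual one-versus-rest way. The PNM index itself is defined in the paper and is not reproduced here; the sketch below only shows the confusion-matrix bookkeeping and ranks gestures by a hypothetical combined score (sensitivity plus specificity) as a stand-in.

        import numpy as np

        def per_class_sens_spec(conf):
            """One-vs-rest sensitivity and specificity for each class of a
            confusion matrix conf[true, predicted]."""
            conf = np.asarray(conf, dtype=float)
            total = conf.sum()
            tp = np.diag(conf)
            fn = conf.sum(axis=1) - tp
            fp = conf.sum(axis=0) - tp
            tn = total - tp - fn - fp
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return sensitivity, specificity

        # Hypothetical 3-gesture confusion matrix (rows = true, cols = predicted)
        conf = [[45, 3, 2],
                [4, 40, 6],
                [1, 5, 44]]
        sens, spec = per_class_sens_spec(conf)
        ranking = sorted(range(len(sens)), key=lambda c: -(sens[c] + spec[c]))
        print("sensitivity:", sens.round(3), "specificity:", spec.round(3))
        print("gestures ranked by combined score:", ranking)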

  19. Reliability Analysis of the MSC System

    NASA Astrophysics Data System (ADS)

    Kim, Young-Soo; Lee, Do-Kyoung; Lee, Chang-Ho; Woo, Sun-Hee

    2003-09-01

    MSC (Multi-Spectral Camera) is the payload of KOMPSAT-2, which is being developed for earth imaging in the optical and near-infrared region. The design of the MSC is complete and its reliability has been assessed from the part level to the MSC system level. The reliability was analyzed for the worst case, and the analysis results showed that the value complies with the required value of 0.9. In this paper, a method for calculating the reliability of the MSC system is described, and the assessment results are presented and discussed.

  20. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1977-01-01

    Models, measures, and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability, and worth. Specifically, a model hierarchy was developed in detail at the mission, functional task, and computational task levels. An appropriate class of stochastic models was investigated to serve as bottom-level models in the hierarchical scheme. A unified measure of effectiveness called 'performability' was defined and formulated.

  1. Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly

    PubMed Central

    Maier, Andrea B.; Aarts, Ronald G. K. M.; van Gerven, Joop M. A.; Arendzen, J. Hans; Schouten, Alfred C.; Meskers, Carel G. M.; van der Kooij, Herman

    2016-01-01

    Objectives System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed loop system identification techniques. Methods In twelve healthy elderly, balance tests were performed twice a day on three days. Body sway was measured during two minutes of standing with eyes closed, and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances applied at the leg and trunk segment. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on the generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models, and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM) and minimal detectable change (MDC). Results A systematic error was found between the first and second trial in the sensitivity functions. No systematic error was found in the neuromuscular controller and body sway. The reliability of 15 of 25 parameters and body sway was moderate to excellent when the results of two trials on three days were averaged. To reach an excellent reliability on one day in 7 out of 25 parameters, it was predicted that at least seven trials must be averaged. Conclusion This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in the elderly. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days. To reach an excellent reliability in one third of the parameters, a training session for participants is needed and at
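
    The reliability statistics used here are related by standard formulas: the standard error of measurement follows from the between-subject standard deviation and the reliability coefficient, and the minimal detectable change scales the SEM for a two-measurement comparison at 95% confidence. A minimal sketch of those textbook relations, with made-up numbers rather than the study's data:

        import math

        def sem_and_mdc95(sd, reliability):
            """Standard error of measurement and 95% minimal detectable change.
            sd          -- standard deviation of the measurement across subjects
            reliability -- reliability coefficient (e.g. ICC or index of dependability)"""
            sem = sd * math.sqrt(1.0 - reliability)
            mdc95 = 1.96 * math.sqrt(2.0) * sem
            return sem, mdc95

        # Hypothetical example: a sway parameter with SD = 0.8 and ID = 0.85
        print(sem_and_mdc95(0.8, 0.85))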

  2. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.

  3. Characterization of electrical noise limits in ultra-stable laser systems.

    PubMed

    Zhang, J; Shi, X H; Zeng, X Y; Lü, X L; Deng, K; Lu, Z H

    2016-12-01

    We demonstrate thermal noise limited and shot noise limited performance of ultra-stable diode laser systems. The measured heterodyne beat linewidth between two such independent diode lasers reaches 0.74 Hz. The frequency instability of one single laser approaches 1.0 × 10^-15 for averaging times between 0.3 s and 10 s, which is close to the thermal noise limit of the reference cavity. Taking advantage of these two ultra-stable laser systems, we systematically investigate the ultimate electrical noise contributions, and derive expressions for the closed-loop spectral density of laser frequency noise. The measured power spectral density of the beat frequency is compared with the theoretically calculated closed-loop spectral density of the laser frequency noise, and they agree very well. It illustrates the power and generality of the derived closed-loop spectral density formula of the laser frequency noise. Our result demonstrates that a 10^-17 level locking in a wide frequency range is feasible with careful design.

  4. Reliability of computer-assisted periacetabular osteotomy using a minimally invasive approach.

    PubMed

    De Raedt, Sepp; Mechlenburg, Inger; Stilling, Maiken; Rømer, Lone; Murphy, Ryan J; Armand, Mehran; Lepistö, Jyri; de Bruijne, Marleen; Søballe, Kjeld

    2018-06-06

    Periacetabular osteotomy (PAO) is the treatment of choice for younger patients with developmental hip dysplasia. The procedure aims to normalize the joint configuration, reduce the peak-pressure, and delay the development of osteoarthritis. The procedure is technically demanding and no previous study has validated the use of computer navigation with a minimally invasive transsartorial approach. Computer-assisted PAO was performed on ten patients. Patients underwent pre- and postoperative computed tomography (CT) scanning with a standardized protocol. Preoperative preparation consisted of outlining the lunate surface and segmenting the pelvis and femur from CT data. The Biomechanical Guidance System was used intra-operatively to automatically calculate diagnostic angles and peak-pressure measurements. Manual diagnostic angle measurements were performed based on pre- and postoperative CT. Differences in angle measurements were investigated with summary statistics, intraclass correlation coefficient, and Bland-Altman plots. The percentage postoperative change in peak-pressure was calculated. Intra-operative reported angle measurements show a good agreement with manual angle measurements, with intraclass correlation coefficients between 0.94 and 0.98. Computer navigation reported angle measurements were significantly higher for the posterior sector angle ([Formula: see text], [Formula: see text]) and the acetabular anteversion angle ([Formula: see text], [Formula: see text]). No significant difference was found for the center-edge ([Formula: see text]), acetabular index ([Formula: see text]), and anterior sector angle ([Formula: see text]). Peak-pressure after PAO decreased by a mean of 13% and was significantly different ([Formula: see text]). We found that computer navigation can reliably be used with a minimally invasive transsartorial approach to PAO. Angle measurements generally agree with manual measurements and peak-pressure was shown to decrease postoperatively

  5. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
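
    The affine dependence of early-life reliability on coverage can be seen in the textbook duplex model: two units with failure rate lambda, where a unit failure is successfully detected and handled (covered) with probability c. Under these assumptions R(t) = e^(-2*lambda*t) + 2c(e^(-lambda*t) - e^(-2*lambda*t)), and for small lambda*t the unreliability is approximately 2(1-c)*lambda*t, i.e. dominated by uncovered single-point failures. The small numerical check below illustrates this classical result; it is not the paper's specific Markov models.

        import math

        def duplex_reliability(lam, c, t):
            """Reliability of a two-unit (duplex) system with coverage c: a unit
            failure is detected and handled with probability c, otherwise the
            whole system fails immediately."""
            return math.exp(-2 * lam * t) + 2 * c * (math.exp(-lam * t) - math.exp(-2 * lam * t))

        lam = 1.0e-4          # failures per hour (hypothetical, highly reliable unit)
        t = 100.0             # early-life horizon in hours
        for c in (1.0, 0.999, 0.99, 0.9):
            exact = 1.0 - duplex_reliability(lam, c, t)
            approx = 2 * (1 - c) * lam * t    # uncovered single-point failure term
            print(f"c={c:<6} unreliability={exact:.3e}  affine term={approx:.3e}")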

  6. RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)

    EPA Science Inventory

    Abstract

    It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...

  7. New Mobile Lidar Systems Aboard Ultra-Light Aircrafts

    NASA Astrophysics Data System (ADS)

    Chazette, Patrick; Shang, Xiaoxia; Totems, Julien; Marnas, Fabien; Sanak, Joseph

    2013-04-01

    Two lidar systems embedded on ultra-light aircraft (ULA) flew over the Rhone valley, in the south-east of France, to characterize the vertical extent of pollution aerosols in this area influenced by large industrial sites. The main industrial source is the Etang de Berre (43°28' N, 5°01' E), close to the city of Marseille. The emissions are mainly due to metallurgy and petrochemical factories. Traffic in the Marseille area, with its ~1,500,000 inhabitants, also contributes to the pollution. Note that the maritime traffic close to Marseille may play an important role due to its position as the leading French harbor. For this scientific purpose, and for the first time on a ULA, we flew a mini-N2 Raman lidar system to help assess the aerosol optical properties. Another Ultra-Violet Rayleigh-Mie lidar has been integrated aboard a second ULA. The lidars are compact and eye-safe instruments. They operate at a wavelength of 355 nm with a sampling along the line-of-sight of 0.75 m. Different flight plans were tested to use the two lidars in synergy. We will present the different approaches and discuss both their advantages and limitations. Acknowledgements: the lidar systems have been developed by CEA. They have been deployed with the support of FERRING France. We acknowledge the ULA pilots Franck Toussaint, François Bernard and José Coutet, and the Air Creation ULA company for logistical help during the ULA campaign.

  8. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.

  9. JPRS Report, Science & Technology, USSR: Computers, Control Systems and Machines

    DTIC Science & Technology

    1989-03-14

    optimizatsii slozhnykh sistem (Coding Theory and Complex System Optimization). Alma-Ata, Nauka Press, 1977, pp. 8-16. 11. Author's certificate number... Interpreter Specifics [O. I. Amvrosova], p. 141. Creation of Modern Computer Systems for Complex Ecological... processor can be designed to decrease degradation upon failure and assure more reliable processor operation, without requiring more complex software or

  10. Demand Response For Power System Reliability: FAQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Brendan J

    2006-12-01

    Demand response is the most underutilized power system reliability resource in North America. Technological advances now make it possible to tap this resource to both reduce costs and improve reliability. Misconceptions concerning response capabilities tend to force loads to provide responses that they are less able to provide and often prohibit them from providing the most valuable reliability services. Fortunately, this is beginning to change, with some ISOs making more extensive use of load response. This report is structured as a series of short questions and answers that address load response capabilities and power system reliability needs. Its objective is to further the use of responsive load as a bulk power system reliability resource in providing the fastest and most valuable ancillary services.

  11. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
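
    As a companion to the SPSS and R examples mentioned in this tutorial, the same chance-corrected agreement statistic can be computed directly. The sketch below implements Cohen's kappa for two coders from first principles; the rating vectors are hypothetical, and intraclass correlations can likewise be computed from the standard ANOVA mean squares.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two raters assigning nominal categories to the same items."""
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a = Counter(rater_a)
            freq_b = Counter(rater_b)
            # Chance agreement expected from the two raters' marginal frequencies
            expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
            return (observed - expected) / (1.0 - expected)

        # Hypothetical codes from two observers for ten behavioral events
        a = ["on", "off", "on", "on", "off", "on", "off", "off", "on", "on"]
        b = ["on", "off", "on", "off", "off", "on", "off", "on", "on", "on"]
        print(round(cohens_kappa(a, b), 3))   # 0.583 for this made-up example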

  12. System reliability and recovery.

    DOT National Transportation Integrated Search

    1971-06-01

    The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...

  13. Design of an ultra-thin dual band infrared system

    NASA Astrophysics Data System (ADS)

    Du, Ke; Cheng, Xuemin; Lv, Qichao; Hu, YiFei

    2014-11-01

    The ultra-thin imaging system using a reflective multiple-fold structure has smaller volume and less weight than conventional optical systems while maintaining high resolution. The multi-folded approach can significantly extend the focal distance within a wide spectral range without incurring chromatic aberrations. In this paper, we present a dual infrared imaging system with four-folded reflection and two air-spaced concentric reflective surfaces. The dual-band IR system has a 107 mm effective focal length, 0.7 NA, +/-4° FOV, and a 50 mm effective aperture with an 80 mm outer diameter in a 25 mm total thickness; its spectral response is 3-12 μm.

  14. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inverse to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
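
    The penalty that common cause failures impose on identical redundancy can be illustrated with the widely used beta-factor model: a fraction beta of each unit's failure probability is assumed to disable all identical copies at once, so adding more identical units stops helping once the common cause term dominates. The model and numbers below are illustrative assumptions, not figures from the paper.

        def loss_probability_identical(p_unit, k, beta):
            """Beta-factor model: probability that k identical redundant systems all fail
            during the mission, given per-unit failure probability p_unit and
            common cause fraction beta."""
            independent = ((1.0 - beta) * p_unit) ** k
            common_cause = beta * p_unit
            return independent + common_cause

        def loss_probability_diverse(p_unit, k):
            """Technically diverse systems, assumed here to share no common cause."""
            return p_unit ** k

        p, beta = 0.1, 0.05    # hypothetical: 1-in-10 per system, 5% common cause fraction
        for k in (2, 3, 4):
            print(k, "identical:", f"{loss_probability_identical(p, k, beta):.2e}",
                     "diverse:",   f"{loss_probability_diverse(p, k):.2e}")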

  15. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides

  16. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    ERIC Educational Resources Information Center

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  17. Distribution System Reliability Analysis for Smart Grid Applications

    NASA Astrophysics Data System (ADS)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect of modern power system planning, design, and operation. The ascendance of the smart grid concept has raised high hopes of developing an intelligent network capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address these reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection point between the power providers and the consumers where most of the electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
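
    The indices named here have simple definitions: SAIFI is total customer interruptions divided by customers served, SAIDI is total customer interruption duration divided by customers served, and EUE is the total energy not supplied during outages. A minimal sketch of how they can be computed from a list of outage events (all numbers hypothetical):

        def reliability_indices(outages, customers_served):
            """Compute SAIFI, SAIDI and EUE from outage records.
            outages -- list of (customers_interrupted, duration_hours, avg_kw_per_customer)"""
            total_interruptions = sum(n for n, _, _ in outages)
            total_customer_hours = sum(n * h for n, h, _ in outages)
            eue_kwh = sum(n * h * kw for n, h, kw in outages)
            saifi = total_interruptions / customers_served      # interruptions per customer
            saidi = total_customer_hours / customers_served     # outage hours per customer
            return saifi, saidi, eue_kwh

        # Hypothetical feeder with 1000 customers and three outage events
        events = [(200, 1.5, 2.0), (50, 4.0, 2.0), (400, 0.5, 2.0)]
        saifi, saidi, eue = reliability_indices(events, 1000)
        print(f"SAIFI={saifi:.2f}  SAIDI={saidi:.2f} h  EUE={eue:.0f} kWh")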

  18. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1993-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.

  19. Ultra-smooth finishing of aspheric surfaces using CAST technology

    NASA Astrophysics Data System (ADS)

    Kong, John; Young, Kevin

    2014-06-01

    Growing applications for astronomical ground-based adaptive systems and airborne telescope systems demand complex optical surface designs combined with ultra-smooth finishing. The use of more sophisticated and accurate optics, especially aspheric ones, allows for shorter optical trains with smaller sizes and a reduced number of components. This in turn reduces fabrication and alignment time and costs. These aspheric components include the following: steep surfaces with large aspheric departures; more complex surface feature designs like stand-alone off-axis-parabola (OAP) and free form optics that combine surface complexity with a requirement for ultra-high smoothness, as well as special optic materials such as lightweight silicon carbide (SiC) for airborne systems. Various fabrication technologies for finishing ultra-smooth aspheric surfaces are progressing to meet these growing and demanding challenges, especially Magnetorheological Finishing (MRF) and ion-milling. These methods have demonstrated some good success as well as a certain level of limitations. Amongst them, computer-controlled asphere surface-finishing technology (CAST), developed by Precision Asphere Inc. (PAI), plays an important role in a cost-effective manufacturing environment and has successfully delivered numerous products for the applications mentioned above. One of the most recent successes is the Gemini Planet Imager (GPI), the world's most powerful planet-hunting instrument, with critical aspheric components (seven OAPs and free form optics) made using CAST technology. GPI showed off its first images in a press release on January 7, 2014. This paper reviews features of today's technologies in handling ultra-smooth aspheric optics, especially the capabilities of CAST on these challenging products. As examples, three groups of aspheres deployed in astronomical optics systems, both polished and finished using CAST, will be discussed in detail.

  20. Reliability studies of Integrated Modular Engine system designs

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-01-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.
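
    The binomial approximation referenced here treats a network of identical modules as a k-out-of-n system: the engine cluster works if at least k of its n redundant modules work, each with reliability R. A short sketch of that calculation with hypothetical numbers (not the IME study's actual configurations):

        from math import comb

        def k_out_of_n_reliability(k, n, r):
            """Probability that at least k of n independent modules with reliability r work."""
            return sum(comb(n, i) * r**i * (1.0 - r)**(n - i) for i in range(k, n + 1))

        # Hypothetical comparison: 8 networked modules needing any 6,
        # versus 6 discrete modules that must all work.
        r_module = 0.99
        print("networked 6-of-8 :", k_out_of_n_reliability(6, 8, r_module))
        print("discrete  6-of-6 :", k_out_of_n_reliability(6, 6, r_module))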

  1. Reliability studies of integrated modular engine system designs

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-01-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.

  2. Reliability studies of integrated modular engine system designs

    NASA Astrophysics Data System (ADS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-06-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.

  3. Reliability studies of Integrated Modular Engine system designs

    NASA Astrophysics Data System (ADS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-06-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.

  4. On the matter of the reliability of the chemical monitoring system based on the modern control and monitoring devices

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.

    2017-11-01

    The reliability of the main equipment of any power station depends on correct water chemistry. In order to provide it, it is necessary to monitor the heat carrier quality, which, in turn, is provided by the chemical monitoring system. Thus, the monitoring system reliability plays an important part in ensuring the reliability of the main equipment. The monitoring system reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI and so on [1,2]. Workers of a power plant dealing with the measuring equipment must be informed promptly about any breakdowns in the monitoring system so that they are able to remove the fault quickly. A computer consultant system for personnel maintaining the sensors and other chemical monitoring equipment can help to notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on a laboratory workbench representing a physical model of a part of the chemical monitoring system.

  5. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer work-stations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector Supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 Supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  6. Smaller Footprint Drilling System for Deep and Hard Rock Environments; Feasibility of Ultra-High-Speed Diamond Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis; Alan Black; Homer Robertson

    2006-03-01

    The two-phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill "faster and deeper," possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm, usually well below 5,000 rpm. This document details the progress to date on the program entitled "Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling" for the period starting 1 October 2004 through 30 September 2005. Additionally, research activity from 1 October 2005 through 28 February 2006 is included in this report: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek designed and planned Phase I bench scale experiments. Some difficulties continue in obtaining ultra-high speed motors. Improvements have been made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs have been provided to vendors for production. A more consistent product is required to minimize the differences in bit performance. A test matrix for the final core bit testing program

  7. Advanced optical systems for ultra high energy cosmic rays detection

    NASA Astrophysics Data System (ADS)

    Gambicorti, L.; Pace, E.; Mazzinghi, P.

    2017-11-01

    A new advanced optical system is proposed and analysed in this work with the purpose of improving the photon collection efficiency of Multi-Anode PhotoMultiplier (MAPMT) detectors, which will be used to cover the large focal surfaces of instruments dedicated to Ultra High Energy Cosmic Ray (UHECR, above 10^19 eV) and Ultra High Energy Neutrino (UHEN) detection. The advanced optical system allows all photons to be focused inside the sensitive area of the detectors and improves the signal-to-noise ratio in the wavelength range of interest (300-400 nm), thus coupling imaging and filtering capability. The filter is realised with a multilayer coating to reach high transparency in the UV range with a sharp cut-off outside. In this work the application to different series of PMTs has been studied and results of simulations are shown. First prototypes have been realised. Finally, this paper proposes another class of adapters to be optically coupled to each pixel of the selected MAPMT detector, consisting of non-imaging concentrators such as Winston cones.

  8. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
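
    For a fault tree whose basic events are independent and appear in only one path, the top-event probability follows from the usual AND/OR gate algebra: AND multiplies probabilities, OR combines them as one minus the product of complements. The sketch below evaluates such a tree recursively; handling standby redundancy and repeated basic events, as the method in this report does, requires the conditional-probability machinery described there and is not shown.

        def gate_probability(node):
            """Evaluate a fault tree given as nested tuples:
            ('AND'|'OR', child, child, ...) with numbers as basic-event probabilities."""
            if isinstance(node, (int, float)):
                return float(node)
            gate, *children = node
            probs = [gate_probability(c) for c in children]
            if gate == "AND":
                result = 1.0
                for p in probs:
                    result *= p
                return result
            if gate == "OR":
                result = 1.0
                for p in probs:
                    result *= (1.0 - p)
                return 1.0 - result
            raise ValueError(f"unknown gate {gate}")

        # Hypothetical tree: top event occurs if the power fails OR both controllers fail.
        tree = ("OR", 1.0e-4, ("AND", 1.0e-2, 1.0e-2))
        print(f"top-event probability = {gate_probability(tree):.3e}")   # about 2.0e-4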

  9. Approach to developing reliable space reactor power systems

    NASA Technical Reports Server (NTRS)

    Mondt, Jack F.; Shinbrot, Charles H.

    1991-01-01

    During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach which includes a point design based on a detailed technical specification of a 100-kW power system. The SP-100 system requirements implicitly recognize the challenge of achieving a high system reliability for a ten-year lifetime, while at the same time using technologies that require very significant development efforts. A low-cost method for assessing reliability, based on an understanding of fundamental failure mechanisms and design margins for specific failure mechanisms, is being developed as part of the SP-100 Program.

  10. Design and simulation of the direct drive servo system

    NASA Astrophysics Data System (ADS)

    Ren, Changzhi; Liu, Zhao; Song, Libin; Yi, Qiang; Chen, Ken; Zhang, Zhenchao

    2010-07-01

    As direct drive technology finds its way into telescope drive designs because of its many advantages, it will push toward more reliable and cheaper solutions for future telescope complex motion systems. However, a telescope drive system based on direct drive technology is a highly integrated electromechanical system, so a complex electromechanical design method is adopted to improve the efficiency, reliability and quality of the system during the design and manufacturing cycle. The telescope is an ultra-exact, ultra-high-speed, high precision instrument with huge inertia, and the direct torque motor adopted by the telescope drive system is different from traditional motors. This paper explores the design process, and some simulation results are discussed.

  11. 78 FR 44475 - Protection System Maintenance Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-24

    ... Protection System Maintenance--Phase 2 (Reclosing Relays)). 12. NERC states that the proposed Reliability... of the relay inputs and outputs that are essential to proper functioning of the protection system...] Protection System Maintenance Reliability Standard AGENCY: Federal Energy Regulatory Commission, Energy...

  12. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.
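
    The approach described maps component attributes to observed cycles-to-failure with a trained network and then applies the trained network to new designs. A minimal sketch of that idea using scikit-learn's MLPRegressor on synthetic data; the feature names, values, and target relationship are invented placeholders, not the microengine dataset used in this research.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic attributes: [gap_um, spring_stiffness, drive_voltage, humidity_pct]
        X = rng.uniform([1.0, 0.5, 20.0, 5.0], [4.0, 3.0, 80.0, 60.0], size=(200, 4))
        # Invented ground truth: log cycles-to-failure falls with voltage and humidity
        y = 8.0 - 0.02 * X[:, 2] - 0.03 * X[:, 3] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 200)

        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

        model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
        model.fit(X_train, y_train)                   # train on attribute/reliability pairs
        print("validation R^2:", round(model.score(X_val, y_val), 3))

        new_design = [[2.5, 1.2, 40.0, 20.0]]         # proposed device attributes
        print("predicted log10 cycles-to-failure:", model.predict(new_design)[0])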

  13. Ultra-Large Solar Sail

    NASA Technical Reports Server (NTRS)

    Burton, Rodney; Coverstone, Victoria

    2009-01-01

    UltraSail is a next-generation ultra-large (km2 class) sail system. Analysis of the launch, deployment, stabilization, and control of these sails shows that high-payload-mass fractions for interplanetary and deep-space missions are possible. UltraSail combines propulsion and control systems developed for formation-flying microsatellites with a solar sail architecture to achieve controllable sail areas approaching 1 km2. Electrically conductive CP-1 polyimide film results in sail subsystem area densities as low as 5 g/m2. UltraSail produces thrust levels many times those of ion thrusters used for comparable deep-space missions. The primary innovation involves the near-elimination of sail-supporting structures by attaching each blade tip to a formation-flying microsatellite, which deploys the sail and then articulates the sail to provide attitude control, including spin stabilization and precession of the spin axis. These microsatellite tips are controlled by microthrusters for sail-film deployment and mission operations. UltraSail also avoids the problems inherent in folded sail film, namely stressing, yielding, or perforating, by storing the film in a roll for launch and deployment. A 5-km long by 2 micrometer thick film roll on a mandrel with a 1 m circumference (32 cm diameter) has a stored thickness of 5 cm. A 5 m-long mandrel can store a film area of 25,000 m2, and a four-blade system has an area of 0.1 sq km.

  14. Monitoring system of multiple fire fighting based on computer vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can aim at the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail, and the design of the relevant hardware and software is introduced. The principle and process of color detection and image processing are given as well. The system ran well in tests, showing high reliability, low cost, and easy node expansion, which gives it a bright prospect for application and popularization.
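
    The abstract does not give the color-detection algorithm, so the following is only a hypothetical sketch of the general idea: threshold a camera frame for fire-like colors in HSV space and take the centroid of the detected region as the aiming point. The threshold values and function name are illustrative assumptions.

    ```python
    # Illustrative sketch (not the authors' implementation): flagging fire-colored
    # regions in a camera frame with a simple HSV color threshold, then taking the
    # centroid of the detected pixels as the fire position for aiming a hydrant.
    import cv2
    import numpy as np

    def locate_fire(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Rough red/orange/yellow range; real thresholds would need calibration.
        mask = cv2.inRange(hsv, (0, 120, 150), (35, 255, 255))
        moments = cv2.moments(mask, binaryImage=True)
        if moments["m00"] == 0:
            return None                      # no fire-colored pixels found
        cx = moments["m10"] / moments["m00"] # centroid in image coordinates
        cy = moments["m01"] / moments["m00"]
        return cx, cy
    ```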

  15. Ultra Small Integrated Optical Fiber Sensing System

    PubMed Central

    Van Hoe, Bram; Lee, Graham; Bosman, Erwin; Missinne, Jeroen; Kalathimekkad, Sandeep; Maskery, Oliver; Webb, David J.; Sugden, Kate; Van Daele, Peter; Van Steenberge, Geert

    2012-01-01

    This paper introduces a revolutionary way to interrogate optical fiber sensors based on fiber Bragg gratings (FBGs) and to integrate the necessary driving optoelectronic components with the sensor elements. Low-cost optoelectronic chips are used to interrogate the optical fibers, creating a portable dynamic sensing system as an alternative for the traditionally bulky and expensive fiber sensor interrogation units. The possibility to embed these laser and detector chips is demonstrated resulting in an ultra thin flexible optoelectronic package of only 40 μm, provided with an integrated planar fiber pigtail. The result is a fully embedded flexible sensing system with a thickness of only 1 mm, based on a single Vertical-Cavity Surface-Emitting Laser (VCSEL), fiber sensor and photodetector chip. Temperature, strain and electrodynamic shaking tests have been performed on our system, not limited to static read-out measurements but dynamically reconstructing full spectral information datasets.

  16. Test experience on an ultrareliable computer communication network

    NASA Technical Reports Server (NTRS)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  17. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, the demand for high spectral and spatial resolution makes it extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide-band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude fewer than would be required using conventional systems.

  18. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, the demand for high spectral and spatial resolution makes it extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide-band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude fewer than would be required using conventional systems.
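
    A minimal, self-contained illustration of the compressive-sensing idea underlying both records above: when the spectrum is sparse, far fewer spectrally modulated measurements than spectral bands suffice for recovery. The random modulation matrix and the ISTA solver below are generic stand-ins, not the authors' LC-retarder model or reconstruction algorithm.

    ```python
    # Compressive-sensing sketch (illustrative only): recovering a sparse spectrum
    # from fewer spectrally modulated measurements than spectral bands.
    import numpy as np

    rng = np.random.default_rng(1)
    n_bands, n_shots = 120, 30              # many bands, far fewer modulated shots

    spectrum = np.zeros(n_bands)            # ground-truth sparse spectrum
    spectrum[rng.choice(n_bands, 5, replace=False)] = rng.uniform(1, 3, 5)

    A = rng.normal(size=(n_shots, n_bands)) / np.sqrt(n_shots)   # stand-in modulation matrix
    y = A @ spectrum                                             # measured intensities

    # ISTA (iterative soft-thresholding) for the l1-regularized inverse problem.
    x = np.zeros(n_bands)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    lam = 0.01
    for _ in range(2000):
        x = x - step * A.T @ (A @ x - y)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

    print("relative reconstruction error:",
          np.linalg.norm(x - spectrum) / np.linalg.norm(spectrum))
    ```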

  19. Estimating the Reliability of the CITAR Computer Courseware Evaluation System.

    ERIC Educational Resources Information Center

    Micceri, Theodore

    In today's complex computer-based teaching (CBT)/computer-assisted instruction market, flashy presentations frequently prove the most important purchasing element, while instructional design and content are secondary to form. Courseware purchasers must base decisions upon either a vendor's presentation or some published evaluator rating.…

  20. Assuring long-term reliability of concentrator PV systems

    NASA Astrophysics Data System (ADS)

    McConnell, R.; Garboushian, V.; Brown, J.; Crawford, C.; Darban, K.; Dutra, D.; Geer, S.; Ghassemian, V.; Gordon, R.; Kinsey, G.; Stone, K.; Turner, G.

    2009-08-01

    Concentrator PV (CPV) systems have attracted significant interest because these systems incorporate the world's highest efficiency solar cells and they are targeting the lowest cost production of solar electricity for the world's utility markets. Because these systems are just entering solar markets, manufacturers and customers need to assure their reliability for many years of operation. There are three general approaches for assuring CPV reliability: 1) field testing and development over many years leading to improved product designs, 2) testing to internationally accepted qualification standards (especially for new products) and 3) extended reliability tests to identify critical weaknesses in a new component or design. Amonix has been a pioneer in all three of these approaches. Amonix has an internal library of field failure data spanning over 15 years that serves as the basis for its seven generations of CPV systems. An Amonix product served as the test CPV module for the development of the world's first qualification standard completed in March 2001. Amonix staff has served on international standards development committees, such as the International Electrotechnical Commission (IEC), in support of developing CPV standards needed in today's rapidly expanding solar markets. Recently Amonix employed extended reliability test procedures to assure reliability of multijunction solar cell operation in its seventh generation high concentration PV system. This paper will discuss how these three approaches have all contributed to assuring reliability of the Amonix systems.

  1. Designing magnetic systems for reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitzenroeder, P.J.

    1991-01-01

    Designing magnetic systems is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D. B. Montgomery's study of magnet failures has shown that magnet failures tend not to be in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next-step devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly, devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and the magnet design and fabrication practices which have been found to contribute to magnet reliability.

  2. Reliability analysis of airship remote sensing system

    NASA Astrophysics Data System (ADS)

    Qin, Jun

    1998-08-01

    The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images for remote sensing of catastrophes and the environment, is a complex mixed system. Its sensor platform is a remotely controlled airship. The achievement of a remote sensing mission depends on a series of factors, so it is very important to analyze the reliability of ARSS. First, the system model was simplified from a multi-stage system to a two-state system on the basis of the results of the failure mode and effect analysis and the failure mode, effect and criticality analysis. The failure tree was created after analyzing all factors and their interrelations. This failure tree includes four branches: the engine subsystem, the remote control subsystem, the airship construction subsystem, and the flight meteorology and climate subsystem. By way of failure tree analysis and classification of basic events, the weak links were discovered. Test runs showed no difference in comparison with the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. System reliability was raised from 89 percent to 92 percent through redesign of the man-machine interactive interface and the addition of secondary equipment, including secondary remote control equipment.
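
    As a toy illustration of how fixing a weak branch raises overall reliability when the four fault-tree branches must all function, the sketch below multiplies assumed branch reliabilities; the numbers are invented solely to reproduce a roughly 89-to-92 percent improvement and are not from the paper.

    ```python
    # Illustrative series-reliability sketch: if all four fault-tree branches must
    # succeed for a sortie, system reliability is the product of branch reliabilities.
    # The branch values below are assumed for illustration only.
    def series_reliability(branch_reliabilities):
        r = 1.0
        for ri in branch_reliabilities:
            r *= ri
        return r

    before = {"engine": 0.97, "remote control": 0.95,
              "airship structure": 0.99, "weather/flight": 0.975}
    after = dict(before)
    after["remote control"] = 0.985       # e.g., add backup remote-control equipment

    print(round(series_reliability(before.values()), 3))  # ~0.89
    print(round(series_reliability(after.values()), 3))   # ~0.92
    ```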

  3. Architectural requirements for the Red Storm computing system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camp, William J.; Tomkins, James Lee

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  4. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half therefore requires doubling the test and redesign time spent finding and eliminating failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
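
    A worked form of the arithmetic in the abstract, assuming constant (exponential) failure rates:

    ```latex
    % For a failure mode with constant rate \lambda, the expected time to first
    % observe it in test is
    \[
      \mathbb{E}[T] \;=\; \int_0^\infty t\,\lambda e^{-\lambda t}\,dt \;=\; \frac{1}{\lambda}.
    \]
    % A mode occurring at half the rate therefore takes twice as long to surface:
    \[
      \mathbb{E}[T_{\lambda/2}] \;=\; \frac{1}{\lambda/2} \;=\; \frac{2}{\lambda},
    \]
    % so each halving of the residual failure rate roughly doubles the test and
    % redesign time needed to find and remove the remaining failure causes.
    ```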

  5. Reliability and validity of a brief questionnaire to assess television viewing and computer use by middle school children.

    PubMed

    Schmitz, Kathryn H; Harnack, Lisa; Fulton, Janet E; Jacobs, David R; Gao, Shujun; Lytle, Leslie A; Van Coevering, Pam

    2004-11-01

    Sedentary behaviors, like television viewing, are positively associated with overweight among young people. To monitor national health objectives for sedentary behaviors in young adolescents, this project developed and assessed the reliability and validity of a brief questionnaire to measure weekly television viewing, usual television viewing, and computer use by middle school children. Reliability and validity of the Youth Risk Behavior Survey (YRBS) question on weekday television viewing also were examined. A brief, five-item television and computer use questionnaire was completed twice by 245 middle school children, one week apart. To assess concurrent validity, students also completed television and computer use logs for seven days. Among all students, Spearman correlations for test-retest reliability for television viewing and computer use ranged from 0.55 to 0.68. Spearman correlations between the first questionnaire and the seven-day log produced the following results: YRBS question for weekday television viewing (0.46), weekend television viewing (0.37), average television viewing over the week (0.47), and computer use (0.39). Methods comparison analysis showed a mean difference (hours/week) between answers to questionnaire items and the log of -0.04 (1.70 standard deviation [SD]) hours for weekday television, -0.21 (2.54 SD) for weekend television, -0.09 (1.75 SD) for average television over the week, and 0.68 (1.26 SD) for computer use. The YRBS weekday television viewing question, and the newly developed questions to assess weekend television viewing, average television viewing, and computer use, produced adequate reliability and validity for surveillance of middle school students.

  6. [Personal computer-based computer monitoring system of the anesthesiologist (2-year experience in development and use)].

    PubMed

    Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I

    1995-01-01

    Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and used it effectively for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by processing in real time the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiological chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.

  7. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  8. Closed-form solutions of performability. [in computer systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    It is noted that if computing system performance is degradable, then system evaluation must deal simultaneously with aspects of both performance and reliability. One approach is the evaluation of a system's performability which, relative to a specified performance variable Y, generally requires solution of the probability distribution function of Y. The feasibility of closed-form solutions of performability when Y is continuous is examined. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. Employing an approximate decomposition of the model, it is shown that a closed-form solution can indeed be obtained.

  9. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, the interactions and relationships among its elements, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the cascading failure effect and provide a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results also show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
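
    The following is a simplified, generic sketch of the kind of cascading-failure experiment the abstract describes, assuming two random network layers with one-to-one interdependencies; it is not the authors' smart grid model, and the network parameters are arbitrary.

    ```python
    # Mutual-percolation sketch: remove a fraction of nodes, then iteratively keep
    # only nodes that lie in the giant component of their own layer and whose
    # inter-layer partner (same index) is still alive.
    import networkx as nx
    import numpy as np

    def giant_component_nodes(G):
        if G.number_of_nodes() == 0:
            return set()
        return max(nx.connected_components(G), key=len)

    def cascade(n=1000, k=4, attack_fraction=0.4, seed=0):
        rng = np.random.default_rng(seed)
        A = nx.gnp_random_graph(n, k / n, seed=seed)       # layer A (e.g., power grid)
        B = nx.gnp_random_graph(n, k / n, seed=seed + 1)   # layer B (e.g., comms network)
        alive = {int(i) for i in rng.choice(n, size=int((1 - attack_fraction) * n),
                                            replace=False)}
        while True:
            alive_A = set(giant_component_nodes(A.subgraph(alive)))
            alive_B = set(giant_component_nodes(B.subgraph(alive_A)))
            if alive_B == alive:
                break
            alive = alive_B
        return len(alive) / n   # fraction surviving in the mutual giant component

    # Below some critical attack fraction the mutual giant component collapses.
    print(cascade(attack_fraction=0.3), cascade(attack_fraction=0.6))
    ```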

  10. Efficient free energy calculations of quantum systems through computer simulations

    NASA Astrophysics Data System (ADS)

    Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo

    2009-03-01

    In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. The new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as the melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)

  11. Ultra-Wideband Angle-of-Arrival Tracking Systems

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey; Ngo, Phong H.; Phan, Chau T.; Gross, Julia; Ni, Jianjun; Dusl, John

    2010-01-01

    Systems that measure the angles of arrival of ultra-wideband (UWB) radio signals and perform triangulation by use of those angles in order to locate the sources of those signals are undergoing development. These systems were originally intended for use in tracking UWB-transmitter-equipped astronauts and mobile robots on the surfaces of remote planets during early stages of exploration, before satellite-based navigation systems become operational. On Earth, these systems could be adapted to such uses as tracking UWB-transmitter-equipped firefighters inside buildings or in outdoor wildfire areas obscured by smoke. The same characteristics that have made UWB radio advantageous for fine resolution ranging, covert communication, and ground-penetrating radar applications in military and law-enforcement settings also contribute to its attractiveness for the present tracking applications. In particular, the waveform shape and the short duration of UWB pulses make it possible to attain the high temporal resolution (of the order of picoseconds) needed to measure angles of arrival with sufficient precision, and the low power spectral density of UWB pulses enables UWB radio communication systems to operate in proximity to other radio communication systems with little or no perceptible mutual interference.
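
    The abstract does not detail the triangulation step, so here is a minimal two-dimensional sketch of locating a transmitter from two angle-of-arrival measurements at known receiver positions; the coordinates and angles are illustrative values, not system data.

    ```python
    # 2-D triangulation sketch: two UWB receivers at known positions each measure an
    # angle of arrival; the transmitter lies at the intersection of the bearing lines.
    import numpy as np

    def triangulate(p1, theta1, p2, theta2):
        """Intersect bearing lines from p1 at angle theta1 and p2 at theta2 (radians)."""
        d1 = np.array([np.cos(theta1), np.sin(theta1)])
        d2 = np.array([np.cos(theta2), np.sin(theta2)])
        # Solve p1 + t1*d1 = p2 + t2*d2 for the range parameters t1, t2.
        A = np.column_stack([d1, -d2])
        t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t1 * d1

    source = triangulate(p1=(0.0, 0.0), theta1=np.deg2rad(45.0),
                         p2=(10.0, 0.0), theta2=np.deg2rad(135.0))
    print(source)   # -> [5. 5.] for these illustrative angles
    ```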

  12. COPES Report: System Reliability Study.

    ERIC Educational Resources Information Center

    Foothill-De Anza Community Coll. District, Los Altos Hills, CA.

    The study examines the reliability of the Community College Occupational Programs Evaluation System (COPES). The COPES process is a system for evaluating program strengths and needs. A two-way test, college self-appraisal with third party validation of the self-appraisal, is utilized to assist community colleges in future institutional planning…

  13. Optimal reconfiguration strategy for a degradable multimodule computing system

    NASA Technical Reports Server (NTRS)

    Lee, Yann-Hang; Shin, Kang G.

    1987-01-01

    The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, an active reconfiguration strategy emerges that is based not only on the occurrence of failures but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.

  14. Interobserver reliability of the young-burgess and tile classification systems for fractures of the pelvic ring.

    PubMed

    Koo, Henry; Leveridge, Mike; Thompson, Charles; Zdero, Rad; Bhandari, Mohit; Kreder, Hans J; Stephen, David; McKee, Michael D; Schemitsch, Emil H

    2008-07-01

    The purpose of this study was to measure interobserver reliability of 2 classification systems of pelvic ring fractures and to determine whether computed tomography (CT) improves reliability. The reliability of several radiographic findings was also tested. Thirty patients taken from a database at a Level I trauma facility were reviewed. For each patient, 3 radiographs (AP pelvis, inlet, and outlet) and CT scans were available. Six different reviewers (pelvic and acetabular specialist, orthopaedic traumatologist, or orthopaedic trainee) classified the injury according to Young-Burgess and Tile classification systems after reviewing plain radiographs and then after CT scans. The Kappa coefficient was used to determine interobserver reliability of these classification systems before and after CT scan. For plain radiographs, overall Kappa values for the Young-Burgess and Tile classification systems were 0.72 and 0.30, respectively. For CT scan and plain radiographs, the overall Kappa values for the Young-Burgess and Tile classification systems were 0.63 and 0.33, respectively. The pelvis/acetabular surgeons demonstrated the highest level of agreement using both classification systems. For individual questions, the addition of CT did significantly improve reviewer interpretation of fracture stability. The pre-CT and post-CT Kappa values for fracture stability were 0.59 and 0.93, respectively. The CT scan can improve the reliability of assessment of pelvic stability because of its ability to identify anatomical features of injury. The Young-Burgess system may be optimal for the learning surgeon. The Tile classification system is more beneficial for specialists in pelvic and acetabular surgery.
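
    For readers unfamiliar with the statistic, a short sketch of computing an interobserver kappa for two reviewers is shown below; the classification labels are made-up placeholders, not the study data.

    ```python
    # Illustrative sketch: Cohen's kappa for two reviewers' fracture classifications.
    from sklearn.metrics import cohen_kappa_score

    reviewer_1 = ["APC", "LC", "LC", "VS", "APC", "LC", "CM", "LC", "APC", "VS"]
    reviewer_2 = ["APC", "LC", "APC", "VS", "APC", "LC", "CM", "LC", "LC", "VS"]

    # Kappa corrects raw percent agreement for the agreement expected by chance.
    print(cohen_kappa_score(reviewer_1, reviewer_2))
    ```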

  15. Consumers' conceptualization of ultra-processed foods.

    PubMed

    Ares, Gastón; Vidal, Leticia; Allegue, Gimena; Giménez, Ana; Bandeira, Elisa; Moratorio, Ximena; Molina, Verónika; Curutchet, María Rosa

    2016-10-01

    Consumption of ultra-processed foods has been associated with low diet quality, obesity and other non-communicable diseases. This situation makes it necessary to develop educational campaigns to discourage consumers from replacing meals based on unprocessed or minimally processed foods with ultra-processed foods. In this context, the aim of the present work was to investigate how consumers conceptualize the term ultra-processed foods and to evaluate whether the foods they perceive as ultra-processed are in concordance with the products included in the NOVA classification system. An online study was carried out with 2381 participants. They were asked to explain what they understood by ultra-processed foods and to list foods that can be considered ultra-processed. Responses were analysed using inductive coding. The great majority of the participants were able to provide an explanation of what ultra-processed foods are, which was similar to the definition described in the literature. Most of the participants described ultra-processed foods as highly processed products that usually contain additives and other artificial ingredients, stressing that they have low nutritional quality and are unhealthful. The products most relevant to consumers' conceptualization of the term were in agreement with the NOVA classification system and included processed meats, soft drinks, snacks, burgers, powdered and packaged soups and noodles. However, some of the participants perceived processed foods, culinary ingredients and even some minimally processed foods as ultra-processed. This suggests that, in order to accurately convey their message, educational campaigns aimed at discouraging consumers from consuming ultra-processed foods should include a clear definition of the term and describe some of their specific characteristics, such as the type of ingredients included in their formulation and their nutritional composition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, John M.; Coffin, Peter; Robbins, Brian A.

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.

  17. Reliability of voxel gray values in cone beam computed tomography for preoperative implant planning assessment.

    PubMed

    Parsa, Azin; Ibrahim, Norliza; Hassan, Bassam; Motroni, Alessandro; van der Stelt, Paul; Wismeijer, Daniel

    2012-01-01

    To assess the reliability of cone beam computed tomography (CBCT) voxel gray value measurements using Hounsfield units (HU) derived from multislice computed tomography (MSCT) as a clinical reference (gold standard). Ten partially edentulous human mandibular cadavers were scanned by two types of computed tomography (CT) modalities: multislice CT and cone beam CT. On MSCT scans, eight regions of interest (ROI) designating the site for preoperative implant placement were selected in each mandible. The datasets from both CT systems were matched using a three-dimensional (3D) registration algorithm. The mean voxel gray values of the region around the implant sites were compared between MSCT and CBCT. Significant differences between the mean gray values obtained by CBCT and HU by MSCT were found. In all the selected ROIs, CBCT showed higher mean values than MSCT. A strong correlation (R=0.968) between mean voxel gray values of CBCT and mean HU of MSCT was determined. Voxel gray values from CBCT deviate from actual HU units. However, a strong linear correlation exists, which may permit deriving actual HU units from CBCT using linear regression models.
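
    The strong linear correlation reported above suggests a simple correction of CBCT gray values toward HU. The sketch below fits and applies such a linear model on invented numbers; it is only an illustration of the idea, not the study's regression or data.

    ```python
    # Fit HU (MSCT) against CBCT voxel gray values at matched sites, then use the
    # fitted line to map new CBCT gray values to approximate HU.
    import numpy as np

    cbct_gray = np.array([620.0, 540.0, 710.0, 480.0, 655.0, 590.0])   # hypothetical ROI means
    msct_hu   = np.array([510.0, 430.0, 600.0, 380.0, 540.0, 480.0])   # hypothetical HU values

    slope, intercept = np.polyfit(cbct_gray, msct_hu, deg=1)
    r = np.corrcoef(cbct_gray, msct_hu)[0, 1]

    def cbct_to_hu(gray_value):
        """Approximate HU from a CBCT gray value using the fitted line."""
        return slope * gray_value + intercept

    print(f"r = {r:.3f}, HU estimate for gray value 600: {cbct_to_hu(600.0):.0f}")
    ```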

  18. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one more processors fails.

  19. Reliability program requirements for aeronautical and space system contractors

    NASA Technical Reports Server (NTRS)

    1987-01-01

    General reliability program requirements for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems including critical ground support equipment are prescribed. The reliability program requirements require (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.

  20. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  1. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error coding, modularized sparing, massive replication and other fault-tolerant techniques. From these models, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
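
    To make the trade-off discussed above concrete, the sketch below compares, under an assumed constant failure rate, a single unit, triple modular redundancy with an ideal voter, and a single unit backed by one ideal standby spare; the rate and mission time are arbitrary assumptions.

    ```python
    # Comparing a simplex unit, massive replication (TMR with a perfect voter), and
    # modularized sparing (one ideal standby spare) under a constant failure rate.
    import math

    lam, t = 1e-4, 5000.0                          # failures/hour and mission hours (assumed)
    R = math.exp(-lam * t)                         # simplex reliability

    R_tmr = 3 * R**2 - 2 * R**3                    # 2-of-3 majority voting, ideal voter
    R_spare = math.exp(-lam * t) * (1 + lam * t)   # one cold spare, ideal switching

    print(f"simplex {R:.4f}  TMR {R_tmr:.4f}  1 spare {R_spare:.4f}")
    ```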

  2. A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gossman, W. E.

    1986-01-01

    A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
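
    A minimal continuous-time Markov sketch of the general modeling style (not the thesis model): a duplex subsystem with imperfect failure coverage, solved with a matrix exponential. The failure rate and coverage value are assumptions for illustration.

    ```python
    # States: {both units good, one unit good, system failed (absorbing)}.
    import numpy as np
    from scipy.linalg import expm

    lam, c = 1e-3, 0.95          # per-unit failure rate (1/h) and coverage (assumed)

    # Generator matrix Q: row i gives transition rates out of state i.
    Q = np.array([
        [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],   # covered vs. uncovered first failure
        [0.0,       -lam,         lam              ],   # remaining unit can still fail
        [0.0,        0.0,         0.0              ],   # absorbing failure state
    ])

    p0 = np.array([1.0, 0.0, 0.0])           # start with both units healthy
    t = 1000.0                               # mission time in hours (assumed)
    p_t = p0 @ expm(Q * t)                   # state probabilities at time t

    print("subsystem unreliability at t:", p_t[2])
    ```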

  3. Smaller Footprint Drilling System for Deep and Hard Rock Environments; Feasibility of Ultra-High-Speed Diamond Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis; Homer Robertson; Alan Black

    2006-06-22

    The two-phase program addresses long-term developments in deep-well and hard-rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill "faster and deeper," possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the "ultra-high rotary speed drilling system" is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run at less than 10,000 rpm, usually well below 5,000 rpm. This document details the progress at the end of Phase 1 on the program entitled "Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling" for the period starting 1 March 2006 and concluding 30 June 2006. (Note: Results from 1 September 2005 through 28 February 2006 were included in the previous report (see Judzis, Black, and Robertson)). Summarizing the accomplishments during Phase 1: TerraTek reviewed applicable literature and documentation and convened a project kickoff meeting with Industry Advisors in attendance (see Black and Judzis). TerraTek designed and planned Phase I bench-scale experiments (see Black and Judzis). Some difficulties continued in obtaining ultra-high-speed motors. Improvements were made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs were developed to provide a more

  4. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  5. Quantification of dopamine transporters in the mouse brain using ultra-high resolution single-photon emission tomography.

    PubMed

    Acton, Paul D; Choi, Seok-Rye; Plössl, Karl; Kung, Hank F

    2002-05-01

    Functional imaging of small animals, such as mice and rats, using ultra-high resolution positron emission tomography (PET) and single-photon emission tomography (SPET), is becoming a valuable tool for studying animal models of human disease. While several studies have shown the utility of PET imaging in small animals, few have used SPET in real research applications. In this study we aimed to demonstrate the feasibility of using ultra-high resolution SPET in quantitative studies of dopamine transporters (DAT) in the mouse brain. Four healthy ICR male mice were injected with (mean+/-SD) 704+/-154 MBq [(99m)Tc]TRODAT-1, and scanned using an ultra-high resolution SPET system equipped with pinhole collimators (spatial resolution 0.83 mm at 3 cm radius of rotation). Each mouse had two studies, to provide an indication of test-retest reliability. Reference tissue kinetic modeling analysis of the time-activity data in the striatum and cerebellum was used to quantitate the availability of DAT. A simple equilibrium ratio of striatum to cerebellum provided another measure of DAT binding. The SPET imaging results were compared against ex vivo biodistribution data from the striatum and cerebellum. The mean distribution volume ratio (DVR) from the reference tissue kinetic model was 2.17+/-0.34, with a test-retest reliability of 2.63%+/-1.67%. The ratio technique gave similar results (DVR=2.03+/-0.38, test-retest reliability=6.64%+/-3.86%), and the ex vivo analysis gave DVR=2.32+/-0.20. Correlations between the kinetic model and the ratio technique ( R(2)=0.86, P<0.001) and the ex vivo data ( R(2)=0.92, P=0.04) were both excellent. This study demonstrated clearly that ultra-high resolution SPET of small animals is capable of accurate, repeatable, and quantitative measures of DAT binding, and should open up the possibility of further studies of cerebral binding sites in mice using pinhole SPET.

  6. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass-such as computer housings, pump casings, and the silicon board of PCBs-typically are the most reliable. Meanwhile components that tend to fail the earliest-such as seals or gaskets-typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  7. Reliability of an interactive computer program for advance care planning.

    PubMed

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.

  8. Reliability of an Interactive Computer Program for Advance Care Planning

    PubMed Central

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83–0.95, and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830
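
    For reference, the KR-20 internal-consistency index cited in both records can be computed from a matrix of dichotomous item responses as sketched below; the response matrix is a made-up placeholder, not the study data.

    ```python
    # KR-20 = k/(k-1) * (1 - sum(p*q) / var(total score)) for dichotomous items.
    import numpy as np

    def kr20(responses):
        """Rows are respondents, columns are dichotomous (0/1) items."""
        responses = np.asarray(responses, dtype=float)
        k = responses.shape[1]
        p = responses.mean(axis=0)                 # proportion endorsing each item
        q = 1.0 - p
        total_var = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

    sample = [[1, 1, 0, 1, 1],
              [1, 0, 0, 1, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 0, 1, 0],
              [1, 1, 0, 0, 1]]
    print(round(kr20(sample), 3))
    ```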

  9. Design optimization of ultra-high concentrator photovoltaic system using two-stage non-imaging solar concentrator

    NASA Astrophysics Data System (ADS)

    Wong, C.-W.; Yew, T.-K.; Chong, K.-K.; Tan, W.-C.; Tan, M.-H.; Lim, B.-H.

    2017-11-01

    This paper presents a systematic approach for optimizing the design of an ultra-high concentrator photovoltaic (UHCPV) system comprised of a non-imaging dish concentrator (primary optical element) and a crossed compound parabolic concentrator (secondary optical element). The optimization process includes the design of the primary and secondary optics by considering the focal distance, spillage losses and rim angle of the dish concentrator. The imperfection factors, i.e. mirror reflectivity of 93%, lens optical efficiency of 85%, circumsolar ratio of 0.2 and mirror surface slope error of 2 mrad, were considered in the simulation to avoid overestimation of the output power. The proposed UHCPV system is capable of attaining an effective ultra-high solar concentration ratio of 1475 suns and a DC system efficiency of 31.8%.

  10. A Simplified Baseband Prefilter Model with Adaptive Kalman Filter for Ultra-Tight COMPASS/INS Integration

    PubMed Central

    Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

    2012-01-01

    COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows its advantage over independent GPS receivers in many scenarios, federated ultra-tight COMPASS/INS integration has been investigated in this paper, in particular by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only the carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as the measurement. Since the code-tracking-error-related parameters present in traditional prefilter models were excluded from the state space, code/carrier divergence would destroy the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware sampling system for the COMPASS intermediate frequency (IF) signal and the INS accelerometer and gyroscope signals. Field and simulation test results showed almost similar tracking and navigation performances for both the traditional prefilter model and the proposed system; however, the latter largely decreased the computational load. PMID:23012564
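
    A heavily simplified sketch in the spirit of the prefilter described above: a three-state Kalman filter over carrier phase, frequency, and frequency-rate errors, with the discriminator output as the scalar measurement and the process-noise covariance rescaled from the recent state-correction sequence. All dynamics matrices and tuning constants are assumptions, not the paper's values.

    ```python
    import numpy as np

    dt = 0.001                                  # 1 ms update interval (assumed)
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])             # phase / frequency / frequency-rate errors
    H = np.array([[1.0, 0.0, 0.0]])             # discriminator observes the phase error
    R = np.array([[1e-2]])                      # measurement noise (assumed)

    def kf_step(x, P, Q, z, corrections):
        """One predict/update cycle; z is the arctangent discriminator output (rad)."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        dx = (K @ np.atleast_1d(z - H @ x_pred)).ravel()   # state-correction entry
        x_new = x_pred + dx
        P_new = (np.eye(3) - K @ H) @ P_pred
        # Adaptive tuning: rescale Q from the variance of recent state corrections,
        # inflating it when corrections grow (dynamics or code/carrier mismatch).
        corrections.append(dx)
        window = np.array(corrections[-50:])
        Q_new = np.diag(np.maximum(window.var(axis=0), 1e-9))
        return x_new, P_new, Q_new

    # Usage: x, P, Q = np.zeros(3), np.eye(3), np.eye(3) * 1e-6; corrections = []
    # then, for each discriminator sample z: x, P, Q = kf_step(x, P, Q, z, corrections)
    ```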

  11. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
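
    A minimal sketch of the k-failures-out-of-n subsystem model, under the reading that a subsystem fails once k of its n identical components have failed; the component failure rate and mission time are assumed for illustration.

    ```python
    # Subsystem reliability = P(fewer than k of n identical components fail).
    from math import comb, exp

    def subsystem_reliability(n, k, p):
        """n components, each with reliability p; subsystem survives while fewer than k fail."""
        q = 1.0 - p
        return sum(comb(n, j) * q**j * p**(n - j) for j in range(k))

    # Illustrative component reliability from an exponential (constant-rate) model.
    lam, t = 2e-5, 8760.0                          # assumed failure rate (1/h), one year
    p = exp(-lam * t)

    print(subsystem_reliability(n=20, k=1, p=p))   # no failures tolerated (series-like case)
    print(subsystem_reliability(n=20, k=3, p=p))   # tolerates up to 2 failed assemblies
    ```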

  12. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  13. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institution are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  14. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  15. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  16. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocation is an iterative process; as systems moved beyond their conceptual and preliminary design phases, the reliability engineering team had an opportunity to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.
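
The three records above describe reallocating reliability targets as designs mature, but the abstracts do not give the allocation formula. The following is a minimal sketch of one generic approach, weighted apportionment of a series-system reliability target, with hypothetical subsystem names and weights; it is not GSDO's actual method.

```python
import math

def allocate_reliability(r_system_target, weights):
    """Weighted apportionment of a series-system reliability target:
    R_i = R_target ** (w_i / sum(w)), so the product of the subsystem
    allocations recovers the system target. The weighting scheme is
    illustrative only."""
    total = sum(weights.values())
    return {name: r_system_target ** (w / total) for name, w in weights.items()}

# Hypothetical subsystems weighted by relative complexity / expected failure contribution
alloc = allocate_reliability(0.98, {"cryo_loading": 3.0, "ground_power": 1.0, "comm": 1.5})
print(alloc, math.prod(alloc.values()))   # product recovers ~0.98
```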

  17. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
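
As a simplified illustration of the 5 × 5 example configuration described above, the sketch below treats each stack as a two-state (working/failed) block with a Weibull lifetime and computes the reliability of five parallel groups placed in series. It omits the partially failed state, critical-failure predictability, and operating strategy that the record's qualitative model includes; all parameter values are assumptions.

```python
from math import exp

def weibull_R(t, scale, shape):
    """Stack survival probability at time t under a Weibull failure law."""
    return exp(-((t / scale) ** shape))

def series_of_parallel(r_stack, n_parallel=5, n_series=5):
    """Reliability of a series chain of n_series groups, each group being
    n_parallel redundant stacks (a group survives if any stack survives)."""
    r_group = 1.0 - (1.0 - r_stack) ** n_parallel
    return r_group ** n_series

r_stack = weibull_R(t=20_000, scale=40_000, shape=1.5)   # illustrative hours
print(series_of_parallel(r_stack))
```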

  18. Reliability issues of free-space communications systems and networks

    NASA Astrophysics Data System (ADS)

    Willebrand, Heinz A.

    2003-04-01

    Free space optics (FSO) is a high-speed point-to-point connectivity solution traditionally used in the enterprise campus networking market for building-to-building LAN connectivity. However, more recently some wireline and wireless carriers have started to deploy FSO systems in their networks. The requirements on FSO system reliability, meaning both system availability and component reliability, are far more stringent in the carrier market when compared to the requirements in the enterprise market segment. This paper outlines some of the aspects that are important to ensure carrier-class system reliability.

  19. Systems Issues In Terrestrial Fiber Optic Link Reliability

    NASA Astrophysics Data System (ADS)

    Spencer, James L.; Lewin, Barry R.; Lee, T. Frank S.

    1990-01-01

    This paper reviews fiber optic system reliability issues from three different viewpoints - availability, operating environment, and evolving technologies. Present availability objectives for interoffice links and for the distribution loop must be re-examined for applications such as the Synchronous Optical Network (SONET), Fiber-to-the-Home (FTTH), and analog services. The hostile operating environments of emerging applications (such as FTTH) must be carefully considered in system design as well as reliability assessments. Finally, evolving technologies might require the development of new reliability testing strategies.

  20. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a special capability.
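
A plain Monte Carlo sketch of the kind of computation MC-HARP performs is shown below: it estimates the unreliability of a k-out-of-n system with Weibull component lifetimes. This is a minimal sketch that omits the variance-reduction techniques and HARP fault/error-handling models the program itself provides; all numbers are illustrative.

```python
import random

def mc_unreliability(mission_time, n_components, k_required, scale, shape, trials=100_000):
    """Monte Carlo estimate of the unreliability of a k-out-of-n system with
    Weibull component lifetimes (random.weibullvariate draws failure times)."""
    failures = 0
    for _ in range(trials):
        surviving = sum(random.weibullvariate(scale, shape) > mission_time
                        for _ in range(n_components))
        if surviving < k_required:
            failures += 1
    return failures / trials

# Illustrative numbers only: a 2-out-of-4 system on a 1000-hour mission
print(mc_unreliability(1000.0, n_components=4, k_required=2, scale=5000.0, shape=1.3))
```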

  1. Interrater reliability of a Pilates movement-based classification system.

    PubMed

    Yu, Kwan Kenny; Tulloch, Evelyn; Hendrick, Paul

    2015-01-01

    To determine the interrater reliability for identification of a specific movement pattern using a Pilates classification system. Videos of 5 subjects performing specific movement tasks were sent to raters trained in the DMA-CP classification system. Ninety-six raters completed the survey. Interrater reliability for the detection of a directional bias was excellent (Pi = 0.92, and K(free) = 0.89). Interrater reliability for classifying an individual into a specific subgroup was moderate (Pi = 0.64, K(free) = 0.55); however, raters who had completed levels 1-4 of the DMA-CP training and reported using the assessment daily demonstrated excellent reliability (Pi = 0.89 and K(free) = 0.87). The reliability of the classification system demonstrated almost perfect agreement in determining the existence of a specific movement pattern and classifying into a subgroup for experienced raters. There was a trend for greater reliability associated with increased levels of training and experience of the raters. Copyright © 2014 Elsevier Ltd. All rights reserved.
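
The record reports free-marginal kappa values; below is a minimal sketch of one common formulation (Randolph's free-marginal multirater kappa, with Fleiss-style observed agreement). The ratings and category count are hypothetical, and the study's exact computation may differ.

```python
def free_marginal_kappa(ratings, n_categories):
    """Randolph's free-marginal multirater kappa.
    `ratings` is a list of per-subject lists of category labels (one per rater).
    Chance agreement is fixed at 1 / n_categories because marginals are 'free'."""
    agreements = []
    for subject in ratings:
        n = len(subject)
        counts = {c: subject.count(c) for c in set(subject)}
        # Fleiss-style proportion of agreeing rater pairs for this subject
        agreements.append((sum(v * v for v in counts.values()) - n) / (n * (n - 1)))
    p_obs = sum(agreements) / len(agreements)
    p_exp = 1.0 / n_categories
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical example: 4 raters classifying 3 videos into 3 movement subgroups
print(free_marginal_kappa([["A", "A", "A", "B"],
                           ["B", "B", "B", "B"],
                           ["C", "C", "A", "C"]], n_categories=3))
```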

  2. Reliability analysis of the F-8 digital fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goodman, H. A.

    1981-01-01

    The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program written in a modular fashion that duplicates the structure of these equations.

  3. Reliability Issues in Stirling Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey; Shah, Ashwin

    2005-01-01

    Stirling power conversion is a potential candidate for use in a Radioisotope Power System (RPS) for space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced requirement for radioactive material. Reliability of an RPS that utilizes Stirling power conversion technology is important in order to ascertain long-term successful performance. Owing to the long lifetime requirement (14 years), it is difficult to perform long-term tests that encompass all the uncertainties involved in the design variables of components and subsystems comprising the RPS. The requirement for uninterrupted performance reliability and related issues are discussed, and some of the critical areas of concern are identified. An overview of the current on-going efforts to understand component life, design variables at the component and system levels, and related sources and nature of uncertainties is also given. The current status of the 110-watt Stirling Radioisotope Generator (SRG110) reliability efforts is described. Additionally, an approach showing the use of past experience on other successfully used power systems to develop a reliability plan for the SRG110 design is outlined.

  4. Reliability Issues in Stirling Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Schreiber, Jeffrey G.

    2004-01-01

    Stirling power conversion is a potential candidate for use in a Radioisotope Power System (RPS) for space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced requirement for radioactive material. Reliability of an RPS that utilizes Stirling power conversion technology is important in order to ascertain long-term successful performance. Owing to the long lifetime requirement (14 years), it is difficult to perform long-term tests that encompass all the uncertainties involved in the design variables of components and subsystems comprising the RPS. The requirement for uninterrupted performance reliability and related issues are discussed, and some of the critical areas of concern are identified. An overview of the current on-going efforts to understand component life, design variables at the component and system levels, and related sources and nature of uncertainties is also given. The current status of the 110-watt Stirling Radioisotope Generator (SRG110) reliability efforts is described. Additionally, an approach showing the use of past experience on other successfully used power systems to develop a reliability plan for the SRG110 design is outlined.

  5. HiRel - Reliability/availability integrated workstation tool

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Dugan, Joanne B.

    1992-01-01

    The HiRel software tool is described and demonstrated by application to the mission avionics subsystem of the Advanced System Integration Demonstrations (ASID) system that utilizes the PAVE PILLAR approach. HiRel marks another accomplishment toward the goal of producing a totally integrated computer-aided design (CAD) workstation design capability. Since a reliability engineer generally represents a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The graphical postprocessor module HARPO makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes. The addition of several powerful HARP modeling engines provides the user with a reliability/availability modeling capability for a wide range of system applications all integrated under a common interactive graphical input-output capability.

  6. TIGER reliability analysis in the DSN

    NASA Technical Reports Server (NTRS)

    Gunn, J. M.

    1982-01-01

    The TIGER algorithm, the inputs to the program and the output are described. TIGER is a computer program designed to simulate a system over a period of time to evaluate system reliability and availability. Results can be used in the Deep Space Network for initial spares provisioning and system evaluation.

  7. Hybrid automated reliability predictor integrated work station (HiREL)

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1991-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.

  8. Ultra-short ion and neutron pulse production

    DOEpatents

    Leung, Ka-Ngo; Barletta, William A.; Kwan, Joe W.

    2006-01-10

    An ion source has an extraction system configured to produce ultra-short ion pulses, i.e. pulses with pulse width of about 1 µs or less, and a neutron source based on the ion source produces correspondingly ultra-short neutron pulses. To form a neutron source, a neutron generating target is positioned to receive an accelerated extracted ion beam from the ion source. To produce the ultra-short ion or neutron pulses, the apertures in the extraction system of the ion source are suitably sized to prevent ion leakage, the electrodes are suitably spaced, and the extraction voltage is controlled. The ion beam current leaving the source is regulated by applying ultra-short voltage pulses of a suitable voltage on the extraction electrode.

  9. Enhancing ultra-high CPV passive cooling using least-material finned heat sinks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micheli, Leonardo, E-mail: lm409@exeter.ac.uk; Mallick, Tapas K., E-mail: T.K.Mallick@exeter.ac.uk; Fernandez, Eduardo F., E-mail: E.Fernandez-Fernandez2@exeter.ac.uk

    2015-09-28

    Ultra-high concentrating photovoltaic (CPV) systems aim to increase the cost-competitiveness of CPV by increasing concentrations over 2000 suns. In this work, the design of a heat sink for ultra-high concentrating photovoltaic (CPV) applications is presented. For the first time, the least-material approach, widely used in electronics to maximize thermal dissipation while minimizing the weight of the heat sink, has been applied in CPV. This method has the potential to further decrease the cost of this technology and to keep the multijunction cell within its operating temperature range. The design procedure is described in the paper, and the results of a thermal simulation are shown to prove the reliability of the solution. A prediction of the costs is also reported: a cost of 0.151 $/Wp is expected for a passive least-material heat sink developed for 4000x applications.

  10. Physiology and Pathophysiology in Ultra-Marathon Running

    PubMed Central

    Knechtle, Beat; Nikolaidis, Pantelis T.

    2018-01-01

    In this overview, we summarize the findings of the literature with regards to physiology and pathophysiology of ultra-marathon running. The number of ultra-marathon races and the number of official finishers considerably increased in the last decades especially due to the increased number of female and age-group runners. A typical ultra-marathoner is male, married, well-educated, and ~45 years old. Female ultra-marathoners account for ~20% of the total number of finishers. Ultra-marathoners are older and have a larger weekly training volume, but run more slowly during training compared to marathoners. Previous experience (e.g., number of finishes in ultra-marathon races and personal best marathon time) is the most important predictor variable for a successful ultra-marathon performance followed by specific anthropometric (e.g., low body mass index, BMI, and low body fat) and training (e.g., high volume and running speed during training) characteristics. Women are slower than men, but the sex difference in performance decreased in recent years to ~10–20% depending upon the length of the ultra-marathon. The fastest ultra-marathon race times are generally achieved at the age of 35–45 years or older for both women and men, and the age of peak performance increases with increasing race distance or duration. An ultra-marathon leads to an energy deficit resulting in a reduction of both body fat and skeletal muscle mass. An ultra-marathon in combination with other risk factors, such as extreme weather conditions (either heat or cold) or the country where the race is held, can lead to exercise-associated hyponatremia. An ultra-marathon can also lead to changes in biomarkers indicating a pathological process in specific organs or organ systems such as skeletal muscles, heart, liver, kidney, immune and endocrine system. These changes are usually temporary, depending on intensity and duration of the performance, and usually normalize after the race. In longer ultra

  11. Physiology and Pathophysiology in Ultra-Marathon Running.

    PubMed

    Knechtle, Beat; Nikolaidis, Pantelis T

    2018-01-01

    In this overview, we summarize the findings of the literature with regards to physiology and pathophysiology of ultra-marathon running. The number of ultra-marathon races and the number of official finishers considerably increased in the last decades especially due to the increased number of female and age-group runners. A typical ultra-marathoner is male, married, well-educated, and ~45 years old. Female ultra-marathoners account for ~20% of the total number of finishers. Ultra-marathoners are older and have a larger weekly training volume, but run more slowly during training compared to marathoners. Previous experience (e.g., number of finishes in ultra-marathon races and personal best marathon time) is the most important predictor variable for a successful ultra-marathon performance followed by specific anthropometric (e.g., low body mass index, BMI, and low body fat) and training (e.g., high volume and running speed during training) characteristics. Women are slower than men, but the sex difference in performance decreased in recent years to ~10-20% depending upon the length of the ultra-marathon. The fastest ultra-marathon race times are generally achieved at the age of 35-45 years or older for both women and men, and the age of peak performance increases with increasing race distance or duration. An ultra-marathon leads to an energy deficit resulting in a reduction of both body fat and skeletal muscle mass. An ultra-marathon in combination with other risk factors, such as extreme weather conditions (either heat or cold) or the country where the race is held, can lead to exercise-associated hyponatremia. An ultra-marathon can also lead to changes in biomarkers indicating a pathological process in specific organs or organ systems such as skeletal muscles, heart, liver, kidney, immune and endocrine system. These changes are usually temporary, depending on intensity and duration of the performance, and usually normalize after the race. In longer ultra

  12. Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.

    PubMed

    Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter

    2018-04-18

    There are over ten classification systems currently used in the staging of hallux rigidus. This results in confusion and inconsistency in radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability using three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph based on clinical experience and knowledge according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. Conclusions: Reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these classification systems show reliability and reproducibility.

  13. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  14. Mass and Reliability System (MaRS)

    NASA Technical Reports Server (NTRS)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight Programs. For space missions, payload is a critical concept; balancing what hardware can be replaced by components versus by Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. The U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a space flight mission. MaRS uses a combination of systems: International Space Station PART for failure data, Vehicle Master Database (VMDB) for ORUs & components, Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic Application. Once populated, the Excel spreadsheet is comprised of information on ISS components including

  15. Time-dependent Reliability of Dynamic Systems using Subset Simulation with Splitting over a Series of Correlated Time Intervals

    DTIC Science & Technology

    2013-08-01

    cost due to potential warranty costs, repairs, and loss of market share. Reliability is the probability that the system will perform its intended... MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort

  16. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence, a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of

  17. Reliability of intracerebral hemorrhage classification systems: A systematic review.

    PubMed

    Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm

    2016-08-01

    Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.

  18. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has been of widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on estimating probability carry uncertainties, cannot reflect the reliability status of the RPS dynamically, and do not support maintenance and troubleshooting. In this paper, a reliability quantitative analysis method based on extenics is proposed for the digital RPS (safety-critical), by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method is capable of estimating the RPS reliability effectively and provides support for maintenance and troubleshooting of the digital RPS.

  19. Sustainable, Reliable Mission-Systems Architecture

    NASA Technical Reports Server (NTRS)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2005-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  20. Sustainable, Reliable Mission-Systems Architecture

    NASA Technical Reports Server (NTRS)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2007-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  1. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment, and environment. The reliability of each individual factor must be analyzed in order to transition gradually to the study of three-factor reliability. Meanwhile, the dynamic relationship among man, machine, and environment should be considered to establish an effective fuzzy evaluation mechanism to truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact, and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment, and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. From the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
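
The final weighting step described above can be sketched as follows: factor-level scores for the human, machine, and environment factors are combined with normalized weights into a single system value. The scores and weights shown are hypothetical; the paper reports only the final weighted result of 86.29.

```python
def weighted_system_score(factor_scores, weights):
    """Weighted aggregation of factor-level reliability scores into a single
    system score, as the final step of a fuzzy comprehensive evaluation.
    Scores and weights below are hypothetical, not the paper's values."""
    total = sum(weights.values())
    return sum(factor_scores[name] * w for name, w in weights.items()) / total

scores = {"human": 84.0, "machine": 90.0, "environment": 85.0}   # illustrative
weights = {"human": 0.4, "machine": 0.35, "environment": 0.25}
print(weighted_system_score(scores, weights))
```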

  2. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  3. Reliability Standards of Complex Engineering Systems

    NASA Astrophysics Data System (ADS)

    Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.

    2017-11-01

    Production and manufacture play an important role in today’s modern society. Industrial production is nowadays characterized by increased and complex communications between its parts. The problem of preventing accidents in a large industrial enterprise becomes especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance. Potential damage caused by an accident at such enterprise may lead to substantial material losses and, in some cases, can even cause a loss of human lives. That is why industrial enterprise functioning reliability is immensely important. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions. The main characteristic here is the stability of its operation. This paper develops the reliability indicator combining the set theory methodology and a state space method. Both are widely used to analyze dynamically developing probability processes. The research also introduces a set of reliability indicators for complex technical systems.

  4. Free space optical ultra-wideband communications over atmospheric turbulence channels.

    PubMed

    Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu

    2010-08-02

    A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.

  5. Eye Tracking Based Control System for Natural Human-Computer Interaction

    PubMed Central

    Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528

  6. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    PubMed

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  7. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  8. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
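
SURE computes upper and lower bounds on death-state probabilities of semi-Markov models; reproducing those bounds is beyond a short sketch. The example below instead shows the constant-rate special case: the probability of reaching an absorbing failure ("death") state in a small pure-Markov model of a triplex processor with recovery, solved with a matrix exponential. The rates and state structure are assumptions for illustration only, not SURE's algorithm.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = 3 good, 1 = 2 good + 1 faulty (recovering), 2 = 2 good (reconfigured),
#         3 = system failure (absorbing "death" state). Rates are illustrative.
lam, mu = 1e-4, 3.6e3          # per-hour failure rate per unit, per-hour recovery rate
Q = np.array([
    [-3 * lam,  3 * lam,        0.0,       0.0],
    [     0.0, -(mu + 2 * lam), mu,        2 * lam],  # 2nd fault before recovery fails the system
    [     0.0,  0.0,           -2 * lam,   2 * lam],
    [     0.0,  0.0,            0.0,       0.0],
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])
p_T = p0 @ expm(Q * 10.0)      # state probabilities after a 10-hour mission
print(p_T[3])                  # probability of reaching the death state
```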

  9. Verification methodology for fault-tolerant, fail-safe computers applied to maglev control computer systems. Final report, July 1991-May 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lala, J.H.; Nagle, G.A.; Harper, R.E.

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev control computer system has been designed using a design-for-validation methodology developed earlier under NASA and SDIO sponsorship for real-time aerospace applications. The present study starts by defining the maglev mission scenario and ends with the definition of a maglev control computer architecture. Key intermediate steps included definitions of functional and dependability requirements, synthesis of two candidate architectures, development of qualitative and quantitative evaluation criteria, and analytical modeling of the dependability characteristics of the two architectures. Finally, the applicability of the design-for-validation methodology was also illustrated by applying it to the German Transrapid TR07 maglev control system.

  10. Computer-aided communication satellite system analysis and optimization

    NASA Technical Reports Server (NTRS)

    Stagl, T. W.; Morgan, N. H.; Morley, R. E.; Singh, J. P.

    1973-01-01

    The capabilities and limitations of the various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. A Satellite Telecommunication Analysis and Modeling Program (STAMP) for costing and sensitivity analysis work in the application of communication satellites to educational development is given. The modifications made to STAMP include: extension of the six-beam capability to eight; addition of generation of multiple beams from a single reflector system with an array of feeds; improved system costing to reflect the time value of money and growth in earth terminal population with time, and to account for various measures of system reliability; inclusion of a model for scintillation at microwave frequencies in the communication link loss model; and an updated technological environment.

  11. Ultra precision and reliable bonding method

    NASA Technical Reports Server (NTRS)

    Gwo, Dz-Hung (Inventor)

    2001-01-01

    The bonding of two materials through hydroxide-catalyzed hydration/dehydration is achieved at room temperature by applying hydroxide ions to at least one of the two bonding surfaces and by placing the surfaces sufficiently close to each other to form a chemical bond between them. The surfaces may be placed sufficiently close to each other by simply placing one surface on top of the other. A silicate material may also be used as a filling material to help fill gaps between the surfaces caused by surface figure mismatches. A powder of a silica-based or silica-containing material may also be used as an additional filling material. The hydroxide-catalyzed bonding method forms bonds which are not only as precise and transparent as optical contact bonds, but also as strong and reliable as high-temperature frit bonds. The hydroxide-catalyzed bonding method is also simple and inexpensive.

  12. European Workshop Industrial Computer Science Systems approach to design for safety

    NASA Technical Reports Server (NTRS)

    Zalewski, Janusz

    1992-01-01

    This paper presents guidelines on designing systems for safety, developed by the Technical Committee 7 on Reliability and Safety of the European Workshop on Industrial Computer Systems. The focus is on complementing the traditional development process by adding the following four steps: (1) overall safety analysis; (2) analysis of the functional specifications; (3) designing for safety; (4) validation of design. Quantitative assessment of safety is possible by means of a modular questionnaire covering various aspects of the major stages of system development.

  13. Accuracy, intra- and inter-unit reliability, and comparison between GPS and UWB-based position-tracking systems used for time-motion analyses in soccer.

    PubMed

    Bastida Castillo, Alejandro; Gómez Carmona, Carlos D; De la Cruz Sánchez, Ernesto; Pino Ortega, José

    2018-05-01

    There is interest in the accuracy and inter-unit reliability of position-tracking systems used to monitor players. Research into this technology, although relatively recent, has grown exponentially in recent years, and it is difficult to find a professional team sport that does not use at least Global Positioning System (GPS) technology. The aim of this study is to determine the accuracy of both GPS-based and Ultra Wide Band (UWB)-based systems on a soccer field and their inter- and intra-unit reliability. A secondary aim is to compare them for practical applications in sport science. Following institutional ethical approval and familiarization, 10 healthy and well-trained former soccer players (20 ± 1.6 years, 1.76 ± 0.08 m, and 69.5 ± 9.8 kg) performed three course tests: (i) a linear course, (ii) a circular course, and (iii) a zig-zag course, all using UWB and GPS technologies. The average speed and distance covered were compared with timing gates and the real distance as references. The UWB technology showed better accuracy (bias: 0.57-5.85%), test-retest reliability (%TEM: 1.19), and inter-unit reliability (bias: 0.18) in determining distance covered than the GPS technology (bias: 0.69-6.05%; %TEM: 1.47; bias: 0.25) overall. Also, UWB showed better results (bias: 0.09; ICC: 0.979; bias: 0.01) for mean velocity measurement than GPS (bias: 0.18; ICC: 0.951; bias: 0.03).
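
Two of the reliability statistics quoted in the record, percentage bias against a criterion and the typical error of measurement expressed as a percentage (%TEM), can be computed as in the sketch below. The exact formulations used in the study are not given in the abstract, and the distance values here are hypothetical.

```python
import statistics as st

def percent_bias(measured, criterion):
    """Mean percentage difference of a device's readings from criterion values."""
    return st.mean((m - c) / c * 100.0 for m, c in zip(measured, criterion))

def percent_tem(trial1, trial2):
    """Typical error of measurement between two trials, as a percentage of the
    grand mean (one common test-retest formulation: sd(differences) / sqrt(2))."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    tem = st.stdev(diffs) / (2 ** 0.5)
    return tem / st.mean(trial1 + trial2) * 100.0

# Hypothetical distances (m) from a tracking unit vs. the known course length
print(percent_bias([402.3, 399.1, 401.8], [400.0, 400.0, 400.0]))
print(percent_tem([402.3, 399.1, 401.8], [401.0, 398.5, 402.6]))
```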

  14. Autonomous safety and reliability features of the K-1 avionics system

    NASA Astrophysics Data System (ADS)

    Mueller, George E.; Kohrs, Dick; Bailey, Richard; Lai, Gary

    2004-03-01

    Kistler Aerospace Corporation is developing the K-1, a fully reusable, two-stage-to-orbit launch vehicle. Both stages return to the launch site using parachutes and airbags. Initial flight operations will occur from Woomera, Australia. K-1 guidance is performed autonomously. Each stage of the K-1 employs a triplex, fault tolerant avionics architecture, including three fault tolerant computers and three radiation hardened Embedded GPS/INS units with a hardware voter. The K-1 has an Integrated Vehicle Health Management (IVHM) system on each stage residing in the three vehicle computers based on similar systems in commercial aircraft. During first-stage ascent, the IVHM system performs an Instantaneous Impact Prediction (IIP) calculation 25 times per second, initiating an abort in the event the vehicle is outside a predetermined safety corridor for at least 3 consecutive calculations. In this event, commands are issued to terminate thrust, separate the stages, dump all propellant in the first-stage, and initiate a normal landing sequence. The second-stage flight computer calculates its ability to reach orbit along its state vector, initiating an abort sequence similar to the first stage if it cannot. On a nominal mission, following separation, the second-stage also performs calculations to assure its impact point is within a safety corridor. The K-1's guidance and control design is being tested through simulation with hardware-in-the-loop at Draper Laboratory. Kistler's verification strategy assures reliable and safe operation of the K-1.

  15. CONSTRAINTS ON MACHO DARK MATTER FROM COMPACT STELLAR SYSTEMS IN ULTRA-FAINT DWARF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Timothy D.

    2016-06-20

    I show that a recently discovered star cluster near the center of the ultra-faint dwarf galaxy Eridanus II provides strong constraints on massive compact halo objects (MACHOs) of ≳5 M⊙ as the main component of dark matter. MACHO dark matter will dynamically heat the cluster, driving it to larger sizes and higher velocity dispersions until it dissolves into its host galaxy. The stars in compact ultra-faint dwarf galaxies themselves will be subject to the same dynamical heating; the survival of at least 10 such galaxies places independent limits on MACHO dark matter of masses ≳10 M⊙. Both Eri II’s cluster and the compact ultra-faint dwarfs are characterized by stellar masses of just a few thousand M⊙ and half-light radii of 13 pc (for the cluster) and ∼30 pc (for the ultra-faint dwarfs). These systems close the ∼20–100 M⊙ window of allowed MACHO dark matter and combine with existing constraints from microlensing, wide binaries, and disk kinematics to rule out dark matter composed entirely of MACHOs from ∼10⁻⁷ M⊙ up to arbitrarily high masses.

  16. Customer-Driven Reliability Models for Multistate Coherent Systems

    DTIC Science & Technology

    1992-01-01

    [Report documentation page residue; recoverable information: "Customer-Driven Reliability Models for Multistate Coherent Systems," dissertation (Boedigheimer), University of Oklahoma Graduate College, Norman, Oklahoma, 1992. No abstract is available.]

  17. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.

  18. Ultra-high speed vacuum pump system with first stage turbofan and second stage turbomolecular pump

    DOEpatents

    Jostlein, Hans

    2006-04-04

    An ultra-high speed vacuum pump evacuation system includes a first stage ultra-high speed turbofan and a second stage conventional turbomolecular pump. The turbofan is either connected in series to a chamber to be evacuated, or is optionally disposed entirely within the chamber. The turbofan employs large diameter rotor blades operating at high linear blade velocity to impart an ultra-high pumping speed to a fluid. The second stage turbomolecular pump is fluidly connected downstream from the first stage turbofan. In operation, the first stage turbofan operates in a pre-existing vacuum, with the fluid asserting only small axial forces upon the rotor blades. The turbofan imparts a velocity to fluid particles towards an outlet at a high volume rate, but moderate compression ratio. The second stage conventional turbomolecular pump then compresses the fluid to pressures for evacuation by a roughing pump.

  19. Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback

    NASA Astrophysics Data System (ADS)

    Zhang, Wenle; Liu, Jianchang

    2016-04-01

    This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
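
    The claimed speed-up can be illustrated numerically: if a routine consensus iteration x(k+1) = W x(k) converges at a rate set by the second-largest eigenvalue modulus of W, then an update that effectively applies W an extra q times per step converges with that factor raised to the power q + 1. The toy below is only an illustration of that scaling, not the paper's predictive protocol, which uses predicted outputs rather than extra communication rounds.

    ```python
    # Toy comparison of routine consensus vs. an update that is q extra steps "ahead".
    import numpy as np

    n, q = 4, 2
    P = np.roll(np.eye(n), 1, axis=1)          # directed ring: contains a spanning tree
    W = 0.5 * np.eye(n) + 0.5 * P              # row-stochastic weight matrix
    Wq = np.linalg.matrix_power(W, q + 1)      # effective update of the predictive scheme (toy model)

    x0 = np.array([3.0, -1.0, 7.0, 2.0])
    x_plain, x_pred = x0.copy(), x0.copy()
    for k in range(8):
        x_plain, x_pred = W @ x_plain, Wq @ x_pred
        print(k, np.ptp(x_plain), np.ptp(x_pred))   # disagreement (max - min) of each scheme
    ```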

  20. Intelligent neuroprocessors for in-situ launch vehicle propulsion systems health management

    NASA Technical Reports Server (NTRS)

    Gulati, S.; Tawel, R.; Thakoor, A. P.

    1993-01-01

    The efficacy of existing on-board propulsion health management systems (HMS) is severely impacted by computational limitations (e.g., low sampling rates); paradigmatic limitations (e.g., low-fidelity logic/parameter redlining only, false alarms due to noisy/corrupted sensor signatures, preprogrammed diagnostics only); and telemetry bandwidth limitations on space/ground interactions. Ultra-compact, lightweight, adaptive neural networks with massively parallel, asynchronous, rapidly reconfigurable, and fault-tolerant information processing properties have already demonstrated significant potential for in-flight diagnostic analyses and resource allocation with reduced ground dependence. In particular, they can automatically exploit correlation effects across multiple sensor streams (plume analyzers, flow meters, vibration detectors, etc.) to detect anomaly signatures that cannot be determined from any single sensor. Furthermore, neural networks have already demonstrated the potential to support real-time fault recovery in vehicle subsystems by adaptively regulating combustion mixture/power subsystems and optimizing resource utilization under degraded conditions. A class of high-performance neuroprocessors developed at JPL that has demonstrated potential for next-generation HMS for a family of space transportation vehicles envisioned for the next few decades, including HLLV, NLS, and the Space Shuttle, is presented. Of fundamental interest are intelligent neuroprocessors for real-time plume analysis, optimizing combustion mixture ratio, and feedback to hydraulic and pneumatic control systems. This class includes concurrently asynchronous, reprogrammable, nonvolatile, analog neural processors with high-speed, high-bandwidth electronic/optical I/O interfaces, with special emphasis on NASA's unique requirements in terms of performance, reliability, ultra-high density, ultra-compactness, ultra-light weight, radiation-hardened devices, power stringency

  1. RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
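
    For groups with unequal component probabilities, k-out-of-n reliability can be evaluated with the standard dynamic-programming recursion in the spirit of the cited Barlow & Heidtmann paper. The sketch below is a generic version of that computation, not RELAV's own implementation.

    ```python
    def k_out_of_n_reliability(k, p):
        """P(at least k of the n independent components work), unequal p[i] allowed."""
        n = len(p)
        dist = [1.0] + [0.0] * n        # dist[j] = P(exactly j of the components seen so far work)
        for pi in p:
            for j in range(n, 0, -1):   # update from high j down so dist[j-1] is still the old value
                dist[j] = dist[j] * (1.0 - pi) + dist[j - 1] * pi
            dist[0] *= (1.0 - pi)
        return sum(dist[k:])

    # Example: a 2-out-of-3 group with unequal item reliabilities.
    print(k_out_of_n_reliability(2, [0.90, 0.95, 0.99]))   # ~0.9936
    ```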

  2. Toward reliable characterization of functional homogeneity in the human brain: Preprocessing, scan duration, imaging resolution and computational space

    PubMed Central

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.

    2013-01-01

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated the ReHo’s TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497
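
    ReHo is conventionally computed as Kendall's coefficient of concordance (KCC) over the time series of a voxel and its neighbours (Zang et al., 2004). A minimal volume-based sketch, ignoring tie correction and using a made-up 27-voxel neighbourhood of synthetic data, is:

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def reho(neighbourhood_ts):
        """Kendall's W for a (K, T) array of K neighbouring voxels' time series."""
        K, T = neighbourhood_ts.shape
        ranks = np.apply_along_axis(rankdata, 1, neighbourhood_ts)  # rank each voxel's series in time
        R = ranks.sum(axis=0)                                       # rank sums per time point
        S = ((R - R.mean()) ** 2).sum()
        return 12.0 * S / (K ** 2 * (T ** 3 - T))

    ts = np.random.default_rng(0).standard_normal((27, 150))        # 27 voxels, 150 volumes (synthetic)
    print(reho(ts))    # near 1/27 for independent noise, approaches 1 for perfectly concordant series
    ```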

  3. Test-retest reliability and comparability of paper and computer questionnaires for the Finnish version of the Tampa Scale of Kinesiophobia.

    PubMed

    Koho, P; Aho, S; Kautiainen, H; Pohjolainen, T; Hurri, H

    2014-12-01

    To estimate the internal consistency, test-retest reliability and comparability of paper and computer versions of the Finnish version of the Tampa Scale of Kinesiophobia (TSK-FIN) among patients with chronic pain. In addition, patients' personal experiences of completing both versions of the TSK-FIN and preferences between these two methods of data collection were studied. Test-retest reliability study. Paper and computer versions of the TSK-FIN were completed twice on two consecutive days. The sample comprised 94 consecutive patients with chronic musculoskeletal pain participating in a pain management or individual rehabilitation programme. The group rehabilitation design consisted of physical and functional exercises, evaluation of the social situation, psychological assessment of pain-related stress factors, and personal pain management training in order to regain overall function and mitigate the inconvenience of pain and fear-avoidance behaviour. The mean TSK-FIN score was 37.1 [standard deviation (SD) 8.1] for the computer version and 35.3 (SD 7.9) for the paper version. The mean difference between the two versions was 1.9 (95% confidence interval 0.8 to 2.9). Test-retest reliability was 0.89 for the paper version and 0.88 for the computer version. Internal consistency was considered to be good for both versions. The intraclass correlation coefficient for comparability was 0.77 (95% confidence interval 0.66 to 0.85), indicating substantial reliability between the two methods. Both versions of the TSK-FIN demonstrated substantial intertest reliability, good test-retest reliability, good internal consistency and acceptable limits of agreement, suggesting their suitability for clinical use. However, subjects tended to score higher when using the computer version. As such, in an ideal situation, data should be collected in a similar manner throughout the course of rehabilitation or clinical research. Copyright © 2014 Chartered Society of Physiotherapy. Published

  4. Ionization-Assisted Getter Pumping for Ultra-Stable Trapped Ion Frequency Standards

    NASA Technical Reports Server (NTRS)

    Tjoelker, Robert L.; Burt, Eric A.

    2010-01-01

    A method eliminates (or recovers from) residual methane buildup in getter-pumped atomic frequency standard systems by applying ionizing assistance. Ultra-high stability trapped ion frequency standards for applications requiring very high reliability, and/or low power and mass (both for ground-based and space-based platforms) benefit from using sealed vacuum systems. These systems require careful material selection and system processing (cleaning and high-temperature bake-out). Even under the most careful preparation, residual hydrogen outgassing from vacuum chamber walls typically limits the base pressure. Non-evaporable getter pumps (NEGs) provide a convenient pumping option for sealed systems because of low mass and volume, and no power once activated. An ion gauge in conjunction with a NEG can be used to provide a low mass, low-power method for avoiding the deleterious effects of methane buildup in high-performance frequency standard vacuum systems.

  5. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
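
    A rough flavour of the approach can be reproduced with standard tools: generate a Barabási-Albert topology and Monte-Carlo a simple connectivity-based proxy under independent line failures. The failure model and the "index" below are toy stand-ins, not the reliability index or propagation model used by the authors.

    ```python
    import random
    import networkx as nx

    def connectivity_index(n=300, m=2, p_line_fail=0.05, trials=200, seed=1):
        """Expected fraction of buses left in the giant component when each line
        fails independently -- a toy proxy, not the authors' reliability index."""
        rng = random.Random(seed)
        G = nx.barabasi_albert_graph(n, m, seed=seed)
        kept = 0.0
        for _ in range(trials):
            H = G.copy()
            H.remove_edges_from([e for e in G.edges if rng.random() < p_line_fail])
            kept += len(max(nx.connected_components(H), key=len)) / n
        return kept / trials

    print(connectivity_index())
    ```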

  6. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  7. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High-quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, so ultra-high-speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high-speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high-frequency illumination and the bucket detector requires nanosecond trigger precision, so developing the synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high-precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
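
    Independent of the synchronization hardware, the reconstruction step in computational ghost imaging is a correlation between the bucket-detector signal and the known illumination patterns. A self-contained numerical sketch (synthetic 32 x 32 scene and random patterns, no claim to match the authors' setup) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = W = 32
    scene = np.zeros((H, W)); scene[8:24, 12:20] = 1.0        # unknown object (toy target)
    M = 4000                                                  # number of SLM patterns / measurements
    patterns = rng.random((M, H, W))                          # known illumination patterns
    bucket = (patterns * scene).sum(axis=(1, 2))              # single-pixel photodiode readings

    # Correlate bucket-signal fluctuations with pattern fluctuations to recover the scene.
    recon = np.tensordot(bucket - bucket.mean(),
                         patterns - patterns.mean(axis=0), axes=1) / M
    print(np.corrcoef(recon.ravel(), scene.ravel())[0, 1])    # similarity to the true scene
    ```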

  8. Sensory System for Implementing a Human—Computer Interface Based on Electrooculography

    PubMed Central

    Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena

    2011-01-01

    This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes. PMID:22346579

  9. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis result that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided-one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  10. Metroliner Auxiliary Power Electrical System Reliability Study

    DOT National Transportation Integrated Search

    1971-06-01

    The reliability of the electrical system of any vehicle is greatly affected by the way the system is configured. The propulsion and braking systems of a train must be unaffected by failures occurring in the nonessential power areas. With these criter...

  11. Operator adaptation to changes in system reliability under adaptable automation.

    PubMed

    Chavaillaz, Alain; Sauer, Juergen

    2017-09-01

    This experiment examined how operators coped with a change in system reliability between training and testing. Forty participants were trained for 3 h on a complex process control simulation modelling six levels of automation (LOA). In training, participants either experienced a high- (100%) or low-reliability system (50%). The impact of training experience on operator behaviour was examined during a 2.5 h testing session, in which participants either experienced a high- (100%) or low-reliability system (60%). The results showed that most operators did not often switch between LOA. Most chose an LOA that relieved them of most tasks but maintained their decision authority. Training experience did not have a strong impact on the outcome measures (e.g. performance, complacency). Low system reliability led to decreased performance and self-confidence. Furthermore, complacency was observed under high system reliability. Overall, the findings suggest benefits of adaptable automation because it accommodates different operator preferences for LOA. Practitioner Summary: The present research shows that operators can adapt to changes in system reliability between training and testing sessions. Furthermore, it provides evidence that each operator has his/her preferred automation level. Since this preference varies strongly between operators, adaptable automation seems to be suitable to accommodate these large differences.

  12. Reliability of light microscopy and a computer-assisted replica measurement technique for evaluating the fit of dental copings.

    PubMed

    Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H; Luthardt, Ralph Gunnar; Kuhn, Katharina

    2018-02-01

    The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. To modify the CAD surface model using different parameters (data density; enlargement in different directions), varying fit was created. Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman (bias) analyses]: mean of the differences (bias) between two measurements [the closer to zero the mean (bias) is, the higher the agreement between the two measurements] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. Bland-Altman bias was -15.7 to 3.5 μm for LMMs and -3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (-15.7 μm/-7.6 μm). With the intra-rater reliability and agreement (repeatability) for LMMs, the ICC was 0.970-0.998 and bias was -1.3 to 2.3 μm. LMMs showed lower interrater reliability and agreement at the marginal measurement points than CASMs, which indicates a more subjective influence with LMMs at these measurement points. The values, however, were still clinically acceptable. LMMs showed very high intra
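
    The two agreement statistics used here are straightforward to reproduce. Below is a short sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures) and of the Bland-Altman bias with 95% limits of agreement, run on made-up gap measurements; it is not the study's analysis script.

    ```python
    import numpy as np

    def icc_2_1(data):
        """ICC(2,1) for an (n_subjects, k_raters) array."""
        data = np.asarray(data, float)
        n, k = data.shape
        grand = data.mean()
        msr = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subjects mean square
        msc = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-raters mean square
        sse = ((data - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement methods."""
        d = np.asarray(a, float) - np.asarray(b, float)
        bias, spread = d.mean(), 1.96 * d.std(ddof=1)
        return bias, bias - spread, bias + spread

    gaps = np.array([[52.1, 50.8], [61.4, 60.9], [47.3, 49.0], [55.0, 54.2]])  # µm, hypothetical
    print(icc_2_1(gaps), bland_altman(gaps[:, 0], gaps[:, 1]))
    ```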

  13. Reliability of light microscopy and a computer-assisted replica measurement technique for evaluating the fit of dental copings

    PubMed Central

    Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H.; LUTHARDT, Ralph Gunnar; Kuhn, Katharina

    2018-01-01

    Abstract The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Material and Methods Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. To modify the CAD surface model using different parameters (data density; enlargement in different directions), varying fit was created. Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman (bias) analyses]: mean of the differences (bias) between two measurements [the closer to zero the mean (bias) is, the higher the agreement between the two measurements] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). Results For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. Bland-Altman bias was −15.7 to 3.5 μm for LMMs and −3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (-15.7 μm/-7.6 μm). With the intra-rater reliability and agreement (repeatability) for LMMs, the ICC was 0.970-0.998 and bias was −1.3 to 2.3 μm. Conclusion LMMs showed lower interrater reliability and agreement at the marginal measurement points than CASMs, which indicates a more subjective influence with LMMs at these measurement points. The values, however, were still

  14. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept to structure software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  15. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed
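
    The building block inside any RBDO formulation is a reliability analysis of a limit state under random inputs. The sketch below shows that step in its crudest Monte Carlo form for an invented load/capacity limit state; the dissertation's unilevel and decoupled formulations embed much more efficient FORM-type analyses inside the optimizer, which this sketch does not attempt to reproduce.

    ```python
    # Crude Monte Carlo estimate of the failure probability of a design d for a
    # simple limit state g <= 0 under random inputs (illustrative numbers only).
    import numpy as np

    def failure_probability(d, n_samples=200_000, seed=0):
        rng = np.random.default_rng(seed)
        load = rng.normal(10.0, 2.0, n_samples)          # random load
        capacity = rng.normal(4.0 * d, 0.5, n_samples)   # capacity grows with design variable d
        g = capacity - load                              # limit state: failure when g <= 0
        return np.mean(g <= 0.0)

    for d in (2.5, 3.0, 3.5):
        print(d, failure_probability(d))                 # designer trades cost (larger d) against risk
    ```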

  16. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  17. Assessing system reliability and allocating resources: a bayesian approach that integrates multi-level data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graves, Todd L; Hamada, Michael S

    2008-01-01

    Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We apply these tools to examples that were previously solvable only through the use of ingenious approximations, and use genetic algorithms to guide resource allocation.
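
    The multi-level idea can be miniaturized to component-level Beta-Binomial updates whose posteriors are propagated to a series system by Monte Carlo. The priors, test counts, and system structure below are invented for illustration and are far simpler than the integrated model in the report.

    ```python
    # Minimal sketch: combine component test data with Beta priors (expert knowledge)
    # and propagate the posteriors to a series-system reliability estimate.
    import numpy as np

    rng = np.random.default_rng(42)

    # (successes, trials, prior_a, prior_b) per component -- illustrative numbers only.
    components = [(48, 50, 2.0, 1.0), (19, 20, 1.0, 1.0), (97, 100, 5.0, 1.0)]

    draws = np.column_stack([
        rng.beta(a + s, b + (t - s), size=20_000) for s, t, a, b in components
    ])
    system = draws.prod(axis=1)        # series system: all components must work
    print("posterior mean:", system.mean())
    print("90% credible interval:", np.percentile(system, [5, 95]))
    ```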

  18. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  19. Efficient and reliable characterization of the corticospinal system using transcranial magnetic stimulation.

    PubMed

    Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark

    2014-06-01

    The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
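
    The Boltzmann sigmoid and the curve-fit step can be sketched as follows. The data are synthetic, and the parameterization (maximal MEP, half-maximal intensity I50, slope parameter k) is one common form rather than necessarily the exact variant used in the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(intensity, mep_max, i50, k):
        """Recruitment curve: MEP amplitude vs. stimulus intensity (one common form)."""
        return mep_max / (1.0 + np.exp((i50 - intensity) / k))

    # 40 pulses at varying intensity (% maximum stimulator output) -- synthetic data.
    rng = np.random.default_rng(1)
    intensity = rng.uniform(35, 80, size=40)
    mep = boltzmann(intensity, 2.5, 55.0, 4.0) * rng.lognormal(0.0, 0.25, size=40)

    (mep_max, i50, k), _ = curve_fit(boltzmann, intensity, mep, p0=[mep.max(), 55.0, 5.0])
    print(mep_max, i50, k)            # maximal MEP size, half-maximal intensity, slope parameter
    ```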

  20. O-Ring sealing arrangements for ultra-high vacuum systems

    DOEpatents

    Kim, Chang-Kyo; Flaherty, Robert

    1981-01-01

    An all metal reusable O-ring sealing arrangement for sealing two concentric tubes in an ultra-high vacuum system. An O-ring of a heat recoverable alloy such as Nitinol is concentrically positioned between protruding sealing rings of the concentric tubes. The O-ring is installed between the tubes while in a stressed martensitic state and is made to undergo a thermally induced transformation to an austenitic state. During the transformation the O-ring expands outwardly and contracts inwardly toward a previously sized austenitic configuration, thereby sealing against the protruding sealing rings of the concentric tubes.

  1. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of distributed operating systems, the distributed file system has become an important part of the operating system, and peer-to-peer (P2P) networking is a reliable way to share files in a distributed operating system. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a network in which peers share the network workload and other related tasks; it can also be a temporary connection, such as a group of computers linked over a USB (Universal Serial Bus) port to enable disk or file sharing. Currently, P2P requires a network specifically designed for it, while web browsers now play a large role in everyday computing. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, identify performance bottlenecks, and aim to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  2. Toward reliable characterization of functional homogeneity in the human brain: preprocessing, scan duration, imaging resolution and computational space.

    PubMed

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F Xavier; Milham, Michael P

    2013-01-15

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test-retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated the ReHo's TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Food systems transformations, ultra-processed food markets and the nutrition transition in Asia.

    PubMed

    Baker, Phillip; Friel, Sharon

    2016-12-03

    Attracted by their high economic growth rates, young and growing populations, and increasingly open markets, transnational food and beverage corporations (TFBCs) are targeting Asian markets with vigour. Simultaneously, the consumption of ultra-processed foods high in fat, salt and glycaemic load is increasing in the region. Evidence demonstrates that TFBCs can leverage their market power to shape food systems in ways that alter the availability, price, nutritional quality, desirability and ultimately consumption of such foods. This paper describes recent changes in Asian food systems driven by TFBCs in the retail, manufacturing and food service sectors and considers the implications for population nutrition. Market data for each sector was sourced from Euromonitor International for four lower-middle income, three upper-middle income and five high-income Asian countries. Descriptive statistics were used to describe trends in ultra-processed food consumption (2000-2013), packaged food retail distribution channels (1999-2013), 'market transnationalization' defined as the market share held by TFBCs relative to domestic firms (2004-2013), and 'market concentration' defined as the market share and thus market power held by the four leading firms (2004-2013) in each market. Ultra-processed food sales have increased rapidly in most middle-income countries. Carbonated soft drinks were the leading product category, in which Coca-Cola and PepsiCo had a regional oligopoly. Supermarkets, hypermarkets and convenience stores were becoming increasingly dominant as distribution channels for packaged foods throughout the region. Market concentration was increasing in the grocery retail sector in all countries. Food service sales are increasing in all countries, led by McDonald's and Yum! Brands. However, in all three sectors TFBCs face strong competition from Asian firms. Overall, the findings suggest that market forces are likely to be significant but variable drivers of Asia

  4. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  5. A reliable sewage quality abnormal event monitoring system.

    PubMed

    Li, Tianling; Winnel, Melissa; Lin, Hao; Panther, Jared; Liu, Chang; O'Halloran, Roger; Wang, Kewen; An, Taicheng; Wong, Po Keung; Zhang, Shanqing; Zhao, Huijun

    2017-09-15

    With closing water loop through purified recycled water, wastewater becomes a part of source water, requiring reliable wastewater quality monitoring system (WQMS) to manage wastewater source and mitigate potential health risks. However, the development of reliable WQMS is fatally constrained by severe contamination and biofouling of sensors due to the hostile analytical environment of wastewaters, especially raw sewages, that challenges the limit of existing sensing technologies. In this work, we report a technological solution to enable the development of WQMS for real-time abnormal event detection with high reliability and practicality. A vectored high flow hydrodynamic self-cleaning approach and a dual-sensor self-diagnostic concept are adopted for WQMS to effectively encounter vital sensor failing issues caused by contamination and biofouling and ensure the integrity of sensing data. The performance of the WQMS has been evaluated over a 3-year trial period at different sewage catchment sites across three Australian states. It has demonstrated that the developed WQMS is capable of continuously operating in raw sewage for a prolonged period up to 24 months without maintenance and failure, signifying the high reliability and practicality. The demonstrated WQMS capability to reliably acquire real-time wastewater quality information leaps forward the development of effective wastewater source management system. The reported self-cleaning and self-diagnostic concepts should be applicable to other online water quality monitoring systems, opening a new way to encounter the common reliability and stability issues caused by sensor contamination and biofouling. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. "Reliability Of Fiber Optic Lans"

    NASA Astrophysics Data System (ADS)

    Coden, Michael; Scholl, Frederick; Hatfield, W. Bryan

    1987-02-01

    Fiber optic Local Area Network Systems are being used to interconnect increasing numbers of nodes. These nodes may include office computer peripherals and terminals, PBX switches, process control equipment and sensors, automated machine tools and robots, and military telemetry and communications equipment. The extensive shared base of capital resources in each system requires that the fiber optic LAN meet stringent reliability and maintainability requirements. These requirements are met by proper system design and by suitable manufacturing and quality procedures at all levels of a vertically integrated manufacturing operation. We will describe the reliability and maintainability of Codenoll's passive star based systems. These include LAN systems compatible with Ethernet (IEEE 802.3) and MAP (IEEE 802.4), and software compatible with IBM Token Ring (IEEE 802.5). No single point of failure exists in this system architecture.

  7. Ultra-Wideband Tracking System Design for Relative Navigation

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David; Arndt, Dickey; Ngo, Phong; Dekome, Kent; Dusl, John

    2011-01-01

    This presentation briefly discusses a design effort for a prototype ultra-wideband (UWB) time-difference-of-arrival (TDOA) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being designed for use in localization and navigation of a rover in a GPS-deprived environment for surface missions. In one application enabled by the UWB tracking, a robotic vehicle carrying equipment can autonomously follow a crewed rover from work site to work site so that resources can be carried from one landing mission to the next, thereby saving up-mass. The UWB Systems Group at JSC has developed a UWB TDOA High Resolution Proximity Tracking System which can achieve sub-inch tracking accuracy of a target within the radius of the tracking baseline [1]. By extending the tracking capability beyond the radius of the tracking baseline, a tracking system is being designed to enable relative navigation between two vehicles for surface missions. A prototype UWB TDOA tracking system has been designed, implemented, tested, and proven feasible for relative navigation of robotic vehicles. Future work includes testing the system with the application code to increase the tracking update rate and evaluating the linear tracking baseline to improve the flexibility of antenna mounting on the following vehicle.
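
    A TDOA position fix of the kind such a system produces reduces to a small nonlinear least-squares problem: each receiver pair constrains the target to a hyperboloid, and the solver finds the point that best satisfies all of them. A generic sketch with a made-up anchor geometry (not JSC's implementation) is:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 0.299792458          # speed of light, m/ns

    def solve_tdoa(anchors, tdoa_ns, x0):
        """anchors: (N, 3) receiver positions (m); tdoa_ns: (N-1,) arrival-time
        differences of each receiver relative to anchors[0], in nanoseconds."""
        def residuals(x):
            d = np.linalg.norm(anchors - x, axis=1)
            return (d[1:] - d[0]) - C * np.asarray(tdoa_ns)
        return least_squares(residuals, x0).x

    # Simulated check: four anchors, one target, noiseless time differences.
    anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 3]], float)
    target = np.array([4.0, 3.0, 1.0])
    d = np.linalg.norm(anchors - target, axis=1)
    tdoa = (d[1:] - d[0]) / C
    print(solve_tdoa(anchors, tdoa, x0=np.array([5.0, 5.0, 1.0])))
    ```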

  8. Application of Magnetic Suspension and Balance Systems to Ultra-High Reynolds Number Facilities

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.

    1996-01-01

    The current status of wind tunnel magnetic suspension and balance system development is briefly reviewed. Technical work currently underway at NASA Langley Research Center is detailed, where it relates to the ultra-high Reynolds number application. The application itself is addressed, concluded to be quite feasible, and broad design recommendations given.

  9. Rhabdomyolysis and exercise-associated hyponatremia in ultra-bikers and ultra-runners.

    PubMed

    Chlíbková, Daniela; Knechtle, Beat; Rosemann, Thomas; Tomášková, Ivana; Novotný, Jan; Žákovská, Alena; Uher, Tomáš

    2015-01-01

    Exercise-associated hyponatremia (EAH), rhabdomyolysis and renal failure appear to be a unique problem in ultra-endurance racers. We investigated the combined occurrence of EAH and rhabdomyolysis in seven different ultra-endurance races and disciplines (i.e. multi-stage mountain biking, 24-h mountain biking, 24-h ultra-running and 100-km ultra-running). Two (15.4%) ultra-runners (a man and a woman) from the hyponatremic ultra-athletes (n = 13) and four (4%) ultra-runners (four men) from the normonatremic group (n = 100) showed rhabdomyolysis with elevated blood creatine kinase (CK) levels > 10,000 U/L, without the development of renal failure or the need for medical treatment. Post-race creatine kinase and plasma and urine creatinine significantly increased, while plasma [Na(+)] and creatinine clearance decreased, in hyponatremic and normonatremic athletes, respectively. The percentage increase of CK was higher in the hyponatremic compared to the normonatremic group (P < 0.05). Post-race CK levels were higher in ultra-runners compared to mountain bikers (P < 0.01), in faster normonatremic (P < 0.05) and older and more experienced hyponatremic ultra-athletes (P < 0.05). In all finishers, pre-race plasma [K(+)] was related to post-race CK (P < 0.05). Hyponatremic ultra-athletes tended to develop exercise-induced rhabdomyolysis more frequently than normonatremic ultra-athletes. Ultra-runners tended to develop rhabdomyolysis more frequently than mountain bikers. We found no association between post-race plasma [Na(+)] and CK concentration in both hypo- and normonatremic ultra-athletes.

  10. Circuit design advances for ultra-low power sensing platforms

    NASA Astrophysics Data System (ADS)

    Wieckowski, Michael; Dreslinski, Ronald G.; Mudge, Trevor; Blaauw, David; Sylvester, Dennis

    2010-04-01

    This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained computing segments including energy constrained sensing platforms. This paper explores the barriers to the adoption of NTC and describes current work aimed at overcoming these obstacles in the circuit design space.

  11. A reliability analysis tool for SpaceWire network

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of a SpaceWire network. This paper deals with the problem of reliability modeling and analysis for SpaceWire networks. Following the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task populates a system reliability matrix, and the reliability of the network system is then deduced by integrating all the reliability indexes in that matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and for multi-path task reliability are also implemented. Using this tool, we analyze several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
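
    The task-based idea reduces, in its simplest form, to series/parallel probability algebra: a route works only if every unit on it works, and a task succeeds if at least one of its redundant routes works. A toy sketch with invented component reliabilities (not the tool's actual algorithm) is:

    ```python
    import math

    def route_reliability(component_reliabilities):
        """A route works only if every node/link on it works (independence assumed)."""
        return math.prod(component_reliabilities)

    def task_reliability(routes):
        """routes: list of redundant routes, each a list of component reliabilities."""
        return 1.0 - math.prod(1.0 - route_reliability(r) for r in routes)

    # Prime and redundant route between an instrument and the mass memory (illustrative numbers).
    prime     = [0.999, 0.9995, 0.998]          # node, link, router
    redundant = [0.999, 0.999, 0.998]
    print(task_reliability([prime, redundant]))
    ```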

  12. Making real-time reactive systems reliable

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Wood, Mark

    1990-01-01

    A reactive system is characterized by a control program that interacts with an environment (or controlled program). The control program monitors the environment and reacts to significant events by sending commands to the environment. This structure is quite general: not only are most embedded real-time systems reactive systems, but so are monitoring and debugging systems and distributed application management systems. Since reactive systems are usually long running and may control physical equipment, fault tolerance is vital. This research tries to understand the principal issues of fault tolerance in real-time reactive systems and to build tools that allow a programmer to design reliable, real-time reactive systems. In order to make real-time reactive systems reliable, several issues must be addressed: (1) How can a control program be built to tolerate failures of sensors and actuators? To achieve this, a methodology was developed for transforming a control program that references physical values into one that tolerates sensors that can fail and can return inaccurate values. (2) How can the real-time reactive system be built to tolerate failures of the control program? Towards this goal, whether the techniques presented can be extended to real-time reactive systems is investigated. (3) How can the environment be specified in a way that is useful for writing a control program? Towards this goal, whether a system with real-time constraints can be expressed as an equivalent system without such constraints is also investigated.

  13. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies.

  14. Distributed acoustic sensing system based on continuous wide-band ultra-weak fiber Bragg grating array

    NASA Astrophysics Data System (ADS)

    Tang, Jianguan; Li, Liang; Guo, Huiyong; Yu, Haihu; Wen, Hongqiao; Yang, Minghong

    2017-04-01

    A distributed acoustic sensing (DAS) system with a low-coherence ASE source and a Michelson interferometer, based on a continuous wide-band ultra-weak fiber Bragg grating (UW-FBG) array, is proposed and experimentally demonstrated. The experimental results show that the proposed system performs better in detecting acoustic waves than a conventional hydrophone.

  15. System reliability analysis through corona testing

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.

    1975-01-01

    A corona vacuum test facility for nondestructive testing of power system components was built in the Reliability and Quality Engineering Test Laboratories at the NASA Lewis Research Center. The facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. The facility is being used to test various high-voltage power system components.

  16. Large-scale systems: Complexity, stability, reliability

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1975-01-01

    After showing that a complex dynamic system with a competitive structure has highly reliable stability, a class of noncompetitive dynamic systems for which competitive models can be constructed is defined. It is shown that such a construction is possible in the context of the hierarchic stability analysis. The scheme is based on the comparison principle and vector Liapunov functions.

  17. 78 FR 73112 - Monitoring System Conditions-Transmission Operations Reliability Standards; Interconnection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    ..., RM13-14-000 and RM13-15-000] Monitoring System Conditions--Transmission Operations Reliability...) 502-6817, [email protected] . Robert T. Stroh (Legal Information), Office of the General... Reliability Standards ``address the important reliability goal of ensuring that the transmission system is...

  18. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  19. Computing Lives And Reliabilities Of Turboprop Transmissions

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.

    1991-01-01

    Computer program PSHFT calculates lifetimes of variety of aircraft transmissions. Consists of main program, series of subroutines applying to specific configurations, generic subroutines for analysis of properties of components, subroutines for analysis of system, and common block. Main program selects routines used in analysis and causes them to operate in desired sequence. Series of configuration-specific subroutines put in configuration data, perform force and life analyses for components (with help of generic component-property-analysis subroutines), fill property array, call up system-analysis routines, and finally print out results of analysis for system and components. Written in FORTRAN 77(IV).

  20. Scheduling for energy and reliability management on multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate these scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As the commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recover system reliability so as to satisfy quality-assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms with respect to reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes can preserve system reliability while still achieving substantial energy savings.
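
    The energy/reliability tension this work addresses can be illustrated with a model often used in the reliability-aware DVFS literature: execution time scales as 1/f, dynamic energy roughly as f², and the transient-fault rate grows exponentially as voltage and frequency drop. The constants below are invented; the sketch only shows the qualitative trade-off, not the proposed schedulers.

    ```python
    import math

    def task_metrics(c_time_at_fmax=1.0, f=0.7, f_min=0.4, lam0=1e-6, d=2.0):
        """Energy and reliability of one task at normalized frequency f (illustrative model)."""
        t = c_time_at_fmax / f                                # execution time scales as 1/f
        energy = t * f ** 3                                   # dynamic energy ~ c * f^2  (power ~ f^3)
        lam = lam0 * 10 ** (d * (1.0 - f) / (1.0 - f_min))    # fault rate rises at low voltage/frequency
        reliability = math.exp(-lam * t)                      # Poisson transient-fault model
        return energy, reliability

    for f in (1.0, 0.8, 0.6, 0.4):
        print(f, *task_metrics(f=f))      # lower f saves energy but lengthens and weakens the task
    ```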

  1. Future ultra-speed tube-flight

    NASA Astrophysics Data System (ADS)

    Salter, Robert M.

    1994-05-01

    Future long-link, ultra-speed, surface transport systems will require electromagnetically (EM) driven and restrained vehicles operating under reduced atmosphere in very straight tubes. Such tube-flight trains will be safe, energy conservative, pollution-free, and in a protected environment. Hypersonic (and even hyperballistic) speeds are theoretically achievable. Ultimate system choices will represent tradeoffs between amortized capital costs (ACC) and operating costs. For example, long coasting links might employ aerodynamic lift coupled with EM restraint and drag make-up. Optimized, combined EM lift and thrust vectors could reduce energy costs but at increased ACC. (Repulsive levitation can produce lift-over-drag (l/d) ratios a decade greater than aerodynamic lift.) Alternatively, vehicle-emanated, induced-mirror fields in a conducting (aluminum sheet) road bed could reduce ACC but at substantial energy costs. Ultra-speed tube flight will demand fast-acting, high-precision sensors and computerized magnetic shimming. This same control system can maintain a magnetic 'guide way' invariant in inertial space, with inertial detectors imbedded in tube structures to sense and correct for earth tremors. Ultra-speed tube flight can compete with aircraft for transit time and can provide even greater passenger convenience by single-model connections with local subways and feeder lines. Although cargo transport generally will not need to be performed at ultra speeds, such speeds may well be desirable for high throughput to optimize channel costs. Thus, a large and expensive pipeline might be replaced with small EM-driven pallets at high speeds.

  2. Future ultra-speed tube-flight

    NASA Technical Reports Server (NTRS)

    Salter, Robert M.

    1994-01-01

    Future long-link, ultra-speed, surface transport systems will require electromagnetically (EM) driven and restrained vehicles operating under reduced atmosphere in very straight tubes. Such tube-flight trains will be safe, energy conservative, pollution-free, and in a protected environment. Hypersonic (and even hyperballistic) speeds are theoretically achievable. Ultimate system choices will represent tradeoffs between amortized capital costs (ACC) and operating costs. For example, long coasting links might employ aerodynamic lift coupled with EM restraint and drag make-up. Optimized, combined EM lift and thrust vectors could reduce energy costs but at increased ACC. (Repulsive levitation can produce lift-over-drag (l/d) ratios a decade greater than aerodynamic.) Alternatively, vehicle-emanated, induced-mirror fields in a conducting (aluminum sheet) road bed could reduce ACC but at substantial energy costs. Ultra-speed tube flight will demand fast-acting, high-precision sensors and computerized magnetic shimming. This same control system can maintain a magnetic 'guide way' invariant in inertial space, with inertial detectors embedded in tube structures to sense and correct for earth tremors. Ultra-speed tube flight can compete with aircraft for transit time and can provide even greater passenger convenience by single-model connections with local subways and feeder lines. Although cargo transport generally will not need to be performed at ultra speeds, such speeds may well be desirable for high throughput to optimize channel costs. Thus, a large and expensive pipeline might be replaced with small EM-driven pallets at high speeds.

  3. Computer-assisted audiovisual health history self-interviewing. Results of the pilot study of the Hoxworth Quality Donor System.

    PubMed

    Zuck, T F; Cumming, P D; Wallace, E L

    2001-12-01

    The safety of blood for transfusion depends, in part, on the reliability of the health history given by volunteer blood donors. To improve reliability, a pilot study evaluated the use of an interactive computer-based audiovisual donor interviewing system at a typical midwestern blood center in the United States. An interactive video screening system was tested in a community donor center environment on 395 volunteer blood donors. Of the donors using the system, 277 completed surveys regarding their acceptance of and opinions about the system. The study showed that an interactive computer-based audiovisual donor screening system was an effective means of conducting the donor health history. The majority of donors found the system understandable and favored the system over a face-to-face interview. Further, most donors indicated that they would be more likely to return if they were to be screened by such a system. Interactive computer-based audiovisual blood donor screening is useful and well accepted by donors; it may prevent a majority of errors and accidents that are reportable to the FDA; and it may contribute to increased safety and availability of the blood supply.

  4. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses, with an exact analytical solution, i.e., beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that, by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and decays more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with gradually changing phase good candidates for implementing quantum computation and coherent information processing.

  5. Reliable, Memory Speed Storage for Cluster Computing Frameworks

    DTIC Science & Technology

    2014-06-16

    specification API that can capture computations in many of today’s popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ... today’s big data workloads: • Immutable data: data is immutable once written, since dominant underlying storage systems, such as HDFS [3], only support ... network transfers, so reads can be data-local. • Program size vs. data size: in big data processing, the same operation is repeatedly applied on massive ...

  6. The reliability-quality relationship for quality systems and quality risk management.

    PubMed

    Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M

    2012-01-01

    Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA), are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. This essay discusses the meaning of reliability and its linkages with quality systems and quality risk management.

  7. System reliability analysis through corona testing

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.

    1975-01-01

    In the Reliability and Quality Engineering Test Laboratory at the NASA Lewis Research Center a nondestructive, corona-vacuum test facility for testing power system components was developed using commercially available hardware. The test facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. This facility is being used to test various high voltage power system components.

  8. Note: A simple sample transfer alignment for ultra-high vacuum systems.

    PubMed

    Tamtögl, A; Carter, E A; Ward, D J; Avidor, N; Kole, P R; Jardine, A P; Allison, W

    2016-06-01

    The alignment of ultra-high-vacuum sample transfer systems can be problematic when there is no direct line of sight to assist the user. We present the design of a simple and cheap system which greatly simplifies the alignment of sample transfer devices. Our method is based on the adaptation of a commercial digital camera which provides live views from within the vacuum chamber. The images of the camera are further processed using an image recognition and processing code which determines any misalignments and reports them to the user. Installation has proven to be extremely useful in order to align the sample with respect to the transfer mechanism. Furthermore, the alignment software can be easily adapted for other systems.

  9. 78 FR 77574 - Protection System Maintenance Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... protection system component type, except that the maintenance program for all batteries associated with the... Electric System reliability and promoting efficiency through consolidation [of protection system-related... ITC that PRC-005-2 promotes efficiency by consolidating protection system maintenance requirements...

  10. Demonstration Of Ultra HI-FI (UHF) Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2004-01-01

    Computational aero-acoustics (CAA) requires efficient, high-resolution simulation tools. Most current techniques utilize finite-difference approaches because high-order accuracy is considered too difficult or expensive to achieve with finite volume or finite element methods. However, a novel finite volume approach (Ultra HI-FI or UHF) is presented which utilizes Hermite fluxes and can achieve both arbitrary accuracy and fidelity in space and time. The technique can be applied to unstructured grids with some loss of fidelity or with multi-block structured grids for maximum efficiency and resolution. In either paradigm, it is possible to resolve ultra-short waves (fewer than two points per wavelength, PPW). This is demonstrated here by solving the 4th CAA workshop Category 1 Problem 1.

  11. Ultra-fast Object Recognition from Few Spikes

    DTIC Science & Technology

    2005-07-06

    Computer Science and Artificial Intelligence Laboratory. Ultra-fast Object Recognition from Few Spikes. Chou Hung, Gabriel Kreiman, Tomaso Poggio ... neural code for different kinds of object-related information. *The authors, Chou Hung and Gabriel Kreiman, contributed equally to this work ... Supplementary Material is available at http://ramonycajal.mit.edu/kreiman/resources/ultrafast

  12. Air Bearings Machined On Ultra Precision, Hydrostatic CNC-Lathe

    NASA Astrophysics Data System (ADS)

    Knol, Pierre H.; Szepesi, Denis; Deurwaarder, Jan M.

    1987-01-01

    Micromachining of precision elements requires an adequate machine concept to meet the high demands on surface finish and dimensional and shape accuracy. The Hembrug ultra precision lathes have been exclusively designed with hydrostatic principles for the main spindle and guideways. This concept is explained along with some major advantages of hydrostatics compared with aerostatics in universal micromachining applications. Hembrug originally developed the conventional Mikroturn ultra precision facing lathes for diamond turning of computer memory discs. This first generation of machines was followed by the advanced computer numerically controlled types for machining of complex precision workpieces. One of these parts, an aerostatic bearing component, has been successfully machined on the Super-Mikroturn CNC. A case study of air-bearing machining confirms that a good micromachining result does not depend on machine performance alone, but also on the technology applied.

  13. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    NASA Astrophysics Data System (ADS)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.

  14. Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth

    2016-01-01

    If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.
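
    The verification method itself operates on synthesized digital designs; purely to illustrate the voting logic that TMR insertion is meant to provide (not the proposed verification flow), a bit-wise two-of-three majority voter can be sketched as:

      def tmr_vote(a: int, b: int, c: int) -> int:
          # Bit-wise 2-of-3 majority over three redundant copies of a word:
          # a single corrupted copy is out-voted by the other two.
          return (a & b) | (a & c) | (b & c)

      assert tmr_vote(0b1010, 0b1010, 0b0011) == 0b1010  # one faulty copy masked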

  15. A computational framework for prime implicants identification in noncoherent dynamic systems.

    PubMed

    Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico

    2015-01-01

    Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.

  16. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.

  17. 76 FR 16277 - System Restoration Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-23

    ... system restoration process. The Commission also approves the NERC's proposal to retire four existing EOP... prepare personnel to enable effective coordination of the system restoration process. The Commission also..., through the Reliability Standard development process, a modification to EOP-005-1 that identifies time...

  18. ABSENTEE COMPUTATIONS IN A MULTIPLE-ACCESS COMPUTER SYSTEM.

    DTIC Science & Technology

    require user interaction, and the user may therefore want to run these computations ’absentee’ (or, user not present). A mechanism is presented which ... provides for the handling of absentee computations in a multiple-access computer system. The design is intended to be implementation-independent ... Some novel features of the system’s design are: a user can switch computations from interactive to absentee (and vice versa), the system can ...

  19. Reliability systems for implantable cardiac defibrillator batteries

    NASA Astrophysics Data System (ADS)

    Takeuchi, Esther S.

    The reliability of the power sources used in implantable cardiac defibrillators is critical due to the life-saving nature of the device. Achieving a high reliability power source depends on several systems functioning together. Appropriate cell design is the first step in assuring a reliable product. Qualification of critical components and of the cells using those components is done prior to their designation as implantable grade. Product consistency is assured by control of manufacturing practices and verified by sampling plans using both accelerated and real-time testing. Results to date show that lithium/silver vanadium oxide cells used for implantable cardiac defibrillators have a calculated maximum random failure rate of 0.005% per test month.

  20. High-performance radial AMTEC cell design for ultra-high-power solar AMTEC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, T.J.; Huang, C.

    1999-07-01

    Alkali Metal Thermal to Electric Conversion (AMTEC) technology is rapidly maturing for potential application in ultra-high-power solar AMTEC systems required by potential future US Air Force (USAF) spacecraft missions in medium-earth and geosynchronous orbits (MEO and GEO). Solar thermal AMTEC power systems potentially have several important advantages over current solar photovoltaic power systems in ultra-high-power spacecraft applications for USAF MEO and GEO missions. This work presents key aspects of radial AMTEC cell design to achieve high cell performance in solar AMTEC systems delivering larger than 50 kW(e) to support high power USAF missions. These missions typically require AMTEC cell conversion efficiency larger than 25%. A sophisticated design parameter methodology is described and demonstrated which establishes optimum design parameters in any radial cell design to satisfy high-power mission requirements. Specific relationships, which are distinct functions of cell temperatures and pressures, define critical dependencies between key cell design parameters, particularly the impact of parasitic thermal losses on Beta Alumina Solid Electrolyte (BASE) area requirements, voltage, number of BASE tubes, and system power production for both maximum power-per-BASE-area and optimum efficiency conditions. Finally, some high-level system tradeoffs are demonstrated using the design parameter methodology to establish high-power radial cell design requirements and philosophy. The discussion highlights how to incorporate this methodology with sophisticated SINDA/FLUINT AMTEC cell modeling capabilities to determine optimum radial AMTEC cell designs.

  1. Systems and methods for advanced ultra-high-performance InP solar cells

    DOEpatents

    Wanlass, Mark

    2017-03-07

    Systems and Methods for Advanced Ultra-High-Performance InP Solar Cells are provided. In one embodiment, an InP photovoltaic device comprises: a p-n junction absorber layer comprising at least one InP layer; a front surface confinement layer; and a back surface confinement layer; wherein either the front surface confinement layer or the back surface confinement layer forms part of a High-Low (HL) doping architecture; and wherein either the front surface confinement layer or the back surface confinement layer forms part of a heterointerface system architecture.

  2. Design of ultra high performance concrete as an overlay in pavements and bridge decks.

    DOT National Transportation Integrated Search

    2014-08-01

    The main objective of this research was to develop ultra-high performance concrete (UHPC) as a reliable, economic, low-carbon-footprint and durable concrete overlay material that can offer shorter traffic closures due to faster construction. The U...

  3. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
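
    As a toy illustration of the search loop only (the genetic-algorithm half; the tabu-search stage and the minimal-path/sum-of-disjoint-products reliability evaluation with correlated failures are omitted), the sketch below assigns one resource index to each arc and uses a stand-in series-system objective. Everything here is hypothetical.

      import random

      N_ARCS = 5
      RESOURCE_RELIABILITY = [0.90, 0.95, 0.97, 0.99]   # stand-in per-arc values

      def fitness(assignment):
          # Stand-in objective: product of per-arc reliabilities (series system),
          # in place of the SFN network-reliability evaluation used in the paper.
          r = 1.0
          for idx in assignment:
              r *= RESOURCE_RELIABILITY[idx]
          return r

      def genetic_search(pop_size=20, generations=50, p_mut=0.2, seed=7):
          rng = random.Random(seed)
          pop = [[rng.randrange(len(RESOURCE_RELIABILITY)) for _ in range(N_ARCS)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: pop_size // 2]
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = rng.sample(parents, 2)
                  cut = rng.randrange(1, N_ARCS)                 # one-point crossover
                  child = a[:cut] + b[cut:]
                  if rng.random() < p_mut:                       # point mutation
                      child[rng.randrange(N_ARCS)] = rng.randrange(len(RESOURCE_RELIABILITY))
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      print(genetic_search())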

  4. Programmable Ultra-Lightweight System Adaptable Radio

    NASA Technical Reports Server (NTRS)

    Werkheiser, Arthur

    2015-01-01

    The programmable ultra-lightweight system adaptable radio (PULSAR) is a NASA Marshall Space Flight Center transceiver designed for the CubeSat market, but it has the potential for other markets. The PULSAR project aims to reduce size, weight, and power while increasing telemetry data rate. The current version of the PULSAR has a mass of 2.2 kg and a footprint of 10.8 cm2. The height depends on the specific configuration. The PULSAR S-Band Communications Subsystem is an S- and X-band transponder system composed of a receiver/detector (receiver) element, a transmitter element(s), and a related power distribution, command, control, and telemetry element for operation and information interfaces. It is capable of receiving commands, encoding and transmitting telemetry, as well as providing tracking data in a manner compatible with Earth-based ground stations, near-Earth network, and deep space network station resources. The software-defined radio's (SDR's) data format characteristics can be defined and reconfigured during spaceflight or prior to launch. The PULSAR team continues to evolve the SDR to improve the performance and form factor to meet the requirements of the CubeSat market space. One of the unique features is that the actual radio design can change (somewhat) without requiring any hardware modifications, due to the use of field programmable gate arrays.

  5. Metrics for Assessing the Reliability of a Telemedicine Remote Monitoring System

    PubMed Central

    Fox, Mark; Papadopoulos, Amy; Crump, Cindy

    2013-01-01

    Abstract Objective: The goal of this study was to assess using new metrics the reliability of a real-time health monitoring system in homes of older adults. Materials and Methods: The “MobileCare Monitor” system was installed into the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a “panic” button that was connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Results: Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the 10 h/day requested time. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. Conclusions: The system was quite effective in providing accurate remote health data. The different system reliability measures identify important error sources in remote monitoring systems. PMID:23611640
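
    The headline effectiveness figure follows directly from the counts quoted in the abstract:

      # Study effectiveness = measurements obtained / measurements potentially obtainable.
      obtained, potential = 3436, 3906
      print(round(100 * obtained / potential))   # -> 88 (%)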

  6. Combinatorial Reliability and Repair

    DTIC Science & Technology

    1992-07-01

    Press, Oxford, 1987. [2] G. Gordon and L. Traldi, Generalized activities and the Tutte polynomial, Discrete Math. 85 (1990), 167-176. [3] A. B. Huseby, A ... Chromatic polynomials and network reliability, Discrete Math. 67 (1987), 57-79. [7] A. Satayanarayana and R. K. Wood, A linear-time algorithm for computing K-terminal reliability in series-parallel networks, SIAM J. Comput. 14 (1985), 818-832. [8] L. Traldi, Generalized activities and K-terminal reliability, Discrete Math. 96 (1991), 131-149.

  7. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
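
    The report does not state the distributions used; a minimal Monte Carlo sketch of the quantity it describes, the probability that a line's flow falls below its specified minimum, under a hypothetical normal response model:

      import random

      def prob_below_minimum(mean_flow, std_flow, minimum, n=100_000, seed=1):
          # Monte Carlo estimate of P(flow < minimum) for one parallel line,
          # assuming a normally distributed flow response (illustrative values).
          rng = random.Random(seed)
          failures = sum(rng.gauss(mean_flow, std_flow) < minimum for _ in range(n))
          return failures / n

      # Hypothetical line: mean flow 120, standard deviation 10, specified minimum 90.
      print(prob_below_minimum(120.0, 10.0, 90.0))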

  8. Applying reliability analysis to design electric power systems for More-electric aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
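
    The thesis relies on the path-set method; a minimal sketch of that idea, computing system reliability by inclusion-exclusion over minimal path sets with independent components (the topology and numbers below are hypothetical, not from the thesis):

      from itertools import combinations

      def reliability_from_path_sets(path_sets, p):
          # System works if every component of at least one minimal path set works.
          # Inclusion-exclusion over the union events, assuming independent components.
          total = 0.0
          for k in range(1, len(path_sets) + 1):
              for combo in combinations(path_sets, k):
                  union = set().union(*combo)
                  term = 1.0
                  for comp in union:
                      term *= p[comp]
                  total += (-1) ** (k + 1) * term
          return total

      # Hypothetical topology: generator G feeds bus B through either feeder F1 or F2.
      paths = [{"G", "F1", "B"}, {"G", "F2", "B"}]
      p = {"G": 0.99, "F1": 0.95, "F2": 0.95, "B": 0.999}
      print(reliability_from_path_sets(paths, p))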

  9. Robustness Analysis and Reliable Flight Regime Estimation of an Integrated Resilient Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2008-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of the transport aircraft longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the uncertainty block, which contains the key varying parameters (angle of attack and velocity) and real parameter uncertainties (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.

  10. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Tutorial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. L. Smith; S. T. Beck; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). This volume is the tutorial manual for the SAPHIRE system. In this document, a series of lessons is provided that guides the user through basic steps common to most analyses performed with SAPHIRE. The tutorial is divided into two major sections covering both basic and advanced features. The section covering basic topics contains lessons that lead the reader through development of a probabilistic hypothetical problem involving a vehicle accident, highlighting the program’s most fundamental features. The advanced features section contains additional lessons that expand on fundamental analysis features of SAPHIRE and provide insights into more complex analysis techniques. Together, these two elements provide an overview of the operation and capabilities of the SAPHIRE software.

  11. UltraSail CubeSat Solar Sail Flight Experiment

    NASA Technical Reports Server (NTRS)

    Carroll, David; Burton, Rodney; Coverstone, Victoria; Swenson, Gary

    2013-01-01

    UltraSail is a next-generation, high-risk, high-payoff sail system for the launch, deployment, stabilization, and control of very large (km2-class) solar sails enabling high payload mass fractions for interplanetary and deep space spacecraft. UltraSail is a non-traditional approach to propulsion technology achieved by combining propulsion and control systems developed for formation-flying microsatellites with an innovative solar sail architecture to achieve controllable sail areas approaching 1 km2, sail subsystem area densities approaching 1 g/m2, and thrust levels many times those of ion thrusters used for comparable deep space missions. UltraSail can achieve outer planetary rendezvous, a deep-space capability now reserved for high-mass nuclear and chemical systems. There is a twofold rationale behind the UltraSail concept for advanced solar sail systems. The first is that sail-and-boom systems are inherently size-limited. The boom mass must be kept small, and column buckling limits the boom length to a few hundred meters. By eliminating the boom, UltraSail not only offers larger sail area, but also lower areal density, allowing larger payloads and shorter mission transit times. The second rationale for UltraSail is that sail films present deployment handling difficulties as the film thickness approaches one micrometer. The square sail requires that the film be folded in two directions for launch, and similarly unfolded for deployment. The film is stressed at the intersection of two folds, and this stress varies inversely with the film thickness. This stress can cause the film to yield, forming a permanent crease, or worse, to perforate. By rolling the film as UltraSail does, creases are prevented. Because the film is so thin, the roll thickness is small. Dynamic structural analysis of UltraSail coupled with dynamic control analysis shows that the system can be designed to eliminate longitudinal torsional waves created while controlling the pitch of the blades.

  12. A Manganin Thin Film Ultra-High Pressure Sensor for Microscale Detonation Pressure Measurement

    PubMed Central

    Zhang, Guodong; Zhao, Yulong; Zhao, Yun; Wang, Xinchen; Ren, Wei; Li, Hui; Zhao, You

    2018-01-01

    With the development of energetic materials (EMs) and microelectromechanical systems (MEMS) initiating explosive devices, the measurement of detonation pressure generated by EMs at the microscale has become a pressing need. This paper develops a manganin thin film ultra-high pressure sensor based on MEMS technology for measuring the output pressure from a micro-detonator. A reliable coefficient is proposed for better design of the sensor’s sensitive element. The sensor employs a sandwich structure: the substrate uses a 0.5 mm thick alumina ceramic; the manganin sensitive element with a size of 0.2 mm × 0.1 mm × 2 μm and copper electrodes of 2 μm thickness are sputtered sequentially on the substrate; and a 25 μm thick insulating layer of polyimide is wrapped on the sensitive element. The static test shows that the piezoresistive coefficient of the manganin thin film is 0.0125 GPa−1. The dynamic experiment indicates that the detonation pressure of the micro-detonator is 12.66 GPa, and the response time of the sensor is 37 ns. In a word, the sensor developed in this study is suitable for measuring ultra-high pressure at the microscale and has a shorter response time than that of foil-like manganin gauges. Simultaneously, this study could be beneficial to research on ultra-high-pressure sensors with smaller size. PMID:29494519
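
    Treating the reported piezoresistive coefficient as linear over the full range (an assumption made here only for illustration), the fractional resistance change expected at the measured detonation pressure is a one-line calculation:

      k = 0.0125   # GPa^-1, reported piezoresistive coefficient of the manganin film
      P = 12.66    # GPa, measured detonation pressure of the micro-detonator
      print(k * P) # fractional resistance change dR/R, roughly 0.158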

  13. Training and Maintaining System-Wide Reliability in Outcome Management.

    PubMed

    Barwick, Melanie A; Urajnik, Diana J; Moore, Julia E

    2014-01-01

    The Child and Adolescent Functional Assessment Scale (CAFAS) is widely used for outcome management, for providing real-time client- and program-level data, and for the monitoring of evidence-based practices. Methods of reliability training and the assessment of rater drift are critical for service decision-making within organizations and systems of care. We assessed two approaches for CAFAS training: external technical assistance and internal technical assistance. To this end, we sampled 315 practitioners trained by the external technical assistance approach from 2,344 Ontario practitioners who had achieved reliability on the CAFAS. To assess the internal technical assistance approach as a reliable alternative training method, 140 practitioners trained internally were selected from the same pool of certified raters. Reliabilities were high for practitioners trained by both the external and internal technical assistance approaches (.909-.995 and .915-.997, respectively). One- and three-year estimates showed some drift on several scales. High and consistent reliabilities over time and across training methods have implications for CAFAS training of behavioral health care practitioners, and for the maintenance of the CAFAS as a global outcome management tool in systems of care.

  14. Ultra-low field MRI food inspection system prototype

    NASA Astrophysics Data System (ADS)

    Kawagoe, Satoshi; Toyota, Hirotomo; Hatta, Junichi; Ariyoshi, Seiichiro; Tanaka, Saburo

    2016-11-01

    We develop an ultra-low field (ULF) magnetic resonance imaging (MRI) system using a high-temperature superconducting quantum interference device (HTS-SQUID) for food inspection. A two-dimensional (2D)-MR image is reconstructed from the grid processing raw data using the 2D fast Fourier transform method. In a previous study, we combined an LC resonator with the ULF-MRI system to improve the detection area of the HTS-SQUID. The sensitivity was improved, but since the experiments were performed in a semi-open magnetically shielded room (MSR), external noise was a problem. In this study, we develop a compact magnetically shielded box (CMSB), which has a small open window for transfer of a pre-polarized sample. Experiments were performed in the CMSB and 2D-MR images were compared with images taken in the semi-open MSR. A clear image of a disk-shaped water sample is obtained, with an outer dimension closer to that of the real sample than in the image taken in the semi-open MSR. Furthermore, the 2D-MR image of a multiple cell water sample is clearly reconstructed. These results show the applicability of the ULF-MRI system in food inspection.

  15. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  16. Analysis article on the performance analysis of the OneTouch UltraVue blood glucose monitoring system.

    PubMed

    Solnica, Bogdan

    2009-09-01

    In this issue of Journal of Diabetes Science and Technology, Chang and colleagues present the analytical performance evaluation of the OneTouch UltraVue blood glucose meter. This device is an advanced construction with a color display, used-strip ejector, no-button interface, and short assay time. Accuracy studies were performed using a YSI 2300 analyzer, considered the reference. Altogether, 349 pairs of results covering a wide range of blood glucose concentrations were analyzed. Patients with diabetes performed a significant part of the tests. Obtained results indicate good accuracy of OneTouch UltraVue blood glucose monitoring system, satisfying the International Organization for Standardization recommendations and thereby locating >95% of tests within zone A of the error grid. Results of the precision studies indicate good reproducibility of measurements. In conclusion, the evaluation of the OneTouch UltraVue meter revealed good analytical performance together with convenient handling useful for self-monitoring of blood glucose performed by elderly diabetes patients. 2009 Diabetes Technology Society.

  17. Experimental and computational investigation of Morse taper conometric system reliability for the definition of fixed connections between dental implants and prostheses.

    PubMed

    Bressan, Eriberto; Lops, Diego; Tomasi, Cristiano; Ricci, Sara; Stocchero, Michele; Carniel, Emanuele Luigi

    2014-07-01

    Nowadays, dental implantology is a reliable technique for the treatment of partially and completely edentulous patients. The achievement of stable dentition is ensured by implant-supported fixed dental prostheses. A Morse taper conometric system may provide fixed retention between implants and dental prostheses. The aim of this study was to investigate the retentive performance and mechanical strength of a Morse taper conometric system used as implant-supported fixed dental prosthesis retention. Experimental and finite element investigations were performed. Experimental tests were carried out on a specific abutment-coping system, accounting for both cemented and non-cemented situations. The results from the experimental activities were processed to identify the mechanical behavior of the coping-abutment interface. Finally, the achieved information was applied to develop reliable finite element models of different abutment-coping systems. The analyses were developed accounting for different geometrical conformations of the abutment-coping system, such as different taper angles. The results showed that the activation process, achieved through a suitable insertion force, could provide retentive performance equal to that of a cemented system without compromising the mechanical functionality of the system. These findings suggest that a Morse taper conometric system can provide a fixed connection between implants and dental prostheses if a proper insertion force is applied. The activation process does not compromise the mechanical functionality of the system. © IMechE 2014.

  18. A System for Integrated Reliability and Safety Analyses

    NASA Technical Reports Server (NTRS)

    Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles

    1999-01-01

    We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.

  19. Programmable Ultra-Lightweight System Adaptable Radio Satellite Base Station

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta; Sims, Herb

    2015-01-01

    With the explosion of the CubeSat, small sat, and nanosat markets, the need for a robust, highly capable, yet affordable satellite base station, capable of telemetry capture and relay, is significant. The Programmable Ultra-Lightweight System Adaptable Radio (PULSAR) is NASA Marshall Space Flight Center's (MSFC's) software-defined digital radio, developed with previous Technology Investment Programs and Technology Transfer Office resources. The current PULSAR will have achieved a Technology Readiness Level-6 by the end of FY 2014. The extensibility of the PULSAR will allow it to be adapted to perform the tasks of a mobile base station capable of commanding, receiving, and processing satellite, rover, or planetary probe data streams with an appropriate antenna.

  20. Ultra Small Aperture Terminal: System Design and Test Results

    NASA Technical Reports Server (NTRS)

    Sohn, Philip Y.; Reinhart, Richard C.

    1996-01-01

    The Ultra Small Aperture Terminal (USAT) has been developed to test and demonstrate remote and broadcast satcom applications via the Advanced Communications Technology Satellite (ACTS). The design of these ground stations emphasizes small size, low power consumption, and portable and rugged terminals. Each ground station includes several custom-designed parts such as a 35 cm diameter antenna, a 1/4 Watt transmitter with built-in upconverter, and a 4.0 dB Noise Figure (NF) receiver with built-in downconverter. In addition, state-of-the-art commercial parts such as highly stable ovenized crystal oscillators and dielectric resonator oscillators are used in the ground station design. Presented in this paper are the system-level design description, performance, and sample applications.

  1. Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability

    NASA Technical Reports Server (NTRS)

    Kolodziej, P.; Rasky, D. J.

    2002-01-01

    In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard, and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions where reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system-level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results from the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability on the acreage surface of reusable launch vehicles.

  2. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.

  3. Reliability and failure modes of narrow implant systems.

    PubMed

    Hirata, Ronaldo; Bonfante, Estevam A; Anchieta, Rodolfo B; Machado, Lucas S; Freitas, Gileade; Fardin, Vinicius P; Tovar, Nick; Coelho, Paulo G

    2016-09-01

    Narrow implants are indicated in areas of limited bone width or when grafting is nonviable. However, the reduction of implant diameter may compromise their performance. This study evaluated the reliability of several narrow implant systems under fatigue after being restored with single-unit crowns. Narrow implant systems were divided (n = 18 each) as follows: Astra (ASC); BioHorizons (BSC); Straumann Roxolid (SNC); Intra-Lock (IMC); and Intra-Lock one-piece abutment (ILO). Maxillary central incisor crowns were cemented and subjected to step-stress accelerated life testing in water. Use-level probability Weibull curves and reliability for a mission of 100,000 cycles at 130- and 180-N loads (90% two-sided confidence intervals) were calculated. Scanning electron microscopy was used for fractography. Reliability for 100,000 cycles at 130 N was ∼99% in group ASC, ∼99% in BSC, ∼96% in SNC, ∼99% in IMC, and ∼100% in ILO. At 180 N, reliability of ∼34% resulted for the ASC group, ∼91% for BSC, ∼53% for SNC, ∼70% for IMC, and ∼99% for ILO. Abutment screw fracture was the main failure mode for all groups. Reliability was not different between systems for 100,000 cycles at the 130-N load. A significant decrease was observed at the 180-N load for ASC, SNC, and IMC, whereas it was maintained for BSC and ILO. The investigated narrow implants presented mechanical performance under fatigue that suggests their safe use as single crowns in the anterior region.
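
    The fitted Weibull parameters are not given in the abstract; a minimal sketch of the reported calculation, reliability for a 100,000-cycle mission from a two-parameter use-level Weibull life model, with hypothetical shape and scale values:

      import math

      def weibull_reliability(cycles, beta, eta):
          # Two-parameter Weibull life model: R(t) = exp(-(t/eta)**beta).
          return math.exp(-((cycles / eta) ** beta))

      # Hypothetical use-level parameters at the 130 N load.
      print(weibull_reliability(100_000, beta=1.3, eta=4.0e6))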

  4. Computational Systems Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Samudrala, Ram; Bumgarner, Roger E.

    2009-05-01

    Computational systems biology is the term that we use to describe computational methods to identify, infer, model, and store relationships between the molecules, pathways, and cells (“systems”) involved in a living organism. Based on this definition, the field of computational systems biology has been in existence for some time. However, the recent confluence of high-throughput methodology for biological data gathering, genome-scale sequencing, and computational processing power has driven a reinvention and expansion of this field. The expansions include not only modeling of small metabolic and signaling systems but also modeling of the relationships between biological components in very large systems, including whole cells and organisms. Generally these models provide a general overview of one or more aspects of these systems and leave the determination of details to experimentalists focused on smaller subsystems. The promise of such approaches is that they will elucidate patterns, relationships, and general features that are not evident from examining specific components or subsystems. These predictions are either interesting in and of themselves (for example, the identification of an evolutionary pattern), or are interesting and valuable to researchers working on a particular problem (for example, highlighting a previously unknown functional pathway). Two events have occurred to bring the field of computational systems biology to the forefront. One is the advent of high-throughput methods that have generated large amounts of information about particular systems in the form of genetic studies, gene expression analyses (both

  5. Renewal of the Control System and Reliable Long Term Operation of the LHD Cryogenic System

    NASA Astrophysics Data System (ADS)

    Mito, T.; Iwamoto, A.; Oba, K.; Takami, S.; Moriuchi, S.; Imagawa, S.; Takahata, K.; Yamada, S.; Yanagi, N.; Hamaguchi, S.; Kishida, F.; Nakashima, T.

    The Large Helical Device (LHD) is a heliotron-type fusion plasma experimental machine which consists of a fully superconducting magnet system cooled by a helium refrigerator having a total equivalent cooling capacity of 9.2 kW@4.4 K. Seventeen plasma experimental campaigns have been performed successfully since 1997 with a high reliability of 99%. However, sixteen years have passed since the beginning of system operation. Improvements are being implemented to prevent serious failures and to pursue further reliability. The LHD cryogenic control system was designed and developed as an open system utilizing the latest control equipment, VME controllers and UNIX workstations, at the time of construction. However, the generation change of control equipment has advanced. Down-sizing of control devices has been planned, from VME controllers to compact PCI controllers, in order to simplify the system configuration and to improve the system reliability. The new system is composed of a compact PCI controller and remote I/O connected with EtherNet/IP. Making the system redundant becomes possible by doubling the CPU, LAN, and remote I/O, respectively. The smooth renewal of the LHD cryogenic control system and the further improvement of the cryogenic system reliability are reported.

  6. System principles, mathematical models and methods to ensure high reliability of safety systems

    NASA Astrophysics Data System (ADS)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from the systems of monitoring, telemetry, control, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. In the design and construction phases of the manufacturing of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure high reliability of signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of performing the tasks and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
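
    As a toy illustration of the type-variety idea only (not the two-level formulation in the paper), a brute-force search that mixes component types across redundant channels to maximize reliability under a cost budget; all numbers are hypothetical, and common-cause effects are ignored here:

      from itertools import product

      TYPES = {"A": (0.95, 3.0), "B": (0.90, 2.0), "C": (0.85, 1.0)}  # (reliability, cost)
      BUDGET = 6.0
      CHANNELS = 3  # number of redundant detection channels

      best = None
      for combo in product(TYPES, repeat=CHANNELS):
          cost = sum(TYPES[t][1] for t in combo)
          if cost > BUDGET:
              continue
          unrel = 1.0
          for t in combo:                     # parallel redundancy: the system fails
              unrel *= 1.0 - TYPES[t][0]      # only if every channel fails
          rel = 1.0 - unrel
          if best is None or rel > best[0]:
              best = (rel, combo, cost)

      print(best)   # best achievable reliability, the chosen type mix, and its cost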

  7. Second order nonlinear QED processes in ultra-strong laser fields

    NASA Astrophysics Data System (ADS)

    Mackenroth, Felix

    2017-10-01

    In the interaction of ultra-intense laser fields with matter the ever increasing peak laser intensities render nonlinear QED effects ever more important. For long, ultra-intense laser pulses scattering large systems, like a macroscopic plasma, the interaction time can be longer than the scattering time, leading to multiple scatterings. These are usually approximated as incoherent cascades of single-vertex processes. Under certain conditions, however, this common cascade approximation may be insufficient, as it disregards several effects such as coherent processes, quantum interferences or pulse shape effects. Quantifying deviations of the full amplitude of multiple scatterings from the commonly employed cascade approximations is a formidable, yet unaccomplished task. In this talk we are going to discuss how to compute second order nonlinear QED amplitudes and relate them to the conventional cascade approximation. We present examples for typical second order processes and benchmark the full result against common approximations. We demonstrate that the approximation of multiple nonlinear QED scatterings as a cascade of single interactions has certain limitations and discuss these limits in light of upcoming experimental tests.

  8. A reliable data collection/control system

    NASA Technical Reports Server (NTRS)

    Maughan, Thom

    1988-01-01

    The Cal Poly Space Project requires a data collection/control system which must be able to reliably record temperature, pressure and vibration data. It must also schedule the 16 electroplating and 2 immiscible alloy experiments so as to optimize use of the batteries, maintain a safe package temperature profile, and run the experiment during conditions of microgravity (and minimum vibration). This system must operate unattended in the harsh environment of space and consume very little power due to limited battery supply. The design of a system which meets these requirements is addressed.

  9. Model of load balancing using reliable algorithm with multi-agent system

    NASA Astrophysics Data System (ADS)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development grows linearly with the number of internet users, which increases network traffic activity. It also increases the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue in a large-scale system. A mobile agent works to collect resource information and can migrate according to a given task. We propose a reliable load-balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consisted of defining the system identification, specification requirements, network topology, and system infrastructure design. The simulation of the system used 1800 requests over 10 s from users to the servers, and the resulting data were taken for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the simulation show that the LFB method with a mobile agent can balance the load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
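
    A minimal sketch of the selection rule described (route the next request to the backend whose most recent time-to-first-byte measurement, as reported by the mobile agents, is smallest); server names and figures are illustrative:

      def pick_backend(first_byte_ms):
          # Least-time-first-byte (LFB): choose the backend with the smallest
          # most recent time-to-first-byte measurement.
          return min(first_byte_ms, key=first_byte_ms.get)

      # Hypothetical measurements collected by the mobile agents (milliseconds).
      measurements = {"srv-1": 42.0, "srv-2": 17.5, "srv-3": 29.9}
      print(pick_backend(measurements))   # -> srv-2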

  10. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.

  11. Review on the progress of ultra-precision machining technologies

    NASA Astrophysics Data System (ADS)

    Yuan, Julong; Lyu, Binghai; Hang, Wei; Deng, Qianfa

    2017-06-01

    Ultra-precision machining technologies are the essential methods for obtaining the highest form accuracy and surface quality. As more research findings are published, such technologies now involve complicated systems engineering and have been widely used in the production of components for aerospace, national defense, optics, mechanics, electronics, and other high-tech applications. The conception, applications, and history of ultra-precision machining are introduced in this article, and the developments of ultra-precision machining technologies, especially ultra-precision grinding, ultra-precision cutting, and polishing, are also reviewed. The current state and problems of this field in China are analyzed. Finally, the development trends of this field and the coping strategies employed in China to keep up with the trends are discussed.

  12. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions and those that differed slightly from the known were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
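
    The following Monte Carlo sketch shows the stress-strength computation R = P(strength > stress) and how strongly the high-reliability tail depends on the assumed distribution: two fits with identical means and standard deviations but different shapes (normal vs. lognormal) give noticeably different failure probabilities. The parameter values are illustrative, not taken from the report.

```python
# Monte Carlo sketch of the stress-strength model R = P(strength > stress),
# illustrating how the assumed distribution affects a high-reliability estimate
# (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000

mu_stress, sd_stress = 100.0, 10.0
mu_strength, sd_strength = 160.0, 12.0

def lognormal_params(mean, sd):
    # Convert a target mean/sd to the underlying normal parameters.
    sigma2 = np.log(1 + (sd / mean) ** 2)
    return np.log(mean) - sigma2 / 2, np.sqrt(sigma2)

# Case 1: both quantities assumed normal.
stress_n = rng.normal(mu_stress, sd_stress, N)
strength_n = rng.normal(mu_strength, sd_strength, N)
R_normal = np.mean(strength_n > stress_n)

# Case 2: same means and standard deviations, but lognormal shapes.
m1, s1 = lognormal_params(mu_stress, sd_stress)
m2, s2 = lognormal_params(mu_strength, sd_strength)
R_lognormal = np.mean(rng.lognormal(m2, s2, N) > rng.lognormal(m1, s1, N))

print(f"R assuming normal distributions:    {R_normal:.6f}")
print(f"R assuming lognormal distributions: {R_lognormal:.6f}")
print(f"estimated failure probabilities differ by a factor of "
      f"{(1 - R_lognormal) / (1 - R_normal):.2f}")
```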

  13. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
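
    As a small illustration of the reliability measures used in such studies, the sketch below estimates a failure probability by Monte Carlo simulation and converts it to a reliability index beta = -Phi^{-1}(p_f). The limit state (available vs. required deployment force) and all parameter values are invented placeholders, not the paper's deployable-structure model.

```python
# Sketch: Monte Carlo estimate of a failure probability and the corresponding
# reliability index beta = -Phi^{-1}(p_f), as used in reliability-based design
# optimization. The limit state below is a made-up placeholder, not the paper's.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 1_000_000

# Hypothetical random inputs: required deployment force and available force.
required_force = rng.normal(2.0, 0.15, N)     # N (newtons), manufacturing scatter
available_force = rng.normal(2.6, 0.20, N)    # N, stored strain energy of the boom

# Limit state g > 0 means safe; failure when available force < required force.
g = available_force - required_force
p_f = np.mean(g <= 0)
beta = -norm.ppf(p_f)

print(f"estimated failure probability: {p_f:.2e}")
print(f"reliability index beta:        {beta:.2f}")
```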

  14. The active modulation of drug release by an ionic field effect transistor for an ultra-low power implantable nanofluidic system.

    PubMed

    Bruno, Giacomo; Canavese, Giancarlo; Liu, Xuewu; Filgueira, Carly S; Sacco, Adriano; Demarchi, Danilo; Ferrari, Mauro; Grattoni, Alessandro

    2016-11-10

    We report an electro-nanofluidic membrane for tunable, ultra-low power drug delivery employing an ionic field effect transistor. Therapeutic release from a drug reservoir was successfully modulated, with high energy efficiency, by actively adjusting the surface charge of slit-nanochannels 50, 110, and 160 nm in size, by the polarization of a buried gate electrode and the consequent variation of the electrical double layer in the nanochannel. We demonstrated control over the transport of ionic species, including two relevant hypertension drugs, atenolol and perindopril, that could benefit from such modulation. By leveraging concentration-driven diffusion, we achieve a 2 to 3 order of magnitude reduction in power consumption as compared to other electrokinetic phenomena. The application of a small gate potential (±5 V) in close proximity (150 nm) of 50 nm nanochannels generated a sufficiently strong electric field, which doubled or blocked the ionic flux depending on the polarity of the voltage applied. These compelling findings can lead to next generation, more reliable, smaller, and longer lasting drug delivery implants with ultra-low power consumption.

  15. Validation and Improvement of Reliability Methods for Air Force Building Systems

    DTIC Science & Technology

    focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force...probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a...Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that
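
    For readers unfamiliar with the stochastic method named in the snippet, the sketch below evaluates a power-law (Crow-AMSAA) Nonhomogeneous Poisson Process: the expected cumulative number of failures and the probability of surviving a future interval. The parameter values are assumed for illustration and are not the report's fitted HVAC values.

```python
# Sketch of a power-law NHPP (Crow-AMSAA) reliability calculation for a
# repairable system such as an HVAC unit. Parameters are illustrative only;
# the report's fitted values are not reproduced here.
import math

a, b = 0.08, 1.4          # hypothetical scale and shape (b > 1: deteriorating system)

def cumulative_failures(t):
    """Expected number of failures by time t (t in months)."""
    return a * t ** b

def interval_reliability(t, delta):
    """Probability of zero failures in (t, t + delta] under the NHPP."""
    return math.exp(-(cumulative_failures(t + delta) - cumulative_failures(t)))

for t in (12, 60, 120):
    print(f"month {t:3d}: expected failures so far = {cumulative_failures(t):5.1f}, "
          f"P(no failure next 12 months) = {interval_reliability(t, 12):.3f}")
```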

  16. An approximation formula for a class of fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1986-01-01

    An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
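
    A minimal example of the kind of finite-state Markov model referred to here is sketched below: a reconfigurable triplex processor with an illustrative failure rate and recovery rate, whose probability of system failure over a mission is computed from the generator matrix and compared with a leading-order hand approximation. The rates, the states, and the comparison formula are assumptions for illustration, not the paper's model or its approximation formula.

```python
# Sketch of a finite-state Markov model for a reconfigurable triplex computer
# (illustrative rates; the paper derives an approximation formula for models
# of this kind rather than this specific chain).
import numpy as np
from scipy.linalg import expm

lam = 1e-4      # per-hour failure rate of one processor
delta = 360.0   # per-hour reconfiguration (recovery) rate, i.e. ~10 seconds

# States: 0 = triplex OK, 1 = one failure awaiting reconfiguration,
#         2 = reconfigured simplex, 3 = system failure (absorbing).
Q = np.array([
    [-3 * lam,       3 * lam,            0.0,    0.0],
    [0.0,      -(delta + 2 * lam),     delta,  2 * lam],
    [0.0,            0.0,              -lam,    lam],
    [0.0,            0.0,               0.0,    0.0],
])

T = 10.0                      # mission time in hours
p0 = np.array([1.0, 0.0, 0.0, 0.0])
pT = p0 @ expm(Q * T)         # state distribution at mission end

print(f"P(system failure by {T} h) = {pT[3]:.3e}")
print(f"leading-order approximation 1.5*(lam*T)**2 = {1.5 * (lam * T) ** 2:.3e}")
```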

  17. Ultra-high Temperature Emittance Measurements for Space and Missile Applications

    NASA Technical Reports Server (NTRS)

    Rogers, Jan; Crandall, David

    2009-01-01

    Advanced modeling and design efforts for many aerospace components require high temperature emittance data. Applications requiring emittance data include propulsion systems, radiators, aeroshells, heatshields/thermal protection systems, and leading edge surfaces. The objective of this work is to provide emittance data at ultra-high temperatures. MSFC has a new instrument for the measurement of emittance at ultra-high temperatures, the Ultra-High Temperature Emissometer System (Ultra-HITEMS). AZ Technology Inc. developed the instrument, designed to provide emittance measurements over the temperature range 700-3500K. The Ultra-HITEMS instrument measures the emittance of samples, heated by lasers, in vacuum, using a blackbody source and a Fourier Transform Spectrometer. Detectors in a Nicolet 6700 FT-IR spectrometer measure emittance over the spectral range of 0.4-25 microns. Emitted energy from the specimen and output from a Mikron M390S blackbody source at the same temperature with matched collection geometry are measured. Integrating emittance over the spectral range yields the total emittance. The ratio provides a direct measure of total hemispherical emittance. Samples are heated using lasers. Optical pyrometry provides temperature data. Optical filters prevent interference from the heating lasers. Data for Inconel 718 show excellent agreement with results from literature and ASTM 835. Measurements taken from levitated spherical specimens provide total hemispherical emittance data; measurements taken from flat specimens mounted in the chamber provide near-normal emittance data. Data from selected characterization studies will be presented. The Ultra-HITEMS technique could advance space and missile technologies by advancing the knowledge base and the technology readiness level for ultra-high temperature materials.

  18. Assessment of lumbosacral kyphosis in spondylolisthesis: a computer-assisted reliability study of six measurement techniques

    PubMed Central

    Glavas, Panagiotis; Mac-Thiong, Jean-Marc; Parent, Stefan; de Guise, Jacques A.

    2008-01-01

    Although recognized as an important aspect in the management of spondylolisthesis, there is no consensus on the most reliable and optimal measure of lumbosacral kyphosis (LSK). Using a custom computer software, four raters evaluated 60 standing lateral radiographs of the lumbosacral spine during two sessions at a 1-week interval. The sample size consisted of 20 normal, 20 low and 20 high grade spondylolisthetic subjects. Six parameters were included for analysis: Boxall’s slip angle, Dubousset’s lumbosacral angle (LSA), the Spinal Deformity Study Group’s (SDSG) LSA, dysplastic SDSG LSA, sagittal rotation (SR), kyphotic Cobb angle (k-Cobb). Intra- and inter-rater reliability for all parameters was assessed using intra-class correlation coefficients (ICC). Correlations between parameters and slip percentage were evaluated with Pearson coefficients. The intra-rater ICC’s for all the parameters ranged between 0.81 and 0.97 and the inter-rater ICC’s were between 0.74 and 0.98. All parameters except sagittal rotation showed a medium to large correlation with slip percentage. Dubousset’s LSA and the k-Cobb showed the largest correlations (r = −0.78 and r = −0.50, respectively). SR was associated with the weakest correlation (r = −0.10). All other parameters had medium correlations with percent slip (r = 0.31–0.43). All measurement techniques provided excellent inter- and intra-rater reliability. Dubousset’s LSA showed the strongest correlation with slip grade. This parameter can be used in the clinical setting with PACS software capabilities to assess LSK. A computer-assisted technique is recommended in order to increase the reliability of the measurement of LSK in spondylolisthesis. PMID:19015898

  19. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    FACILITY The Computer Management Information Facility ( CMIF ) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  20. Field Instrumentation With Bricks: Wireless Networks Built From Tough, Cheap, Reliable Field Computers

    NASA Astrophysics Data System (ADS)

    Fatland, D. R.; Anandakrishnan, S.; Heavner, M.

    2004-12-01

    We describe tough, cheap, reliable field computers configured as wireless networks for distributed high-volume data acquisition and low-cost data recovery. Running under the GNU/Linux open source model these network nodes ('Bricks') are intended for either autonomous or managed deployment for many months in harsh Arctic conditions. We present here results from Generation-1 Bricks used in 2004 for glacier seismology research in Alaska and Antarctica and describe future generation Bricks in terms of core capabilities and a growing list of field applications. Subsequent generations of Bricks will feature low-power embedded architecture, large data storage capacity (GB), long range telemetry (15 km+ up from 3 km currently), and robust operational software. The list of Brick applications is growing to include Geodetic GPS, Bioacoustics (bats to whales), volcano seismicity, tracking marine fauna, ice sounding via distributed microwave receivers and more. This NASA-supported STTR project capitalizes on advancing computer/wireless technology to get scientists more data per research budget dollar, solving system integration problems and thereby getting researchers out of the hardware lab and into the field. One exemplary scenario: An investigator can install a Brick network in a remote polar environment to collect data for several months and then fly over the site to recover the data via wireless telemetry. In the past year Brick networks have moved beyond proof-of-concept to the full-bore development and testing stage; they will be a mature and powerful tool available for IPY 2007-8.

  1. Mobile-bearing knee systems: ultra-high molecular weight polyethylene wear and design issues.

    PubMed

    Greenwald, A Seth; Heim, Christine S

    2005-01-01

    In June 2004, the U.S. Food and Drug Administration Orthopaedic Advisory Panel recommended the reclassification of mobile-bearing knee systems for general use. This reflects the increasing use of mobile-bearing knee systems internationally, which is currently limited in the United States by regulatory requirement. Mobile-bearing knee systems are distinguished from conventional, fixed-plateau systems in that they allow dual-surface articulation between an ultra-high molecular weight polyethylene insert and metallic femoral and tibial tray components. Their in vivo success is dependent on patient selection, design, and material choice, as well as surgical precision during implantation. Laboratory and clinical experience extending over 25 years with individual systems suggests that mobile-bearing knee systems represent a viable treatment option for patients with knee arthrosis.

  2. Ultimately Reliable Pyrotechnic Systems

    NASA Technical Reports Server (NTRS)

    Scott, John H.; Hinkel, Todd

    2015-01-01

    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Opportunity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing

  3. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
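
    The sketch below shows the sort of conjugate Bayesian update such a tool automates: a gamma prior on a component failure rate combined with Poisson failure counts over accumulated operating time. The prior parameters, the data, and the choice of gamma-Poisson conjugacy are illustrative assumptions; DATMAN's actual distribution menus and fitting procedure are not reproduced.

```python
# Sketch of a conjugate Bayesian update for a component failure rate, the kind
# of calculation a tool like DATMAN automates (gamma prior + Poisson failures;
# DATMAN's actual menus and distribution fitting are not reproduced here).
import math
from scipy.stats import gamma

# Prior belief about the failure rate lambda (failures per hour).
alpha0, beta0 = 2.0, 20_000.0         # prior mean = alpha0/beta0 = 1e-4 per hour

# New plant data: failures observed over accumulated component operating time.
n_failures, op_hours = 3, 50_000.0

alpha1 = alpha0 + n_failures          # posterior shape
beta1 = beta0 + op_hours              # posterior rate

post = gamma(a=alpha1, scale=1.0 / beta1)
mission = 1000.0                      # hours

print(f"posterior mean failure rate: {post.mean():.2e} per hour")
print(f"90% credible interval: ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e})")
print(f"mission reliability at posterior mean rate: {math.exp(-post.mean() * mission):.4f}")
```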

  4. Analysis Article on the Performance Analysis of the OneTouch® UltraVue™ Blood Glucose Monitoring System

    PubMed Central

    Solnica, Bogdan

    2009-01-01

    In this issue of Journal of Diabetes Science and Technology, Chang and colleagues present the analytical performance evaluation of the OneTouch® UltraVue™ blood glucose meter. This device has an advanced design, with a color display, used-strip ejector, no-button interface, and short assay time. Accuracy studies were performed using a YSI 2300 analyzer, considered the reference. Altogether, 349 pairs of results covering a wide range of blood glucose concentrations were analyzed. Patients with diabetes performed a significant part of the tests. The obtained results indicate good accuracy of the OneTouch UltraVue blood glucose monitoring system, satisfying the International Organization for Standardization recommendations, with >95% of tests falling within zone A of the error grid. Results of the precision studies indicate good reproducibility of measurements. In conclusion, the evaluation of the OneTouch UltraVue meter revealed good analytical performance together with convenient handling, useful for self-monitoring of blood glucose performed by elderly diabetes patients. PMID:20144432

  5. An Ultra-High Speed Whole Slide Image Viewing System

    PubMed Central

    Yagi, Yukako; Yoshioka, Shigeatsu; Kyusojin, Hiroshi; Onozato, Maristela; Mizutani, Yoichi; Osato, Kiyoshi; Yada, Hiroaki; Mark, Eugene J.; Frosch, Matthew P.; Louis, David N.

    2012-01-01

    Background: One of the goals for a Whole Slide Imaging (WSI) system is implementation in the clinical practice of pathology. One of the unresolved problems in accomplishing this goal is the speed of the entire process, i.e., from viewing the slides through making the final diagnosis. Most users are not satisfied with the viewing speeds of available systems. We have evaluated a new WSI viewing station and tool that focuses on speed. Method: A prototype WSI viewer based on PlayStation®3 with wireless controllers was evaluated at the Department of Pathology at MGH for the following reasons: 1. For the simulation of signing-out cases; 2. Enabling discussion at a consensus conference; and 3. Use at slide seminars during a Continuing Medical Education course. Results: Pathologists were able to use the system comfortably after 0–15 min of training. There were no complaints regarding speed. Most pathologists were satisfied with the functionality, usability and speed of the system. The most difficult situation was simulating diagnostic sign-out. Conclusion: The preliminary results of adapting the Sony PlayStation®3 (PS3®) as an ultra-high speed WSI viewing system were promising. The achieved speed is consistent with what would be needed to use WSI in daily practice. PMID:22063731

  6. An ultra-high speed Whole Slide Image viewing system.

    PubMed

    Yagi, Yukako; Yoshioka, Shigeatsu; Kyusojin, Hiroshi; Onozato, Maristela; Mizutani, Yoichi; Osato, Kiyoshi; Yada, Hiroaki; Mark, Eugene J; Frosch, Matthew P; Louis, David N

    2012-01-01

    One of the goals for a Whole Slide Imaging (WSI) system is implementation in the clinical practice of pathology. One of the unresolved problems in accomplishing this goal is the speed of the entire process, i.e., from viewing the slides through making the final diagnosis. Most users are not satisfied with the viewing speeds of available systems. We have evaluated a new WSI viewing station and tool that focuses on speed. A prototype WSI viewer based on PlayStation®3 with wireless controllers was evaluated at the Department of Pathology at MGH for the following reasons: 1. For the simulation of signing-out cases; 2. Enabling discussion at a consensus conference; and 3. Use at slide seminars during a Continuing Medical Education course. Pathologists were able to use the system comfortably after 0-15 min of training. There were no complaints regarding speed. Most pathologists were satisfied with the functionality, usability and speed of the system. The most difficult situation was simulating diagnostic sign-out. The preliminary results of adapting the Sony PlayStation®3 (PS3®) as an ultra-high speed WSI viewing system were promising. The achieved speed is consistent with what would be needed to use WSI in daily practice.

  8. Accuracy and reliability of tablet computer as an imaging console for detection of radiological signs of acute appendicitis using PACS workstation as reference standard.

    PubMed

    Awais, Muhammad; Khan, Dawar Burhan; Barakzai, Muhammad Danish; Rehman, Abdul; Baloch, Noor Ul-Ain; Nadeem, Naila

    2018-05-01

    To ascertain the accuracy and reliability of a tablet computer as an imaging console for the detection of radiological signs of acute appendicitis [on focused appendiceal computed tomography (FACT)], using a Picture Archiving and Communication System (PACS) workstation as the reference standard. From January 2014 to June 2015, 225 patients underwent FACT at our institution. These scans were blindly re-interpreted by an independent consultant radiologist, first on a PACS workstation and, two weeks later, on a tablet. Scans were interpreted for the presence of radiological signs of acute appendicitis. The accuracy of the tablet was calculated using PACS as the reference standard. Kappa (κ) statistics were calculated as a measure of reliability. Of 225 patients, 99 had radiological evidence of acute appendicitis on the PACS workstation. The tablet was 100% accurate in detecting radiological signs of acute appendicitis. Appendicoliths, free fluid, lymphadenopathy, phlegmon/abscess, and perforation were identified on PACS in 90, 43, 39, 10, and 12 scans, respectively. There was excellent agreement between the tablet and PACS for detection of appendicolith (κ = 0.924), phlegmon/abscess (κ = 0.904), free fluid (κ = 0.863), lymphadenopathy (κ = 0.879), and perforation (κ = 0.904). As an imaging console, the tablet computer was highly reliable and as accurate as the PACS workstation for the radiological diagnosis of acute appendicitis.
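
    For reference, Cohen's kappa as used here can be computed from a 2x2 agreement table as sketched below; the counts are invented for illustration and are not the study's data.

```python
# Sketch: Cohen's kappa for agreement between two readings (tablet vs PACS)
# of a binary radiological sign. Counts below are made up for illustration.
def cohens_kappa(a, b, c, d):
    """2x2 table: a = both positive, b = PACS+/tablet-, c = PACS-/tablet+, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_pos = ((a + b) / n) * ((a + c) / n)      # chance agreement on "positive"
    p_neg = ((c + d) / n) * ((b + d) / n)      # chance agreement on "negative"
    p_expected = p_pos + p_neg
    return (p_observed - p_expected) / (1 - p_expected)

# Example: 88 scans positive on both, 2 discordant each way, 133 negative on both.
print(f"kappa = {cohens_kappa(88, 2, 2, 133):.3f}")
```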

  9. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. In the field of driver assistance systems in particular, scientific progress has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror of a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.

  10. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
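
    A toy version of the aggregation idea is sketched below: component-level reliabilities are combined according to a nested series/parallel architecture description. The dictionary-based model format and the recursive aggregation are illustrative assumptions; the patented generator supports richer low-level models and produces input for a separate reliability evaluation tool.

```python
# Sketch of aggregating low-level reliability models according to a system
# architecture description (series/parallel composition only; the patented
# generator handles richer model types and feeds a reliability evaluation tool).

# Low-level models: here just a mission reliability per component.
component_reliability = {"cpu_a": 0.995, "cpu_b": 0.995, "bus": 0.999, "sensor": 0.990}

# Architecture description: nested series/parallel structure.
architecture = ("series", [
    ("parallel", ["cpu_a", "cpu_b"]),   # redundant processors
    "bus",
    "sensor",
])

def aggregate(node):
    """Recursively compute reliability of a series/parallel architecture node."""
    if isinstance(node, str):
        return component_reliability[node]
    kind, children = node
    child_r = [aggregate(c) for c in children]
    if kind == "series":
        r = 1.0
        for x in child_r:
            r *= x
        return r
    if kind == "parallel":
        q = 1.0
        for x in child_r:
            q *= (1.0 - x)
        return 1.0 - q
    raise ValueError(f"unknown composition: {kind}")

print(f"system reliability: {aggregate(architecture):.6f}")
```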

  11. Is computed tomography an accurate and reliable method for measuring total knee arthroplasty component rotation?

    PubMed

    Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario

    2016-04-01

    Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.

  12. Reliability and accuracy of Crystaleye spectrophotometric system.

    PubMed

    Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping

    2010-01-01

    To develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer. Four shade guides, VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC, were measured with the Crystaleye spectrophotometer in a standardised model, ten times for 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab both in percentage and in colour difference (ΔE). Difference was analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was prone to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability in different shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
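
    The ΔE units quoted above are CIELAB colour differences; assuming the simple CIE76 formula (this excerpt does not state which ΔE variant the study used), a repeatability figure can be computed as sketched below with invented L*a*b* readings.

```python
# Sketch: CIE76 colour difference (Delta E*ab) between two CIE L*a*b* readings,
# the metric the Crystaleye repeatability figures are expressed in (whether the
# study used CIE76 or a newer Delta E formula is not stated in this excerpt).
import math

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Two hypothetical repeated measurements of the body region of a shade tab.
m1 = (72.4, 1.8, 18.9)
m2 = (72.3, 1.9, 19.1)
print(f"Delta E between repeats: {delta_e_cie76(m1, m2):.2f}")
```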

  13. SMUVS: Spitzer Matching survey of the UltraVISTA ultra-deep Stripes

    NASA Astrophysics Data System (ADS)

    Caputi, Karina; Ashby, Matthew; Fazio, Giovanni; Huang, Jiasheng; Dunlop, James; Franx, Marijn; Le Fevre, Olivier; Fynbo, Johan; McCracken, Henry; Milvang-Jensen, Bo; Muzzin, Adam; Ilbert, Olivier; Somerville, Rachel; Wechsler, Risa; Behroozi, Peter; Lu, Yu

    2014-12-01

    We request 2026.5 hours to homogenize the matching ultra-deep IRAC data of the UltraVISTA ultra-deep stripes, producing a final area of ~0.6 square degrees with the deepest near- and mid-IR coverage existing in any such large area of the sky (H, Ks, [3.6], [4.5] ~ 25.3-26.1 AB mag; 5 sigma). The UltraVISTA ultra-deep stripes are contained within the larger COSMOS field, which has a rich collection of multi-wavelength, ancillary data, making it ideal to study different aspects of galaxy evolution with high statistical significance and excellent redshift accuracy. The UltraVISTA ultra-deep stripes are the region of the COSMOS field where these studies can be pushed to the highest redshifts, but securely identifying high-z galaxies, and determining their stellar masses, will only be possible if ultra-deep mid-IR data are available. Our IRAC observations will allow us to: 1) extend the galaxy stellar mass function at redshifts z=3 to z=5 to the intermediate mass regime (M~5x10^9-10^10 Msun), which is critical to constrain galaxy formation models; 2) gain a factor of six in the area where it is possible to effectively search for z>=6 galaxies and study their properties; 3) measure, for the first time, the large-scale structure traced by an unbiased galaxy sample at z=5 to z=7, and make the link to their host dark matter haloes. This cannot be done in any other field of the sky, as the UltraVISTA ultra-deep stripes form a quasi-contiguous, regular-shape field, which has a unique combination of large area and photometric depth. 4) provide a unique resource for the selection of secure z>5 targets for JWST and ALMA follow up. Our observations will have an enormous legacy value which amply justifies this new observing-time investment in the COSMOS field. Spitzer cannot miss this unique opportunity to open up a large 0.6 square-degree window to the early Universe.

  14. Inter- and intraobserver reliability of the vertebral, local and segmental kyphosis in 120 traumatic lumbar and thoracic burst fractures: evaluation in lateral X-rays and sagittal computed tomographies

    PubMed Central

    Brunner, Alexander; Gühring, Markus; Schmälzle, Traude; Weise, Kuno; Badke, Andreas

    2009-01-01

    Evaluation of the kyphosis angle in thoracic and lumbar burst fractures is often used to indicate surgical procedures. The kyphosis angle can be measured as vertebral, segmental and local kyphosis according to the method of Cobb. The vertebral, segmental and local kyphosis according to the method of Cobb were measured on 120 lateral X-rays and sagittal computed tomographies of 60 thoracic and 60 lumbar burst fractures by 3 independent observers on 2 separate occasions. Osteoporotic fractures were excluded. The intra- and interobserver reliability of these angles in X-ray and computed tomography was evaluated using the intraclass correlation coefficient (ICC). The segmental kyphosis showed the highest reproducibility, followed by the vertebral kyphosis. For thoracic fractures, segmental kyphosis showed “excellent” inter- and intraobserver reliabilities in X-ray (ICC 0.826, 0.802) and, for lumbar fractures, “good” to “excellent” inter- and intraobserver reliabilities (ICC = 0.790, 0.803). In computed tomography, the segmental kyphosis showed “excellent” inter- and intraobserver reliabilities (ICC = 0.824, 0.801) for thoracic and “excellent” inter- and intraobserver reliabilities (ICC = 0.874, 0.835) for the lumbar fractures. Regarding both diagnostic work-ups (X-ray and computed tomography), significant differences were found in interobserver reliabilities for vertebral kyphosis measured in lumbar fracture X-rays (p = 0.035) and in interobserver reliabilities for local kyphosis measured in thoracic fracture X-rays (p = 0.010). Regarding both fracture localizations (thoracic and lumbar fractures), significant differences could only be found in interobserver reliabilities for the local kyphosis measured in computed tomographies (p = 0.045) and in intraobserver reliabilities for the vertebral kyphosis measured in X-rays (p = 0.024). “Good” to “excellent” inter- and intraobserver reliabilities for vertebral, segmental and local

  15. High Resolution X-Ray Micro-CT of Ultra-Thin Wall Space Components

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, R. W.; Bowman, Randy R.; Bonacuse, Peter; Martin, Richard E.; Locci, I. E.; Kelley, M.

    2012-01-01

    A high resolution micro-CT system has been assembled and is being used to provide optimal characterization for ultra-thin wall space components. The Glenn Research Center NDE Sciences Team, using this CT system, has assumed the role of inspection vendor for the Advanced Stirling Convertor (ASC) project at NASA. This article will discuss many aspects of the development of the CT scanning for this type of component, including CT system overview; inspection requirements; process development, software utilized and developed to visualize, process, and analyze results; calibration sample development; results on actual samples; correlation with optical/SEM characterization; CT modeling; and development of automatic flaw recognition software. Keywords: Nondestructive Evaluation, NDE, Computed Tomography, Imaging, X-ray, Metallic Components, Thin Wall Inspection

  16. Comparison of core-shell and totally porous ultra high performance liquid chromatographic stationary phases based on their selectivity towards alfuzosin compounds.

    PubMed

    Szulfer, Jarosław; Plenis, Alina; Bączek, Tomasz

    2014-06-13

    This paper focuses on the application of a column classification system based on the Katholieke Universiteit Leuven method for the characterization of the physicochemical properties of core-shell and ultra-high performance liquid chromatographic stationary phases, followed by verification of the reliability of the obtained column classification in pharmaceutical practice. In the study, 7 stationary phases produced with core-shell technology and 18 ultra-high performance liquid chromatographic columns were chromatographically tested, and ranking lists were built on the FKUL-values calculated against two selected reference columns. In the column performance test, an analysis of alfuzosin in the presence of related substances was carried out using the brands of stationary phases with the highest ranking positions. Next, a system suitability test as described by the European Pharmacopoeia monograph was performed. Moreover, a study was also performed to achieve a purposeful shortening of the analysis time of the compounds of interest using the selected stationary phases. Finally, it was checked whether methods using core-shell and ultra-high performance liquid chromatographic columns can be an interesting alternative to the high-performance liquid chromatographic method for the analysis of alfuzosin in pharmaceutical practice.

  17. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.

  18. Test of the Center for Automated Processing of Hardwoods' Auto-Image Detection and Computer-Based Grading and Cutup System

    Treesearch

    Philip A. Araman; Janice K. Wiedenbeck

    1995-01-01

    Automated lumber grading and yield optimization using computer controlled saws will be plausible for hardwoods if and when lumber scanning systems can reliably identify all defects by type. Existing computer programs could then be used to grade the lumber, identify the best cut-up solution, and control the sawing machines. The potential value of a scanning grading...

  19. Search for ultra high energy astrophysical neutrinos with the ANITA experiment

    NASA Astrophysics Data System (ADS)

    Romero-Wolf, Andrew

    2010-12-01

    This work describes a search for cosmogenic neutrinos at energies above 10¹⁸ eV with the Antarctic Impulsive Transient Antenna (ANITA). ANITA is a balloon-borne radio interferometer designed to measure radio impulsive emission from particle showers produced in the Antarctic ice-sheet by ultra-high energy neutrinos (UHEnu). Flying at 37 km altitude the ANITA detector is sensitive to 1M km³ of ice and is expected to produce the highest exposure to ultra high energy neutrinos to date. The design, flight performance, and analysis of the first flight of ANITA in 2006 are the subject of this dissertation. Due to sparse anthropogenic backgrounds throughout the Antarctic continent, the ANITA analysis depends on high resolution directional reconstruction. An interferometric method was developed that not only provides high resolution but is also sensitive to very weak radio emissions. The results of ANITA provide the strongest constraints on current ultra-high energy neutrino models. In addition there was a serendipitous observation of ultra-high energy cosmic ray geosynchrotron emissions that are of distinct character from the expected neutrino signal. This thesis includes a study of the radio Cherenkov emission from ultra-high energy electromagnetic showers in ice in the time-domain. All previous simulations computed the radio pulse frequency spectrum. I developed a purely time-domain algorithm for computing radiation using the vector potentials of charged particle tracks. The results are fully consistent with previous frequency domain calculations and shed new light into the properties of the radio pulse in the time domain. The shape of the pulse in the time domain is directly related to the depth development of the excess charge in the shower and its width to the observation angle with respect to the Cherenkov direction. This information can be of great practical importance for interpreting actual data.

  20. Reliability of mobile systems in construction

    NASA Astrophysics Data System (ADS)

    Narezhnaya, Tamara; Prykina, Larisa

    2017-10-01

    The purpose of the article is to analyze the influence of the mobility of construction production, taking into account the properties of reliability and readiness. Based on the studied systems, their effectiveness and efficiency are estimated. The construction system is considered to be the complete organizational structure providing the creation or updating of construction facilities. At the same time, the production sphere of these systems includes the production on the building site itself, the material and technical resources of construction production, and live labour in these spheres within the dynamics of construction. The author concludes that estimating the degree of mobility of construction production systems has a strongly positive effect on the project.

  1. Reliability of semiautomated computational methods for estimating tibiofemoral contact stress in the Multicenter Osteoarthritis Study.

    PubMed

    Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
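
    The reliability statistic used here, a Shrout-Fleiss ICC for a two-way random-effects design with single raters, can be computed from the ANOVA mean squares as sketched below. The rating matrix is invented for illustration; the formula shown is the standard ICC(2,1), which may differ from the exact variant chosen in the study.

```python
# Sketch of a Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement,
# single rater) computed from a small made-up rating matrix; the study applied
# this type of coefficient to peak/mean contact stress from repeated segmentations.
import numpy as np

# rows = knees (subjects), columns = raters; values could be peak contact stress (MPa)
ratings = np.array([
    [4.1, 4.3, 4.2],
    [5.6, 5.5, 5.8],
    [3.2, 3.4, 3.3],
    [6.0, 5.8, 6.1],
    [4.8, 4.9, 4.7],
])
n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)
col_means = ratings.mean(axis=0)

ss_rows = k * ((row_means - grand) ** 2).sum()
ss_cols = n * ((col_means - grand) ** 2).sum()
ss_total = ((ratings - grand) ** 2).sum()
ss_err = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)              # between-subjects mean square
msc = ss_cols / (k - 1)              # between-raters mean square
mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```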

  2. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    PubMed Central

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
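
    The sketch below gives a much-simplified stand-in for the described computation: a component probabilistic dependency graph whose nodes carry component reliabilities and whose edges carry scenario-derived transition probabilities, traversed with an explicit stack to accumulate system reliability. The graph, the probabilities, and the acyclicity assumption are illustrative; the paper's CPDG construction and its handling of loop entry and exit points are not reproduced.

```python
# Sketch of computing system reliability from a component probabilistic
# dependency graph (CPDG): each node has a component reliability, each edge a
# transition probability, and path reliabilities are accumulated with an
# explicit stack. This is a simplified stand-in for the paper's algorithm and
# assumes an acyclic graph (the paper additionally handles loop entry/exit).
node_reliability = {"ui": 0.999, "ctrl": 0.995, "nav": 0.990, "motor": 0.985, "end": 1.0}

# edges: node -> list of (next_node, transition_probability); probabilities out
# of a node sum to 1 (derived from scenario/operational-profile data).
edges = {
    "ui":    [("ctrl", 1.0)],
    "ctrl":  [("nav", 0.7), ("motor", 0.3)],
    "nav":   [("motor", 1.0)],
    "motor": [("end", 1.0)],
    "end":   [],
}

def system_reliability(start="ui", goal="end"):
    total = 0.0
    stack = [(start, node_reliability[start])]     # (current node, reliability so far)
    while stack:
        node, rel = stack.pop()
        if node == goal:
            total += rel
            continue
        for nxt, p in edges[node]:
            stack.append((nxt, rel * p * node_reliability[nxt]))
    return total

print(f"predicted system reliability: {system_reliability():.4f}")
```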

  3. Programmable Ultra Lightweight System Adaptable Radio (PULSAR) Low Cost Telemetry - Access from Space Advanced Technologies or Down the Middle

    NASA Technical Reports Server (NTRS)

    Sims, Herb; Varnavas, Kosta; Eberly, Eric

    2013-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of the SDR technology for the modern communications market. In contrast, presently qualified satellite transponder applications were developed during the early 1960's space program. Programmable Ultra Lightweight System Adaptable Radio (PULSAR, NASA-MSFC SDR) technology revolutionizes satellite transponder technology by increasing data through-put capability by, at least, an order of magnitude. PULSAR leverages existing Marshall Space Flight Center SDR designs and commercially enhanced capabilities to provide a path to a radiation tolerant SDR transponder. These innovations will (1) reduce the cost of NASA Low Earth Orbit (LEO) and Deep Space transponders, (2) decrease power requirements, and (3) provide a commensurate volume reduction. Also, PULSAR increases flexibility to implement multiple transponder types by utilizing the same hardware with altered logic - no analog hardware change is required - all of which can be accomplished in orbit. This provides high capability, low cost, transponders to programs of all sizes. The final project outcome would be the introduction of a Technology Readiness Level (TRL) 7 low-cost CubeSat to SmallSat telemetry system into the NASA Portfolio.

  4. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and high risks associated with decisions in redundancy management due to multiple sources of uncertainties and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of a superior robustness, with an estimator of a higher resolution, and with a control performance requirement of a lesser stringency.

  5. [Reliability of a positron emission tomography system (CTI:PT931/04-12)].

    PubMed

    Watanuki, Shoichi; Ishii, Keizo; Itoh, Masatoshi; Orihara, Hikonojyo

    2002-05-01

    The maintenance data of a PET system (PT931/04-12, CTI Inc.) were analyzed to evaluate its reliability. We examined whether the initial performance in terms of system resolution and efficiency had been maintained. The reliability of the PET system was evaluated from the MTTF (mean time to failure) and MTBF (mean time between failures) values for each part of the system, obtained from 13 years of maintenance data. The initial performance was maintained for the resolution, but the efficiency decreased to 72% of its initial value. 83% of the system troubles involved the detector block (DB) and the DB control module (BC). The MTTF of DB and BC were 2,733 and 3,314 days, and the MTBF of DB and BC per detector ring were 38 and 114 days. The MTBF of the system was 23 days. We found a seasonal dependence in the number of DB and BC troubles, which suggests that these failures may be related to humidity. The reliability of the PET system strongly depends on the MTBF of DB and BC. Improving the quality of these parts and optimizing the operating environment may increase the reliability of the PET system. For the popularization of PET, it is effective to evaluate the reliability of the system and to show it to the users.
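
    The MTBF bookkeeping behind such figures is straightforward; the sketch below computes an MTBF and a per-month failure count (to expose seasonal, humidity-related clustering) from an invented maintenance log, not from the PT931 data.

```python
# Sketch: MTBF and seasonal failure counts from a maintenance log, the kind of
# bookkeeping behind the reported detector-block figures (dates are invented).
from datetime import date
from collections import Counter

failure_dates = [date(1999, 6, 3), date(1999, 7, 21), date(1999, 8, 2),
                 date(2000, 6, 30), date(2000, 7, 15), date(2001, 8, 9)]
in_service, retired = date(1999, 1, 1), date(2001, 12, 31)

operating_days = (retired - in_service).days
mtbf_days = operating_days / len(failure_dates)
print(f"MTBF = {mtbf_days:.0f} days over {operating_days} operating days")

# Seasonal dependence: count failures per calendar month (humid months stand out).
per_month = Counter(d.month for d in failure_dates)
print("failures per month:", dict(sorted(per_month.items())))
```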

  6. Ultra-narrow EIA spectra of 85Rb atom in a degenerate Zeeman multiplet system

    NASA Astrophysics Data System (ADS)

    Rehman, Hafeez Ur; Qureshi, Muhammad Mohsin; Noh, Heung-Ryoul; Kim, Jin-Tae

    2015-05-01

    Ultra-narrow EIA spectral features of thermal 85Rb atoms with respect to coupling Rabi frequencies in a degenerate Zeeman multiplet system have been unraveled for the same (σ+ -σ+ , π ∥ π) and orthogonal (σ+ -σ- , π ⊥ π) polarization configurations. EIA signals with a subnatural linewidth of ~100 kHz, even for the same circular and linear polarizations of the coupling and probe lasers, have been obtained for the first time theoretically and experimentally. In the weak-coupling-power limit of the orthogonal polarization configurations, time-dependent transfer of coherence plays the major role in the splitting of the EIA spectra, while at strong coupling power a Mollow-triplet-like mechanism produces a broad split feature. The experimental ultra-narrow EIA features, obtained using one laser combined with an AOM, match well with spectra simulated using generalized time-dependent optical Bloch equations.

  7. A Novel Approach to Photonic Generation and Modulation of Ultra-Wideband Pulses

    NASA Astrophysics Data System (ADS)

    Xiang, Peng; Guo, Hao; Chen, Dalei; Zhu, Huatao

    2016-01-01

    A novel approach to photonic generation of ultra-wideband (UWB) signals is proposed in this paper. The proposed signal generator is capable of generating UWB doublet pulses with flexible reconfigurability, and many different pulse modulation formats, including the commonly used pulse-position modulation (PPM) and bi-phase modulation (BPM), can be realized. Moreover, the photonic UWB pulse generator is capable of generating UWB signals with a tunable spectral notch-band, which is desirable for avoiding interference between UWB and other narrowband systems, such as Wi-Fi. A mathematical model describing the proposed system is developed and the generation of UWB signals with different modulation formats is demonstrated via computer simulations.

  8. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution . We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  9. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  10. Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the usage of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: field programmable gate array (FPGA), triple modular redundancy (TMR), verification, trust, reliability.
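
    For readers unfamiliar with the redundancy scheme being verified, a minimal sketch of the TMR principle itself (not the search-detect-and-verify flow proposed in this record) is a bitwise 2-of-3 majority voter that masks a fault in any single module copy:

      # Minimal illustration of the TMR concept: three redundant copies of a signal
      # are combined by a majority voter, so one corrupted copy is masked.
      def majority_vote(a: int, b: int, c: int) -> int:
          # Bitwise 2-of-3 majority, the combining logic of a TMR voter.
          return (a & b) | (b & c) | (a & c)

      golden = 0b10110101
      faulty = golden ^ 0b00001000          # one copy hit by a single-bit upset
      assert majority_vote(golden, golden, faulty) == golden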

  11. The revised NEUROGES-ELAN system: An objective and reliable interdisciplinary analysis tool for nonverbal behavior and gesture.

    PubMed

    Lausberg, Hedda; Sloetjes, Han

    2016-09-01

    As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES-ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES-ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.

  12. Data driven CAN node reliability assessment for manufacturing system

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Yuan, Yong; Lei, Yong

    2017-01-01

    The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to the inaccessibility of node information and the complexity of node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for CAN networks is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, a stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experimental results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
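
    As a rough illustration of the modeling step described in this record, the sketch below fits a plain zero-inflated Poisson (ZIP) model to per-window error-event counts for one node; the paper's generalized ZIP process and its bus-off-time prediction are more elaborate, and the data and parameterization here are assumptions for demonstration only.

      # Illustrative sketch: maximum-likelihood fit of a zero-inflated Poisson model
      # to per-window CAN error-event counts (toy data).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      counts = np.array([0, 0, 2, 0, 1, 0, 0, 3, 0, 0, 1, 0], dtype=float)

      def zip_negloglik(params, k):
          logit_pi, log_lam = params
          pi = 1.0 / (1.0 + np.exp(-logit_pi))      # probability of the inflated "no error" state
          lam = np.exp(log_lam)                     # Poisson rate of the error-generating state
          poisson_logpmf = -lam + k * np.log(lam) - gammaln(k + 1)
          ll = np.where(k == 0,
                        np.log(pi + (1.0 - pi) * np.exp(-lam)),
                        np.log(1.0 - pi) + poisson_logpmf)
          return -ll.sum()

      fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,))
      pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
      lam_hat = np.exp(fit.x[1])
      mean_errors_per_window = (1.0 - pi_hat) * lam_hat   # expected error events per window under the fit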

  13. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small test system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new, efficient simplified LOLP formulations and new SR optimization models.
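
    To make the LOLP quantity concrete (this is a generic textbook calculation, not the paper's simplified formulations or its UC model), the sketch below evaluates the loss-of-load probability for a single hour by enumerating forced-outage states of the committed units; the capacities, forced outage rates, and load are toy values.

      # Illustrative sketch: hourly LOLP from unit forced outage rates (toy data).
      from itertools import product

      units = [          # (capacity in MW, forced outage rate) of committed units
          (200, 0.05),
          (150, 0.04),
          (100, 0.08),
      ]
      load = 320.0       # MW demand in the hour under study (assumed)

      lolp = 0.0
      for states in product([0, 1], repeat=len(units)):   # 1 = available, 0 = forced out
          prob, available = 1.0, 0.0
          for (cap, forate), up in zip(units, states):
              prob *= (1.0 - forate) if up else forate
              available += cap if up else 0.0
          if available < load:
              lolp += prob        # probability that available capacity cannot cover the load

      print(f"LOLP = {lolp:.4f}")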

  14. Cochlear Implant Electrode Localization Using an Ultra-High Resolution Scan Mode on Conventional 64-Slice and New Generation 192-Slice Multi-Detector Computed Tomography.

    PubMed

    Carlson, Matthew L; Leng, Shuai; Diehn, Felix E; Witte, Robert J; Krecke, Karl N; Grimes, Josh; Koeller, Kelly K; Bruesewitz, Michael R; McCollough, Cynthia H; Lane, John I

    2017-08-01

    A new generation 192-slice multi-detector computed tomography (MDCT) clinical scanner provides enhanced image quality and superior electrode localization over conventional MDCT. Currently, accurate and reliable cochlear implant electrode localization using conventional MDCT scanners remains elusive. Eight fresh-frozen cadaveric temporal bones were implanted with full-length cochlear implant electrodes. Specimens were subsequently scanned with conventional 64-slice and new generation 192-slice MDCT scanners utilizing ultra-high resolution modes. Additionally, all specimens were scanned with micro-CT to provide a reference criterion for electrode position. Images were reconstructed according to routine temporal bone clinical protocols. Three neuroradiologists, blinded to scanner type, reviewed images independently to assess resolution of individual electrodes, scalar localization, and severity of image artifact. Serving as the reference standard, micro-CT identified scalar crossover in one specimen; imaging of all remaining cochleae demonstrated complete scala tympani insertions. The 192-slice MDCT scanner exhibited improved resolution of individual electrodes (p < 0.01), superior scalar localization (p < 0.01), and reduced blooming artifact (p < 0.05), compared with conventional 64-slice MDCT. There was no significant difference between platforms when comparing streak or ring artifact. The new generation 192-slice MDCT scanner offers several notable advantages for cochlear implant imaging compared with conventional MDCT. This technology provides important feedback regarding electrode position and course, which may help in future optimization of surgical technique and electrode design.

  15. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 2: HARP tutorial

    NASA Technical Reports Server (NTRS)

    Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.

  16. Weighing Ultra-Cool Stars

    NASA Astrophysics Data System (ADS)

    2004-05-01

    Large Ground-Based Telescopes and Hubble Team Up to Perform First Direct Brown Dwarf Mass Measurement [1]. Summary: Using ESO's Very Large Telescope at Paranal and a suite of ground- and space-based telescopes in a four-year-long study, an international team of astronomers has measured for the first time the mass of an ultra-cool star and its companion brown dwarf. The two stars form a binary system and orbit each other in about 10 years. The team obtained high-resolution near-infrared images; on the ground, they defeated the blurring effect of the terrestrial atmosphere by means of adaptive optics techniques. By precisely determining the orbit projected on the sky, the astronomers were able to measure the total mass of the stars. Additional data and comparison with stellar models then yield the mass of each of the components. The heavier of the two stars has a mass of around 8.5% of the mass of the Sun, and its brown dwarf companion is even lighter, only 6% of the solar mass. Both objects are relatively young, with an age of about 500-1,000 million years. These observations represent a decisive step towards the still-missing calibration of stellar evolution models for very-low-mass stars. [PR Photo 19a/04: Orbit of the ultra-cool stars in 2MASSW J0746425+2000321. PR Photo 19b/04: Animation of the orbital motion.] [Telephone number star] Even though astronomers have found several hundred very-low-mass stars and brown dwarfs, the fundamental properties of these extreme objects, such as masses and surface temperatures, are still not well known. Within the cosmic zoo, these ultra-cool stars represent a class of "intermediate" objects between giant planets - like Jupiter - and "normal" stars less massive than our Sun, and to understand them well is therefore crucial to the field of stellar astrophysics. The problem with these ultra-cool stars is that, contrary to normal stars that burn hydrogen in their central core, no unique relation exists between the luminosity of the
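
    The mass determination described here rests on Kepler's third law for a visual binary. The sketch below applies that relation with the round numbers quoted in the record (a roughly 10-year period and component masses of 8.5% and 6% of a solar mass), so the semi-major axis it prints is merely the value those figures imply, not one reported by the study.

      # Illustrative only: Kepler's third law for a binary, M1 + M2 = a**3 / P**2
      # with masses in solar masses, a in AU, and P in years.
      P_years = 10.0                 # orbital period quoted in the record
      M_total = 0.085 + 0.060        # ultra-cool star + brown dwarf companion, in solar masses

      a_AU = (M_total * P_years ** 2) ** (1.0 / 3.0)   # implied relative semi-major axis
      print(f"implied semi-major axis ~ {a_AU:.1f} AU")  # roughly 2.4 AU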

  17. Effective Measurement of Reliability of Repairable USAF Systems

    DTIC Science & Technology

    2012-09-01

    Hansen presented a course, Concepts and Models for Repairable Systems Reliability, at the 2009 Centro de Investigacion en Mathematicas (CIMAT). The ... recurrent event by calculating the mean quantity of recurrent events of the population of systems at risk at that point in time. The number of systems at ... risk is the number of systems that are operating and providing information. [9] Information can be obscured by data censoring and truncation. One
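
    The "mean quantity of recurrent events of the population of systems at risk" referred to in this record appears to be the nonparametric mean cumulative function (MCF) estimator for repairable systems. A minimal sketch of that estimator, with made-up repair and censoring times, follows; at each event time the MCF increases by (events at that time) / (systems still at risk).

      # Illustrative sketch of a nonparametric MCF estimate (toy data).
      # Per-system repair times and end-of-observation (censoring) times, in operating hours.
      events = {"sys1": [120.0, 450.0], "sys2": [300.0], "sys3": []}
      censor = {"sys1": 600.0, "sys2": 500.0, "sys3": 400.0}

      mcf, cumulative = [], 0.0
      for t in sorted({t for times in events.values() for t in times}):
          at_risk = sum(1 for c in censor.values() if c >= t)          # systems still observed at t
          n_events = sum(times.count(t) for times in events.values())  # repairs occurring at t
          cumulative += n_events / at_risk
          mcf.append((t, round(cumulative, 3)))

      print(mcf)   # [(120.0, 0.333), (300.0, 0.667), (450.0, 1.167)]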

  18. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  19. Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors

    PubMed Central

    Boot, Walter R.; Charness, Neil; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-01-01

    Purpose of the Study: Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. Design and Methods: To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. Results: The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. Implications: The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. PMID:24107443
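
    The reliability statistic quoted for the CPQ is Cronbach's alpha. As a minimal sketch of how that coefficient is computed from an item-score matrix (the respondent data below are made up, not the CPQ sample):

      # Illustrative sketch: Cronbach's alpha for an item-score matrix.
      import numpy as np

      scores = np.array([      # rows = respondents, columns = questionnaire items (toy data)
          [5, 4, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [1, 2, 1, 1],
          [3, 3, 4, 3],
      ], dtype=float)

      k = scores.shape[1]
      item_variances = scores.var(axis=0, ddof=1)
      total_variance = scores.sum(axis=1).var(ddof=1)
      alpha = (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
      print(f"Cronbach's alpha = {alpha:.2f}")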

  20. Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald

    2018-03-01

    Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes, with the potential for better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS, which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from the CDDIS + IGN + BKG archives for the realization of an hourly five-constellation ultra-rapid orbit update. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy is clearly improved for all five satellite systems in comparison to the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites can be improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding accuracy improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions. Furthermore, multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed using the improved precise satellite orbits. Numerous results indicate that the combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solution achieves the shortest time to first fix (TTFF) and the highest positioning accuracy in all coordinate components. With the addition of the BDS, GLONASS and Galileo observations to the GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min with a 7° cutoff elevation, while the TTFF of the GPS-only, GR, GE and GC PPP AR solutions is 28, 15, 20 and 17 min, respectively. As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions