Sample records for software reliability studies

  1. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  2. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  3. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  4. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  5. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is 'What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?' Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Non-parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  6. An experiment in software reliability: Additional analyses using data from automated replications

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Lauterbach, Linda A.

    1988-01-01

    A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.

  7. An empirical study of flight control software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands with varying levels of additive input and feedback noise. The testing which uses the method of n-version programming for error detection surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the conduct of the error burst analyses. The pitch axis problem provides data for use in constructing a model in the prediction of the reliability of software in systems with feedback. The study is undertaken to find means to perform reliability evaluations of flight control software.

  8. Real-time software failure characterization

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Finelli, George B.

    1990-01-01

    A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on the modeling of a real-time aerospace vehicle software reliability. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.

  9. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  10. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Wilson, Larry W.

    1989-01-01

    The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they correctly simulate and were asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
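
    The Jelinski-Moranda model referred to above assumes that, with N initial faults and a per-fault hazard rate phi, the time between the (i-1)th and i-th failures is exponential with rate phi(N - i + 1). The sketch below is illustrative only, with simulated data standing in for the experiment's and no claim to be the authors' code; it fits the model by maximum likelihood, the kind of prediction exercise the appended paper describes:

    ```python
    # Hypothetical sketch: maximum-likelihood fit of the Jelinski-Moranda model
    # to interfailure times t_1..t_n, with hazard rate phi*(N - i + 1) before
    # the i-th failure. Not the experimenters' code; data are simulated.
    import numpy as np

    def fit_jelinski_moranda(t):
        t = np.asarray(t, dtype=float)
        n = len(t)

        def g(N):
            # MLE equation for N after eliminating phi
            weights = N - np.arange(n)          # N - i + 1 for i = 1..n
            return np.sum(1.0 / weights) - n * t.sum() / np.sum(weights * t)

        lo, hi = n + 1e-6, n + 1e6
        if g(lo) * g(hi) > 0:
            # No finite MLE: the data show no clear reliability growth
            return float("inf"), 0.0
        for _ in range(200):                    # bisection on N
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        N_hat = 0.5 * (lo + hi)
        phi_hat = n / np.sum((N_hat - np.arange(n)) * t)
        return N_hat, phi_hat

    # Simulated data from a process with N = 30 faults and phi = 0.05
    rng = np.random.default_rng(0)
    true_N, true_phi = 30, 0.05
    times = [rng.exponential(1.0 / (true_phi * (true_N - i))) for i in range(20)]
    print(fit_jelinski_moranda(times))
    ```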

  11. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  12. A study of software standards used in the avionics industry

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1994-01-01

    Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.

  13. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  14. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.

  15. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681

  16. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    PubMed

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
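
    Both records above report reliability and validity as intraclass correlation coefficients. As a hedged illustration (the exact ICC variant the authors used is not stated here), the following computes one common form, ICC(2,1) under a two-way random-effects, absolute-agreement, single-measure model, from a subjects-by-raters matrix of invented lordosis angles:

    ```python
    # Illustrative ICC(2,1) computation; not necessarily the variant used
    # in the study above, and the angles below are invented.
    import numpy as np

    def icc_2_1(data):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.
        data: n_subjects x k_raters array of measurements."""
        data = np.asarray(data, dtype=float)
        n, k = data.shape
        grand = data.mean()
        subj_means = data.mean(axis=1)
        rater_means = data.mean(axis=0)

        ss_total = ((data - grand) ** 2).sum()
        ss_rows = k * ((subj_means - grand) ** 2).sum()   # between subjects
        ss_cols = n * ((rater_means - grand) ** 2).sum()  # between raters
        ss_err = ss_total - ss_rows - ss_cols

        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Toy example: lordosis angles (degrees) from two measurement sessions
    angles = np.array([[38.0, 39.5], [45.2, 44.8], [50.1, 51.0], [33.4, 34.0]])
    print(round(icc_2_1(angles), 3))
    ```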

  17. The probability estimation of the electronic lesson implementation taking into account software reliability

    NASA Astrophysics Data System (ADS)

    Gurov, V. V.

    2017-01-01

    From the point of view of reliability, software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of distinctive features. The main ones are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery by replacing the tool with a similar running program during class. The article considers the peculiarities of evaluating the reliability of programs in contrast to assessments of hardware reliability. The basic requirements for the reliability of software used to conduct practical and laboratory classes in the form of computer-based training programs are given. A mathematical tool based on Markov chains is presented that determines the degree of debugging of a training program for use in the educational process by applying the graph of interactions among the software modules.

  18. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were ...

  19. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
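
    CASRE and SMERFS package a catalogue of such reliability models. Purely as an illustrative sketch, not CASRE's own implementation, the following fits one widely used model, the Goel-Okumoto NHPP, to an invented set of cumulative failure counts in order to estimate the expected number of remaining faults:

    ```python
    # Hedged sketch: least-squares fit of the Goel-Okumoto NHPP mean value
    # function m(t) = a * (1 - exp(-b t)) to cumulative failure counts,
    # illustrating the kind of model CASRE/SMERFS apply (not CASRE itself).
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical test data: time (hours) and cumulative failures observed
    t = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    cum_failures = np.array([8, 14, 19, 22, 25, 26, 27, 28], dtype=float)

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cum_failures, p0=(30.0, 0.05))
    print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
    print(f"predicted residual faults = {a_hat - goel_okumoto(80, a_hat, b_hat):.1f}")
    ```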

  20. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software in Young People with Down Syndrome.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2016-05-01

    People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.

  1. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation

    PubMed Central

    2014-01-01

    Background A balance test provides important information such as the standard to judge an individual’s functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Methods Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). Results The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. Conclusion The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment. PMID:24912769

  2. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    PubMed

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information such as the standard to judge an individual's functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment.
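
    The COP path length and COP velocity reported above are simple functions of the sampled center-of-pressure trajectory. The following sketch (assuming a fixed sampling rate and synthetic data; the internals of the authors' software are not described in the record) shows one straightforward way to compute them:

    ```python
    # Illustrative COP summary measures from a sampled trajectory; the trial
    # data below are a synthetic random walk, not measurements.
    import numpy as np

    def cop_path_length_and_velocity(cop_xy, sample_rate_hz):
        """cop_xy: (n_samples, 2) array of COP coordinates in cm."""
        cop_xy = np.asarray(cop_xy, dtype=float)
        steps = np.diff(cop_xy, axis=0)                      # per-sample displacement
        path_length = np.linalg.norm(steps, axis=1).sum()    # total path length (cm)
        duration = (len(cop_xy) - 1) / sample_rate_hz        # trial duration (s)
        mean_velocity = path_length / duration               # cm/s
        return path_length, mean_velocity

    # Toy trial: 30 s of COP data sampled at 100 Hz
    rng = np.random.default_rng(1)
    cop = np.cumsum(rng.normal(0, 0.02, size=(3000, 2)), axis=0)
    print(cop_path_length_and_velocity(cop, 100))
    ```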

  3. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed, but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.

  4. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
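
    The record's comparison of N-version and recovery block structures can be illustrated with a textbook calculation. The sketch below assumes identical, independently failing versions (which the study itself shows is optimistic) and a recovery block whose acceptance test detects a faulty result with a given probability; it is not the empirical analysis performed on the 20 programs:

    ```python
    # Textbook comparison under an independence assumption; invented numbers,
    # not the study's empirical results.
    def nvp_3_version(p):
        """3-version programming with majority voting, independent failures:
        fails when at least two versions fail on the same input."""
        return 3 * p**2 * (1 - p) + p**3

    def recovery_block(p, detect, n=3):
        """Recovery block with n alternates, each failing with probability p.
        The acceptance test detects a faulty result with probability `detect`
        and always accepts a correct one. The block fails if a faulty result
        slips past the test or if every alternate is rejected."""
        wrong_output = sum((p * detect) ** (i - 1) * p * (1 - detect)
                           for i in range(1, n + 1))
        no_output = (p * detect) ** n
        return wrong_output + no_output

    p = 0.01
    print("NVP(3):", nvp_3_version(p))
    print("RB(3), 99% effective acceptance test:", recovery_block(p, 0.99))
    print("RB(3), 80% effective acceptance test:", recovery_block(p, 0.80))
    ```

    Even this crude model reproduces the record's qualitative point: the recovery block outperforms 3-version voting only when the acceptance check itself is highly effective.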

  5. Development and assessment of a digital X-ray software tool to determine vertebral rotation in adolescent idiopathic scoliosis.

    PubMed

    Eijgenraam, Susanne M; Boselie, Toon F M; Sieben, Judith M; Bastiaenen, Caroline H G; Willems, Paul C; Arts, Jacobus J; Lataster, Arno

    2017-02-01

    The amount of vertebral rotation in the axial plane is of key importance in the prognosis and treatment of adolescent idiopathic scoliosis (AIS). Current methods to determine vertebral rotation are either designed for use in analogue plain radiographs and not useful in digital images, or lack measurement precision and are therefore less suitable for the follow-up of rotation in AIS patients. This study aimed to develop a digital X-ray software tool with high measurement precision to determine vertebral rotation in AIS, and to assess its (concurrent) validity and reliability. In this study a combination of basic science and reliability methodology applied in both laboratory and clinical settings was used. Software was developed using the algorithm of the Perdriolle torsion meter for analogue AP plain radiographs of the spine. Software was then assessed for (1) concurrent validity and (2) intra- and interobserver reliability. Plain radiographs of both human cadaver vertebrae and outpatient AIS patients were used. Concurrent validity was measured by two independent observers, both experienced in the assessment of plain radiographs. Reliability-measurements were performed by three independent spine surgeons. Pearson correlation of the software compared with the analogue Perdriolle torsion meter for mid-thoracic vertebrae was 0.98, for low-thoracic vertebrae 0.97 and for lumbar vertebrae 0.97. Measurement exactness of the software was within 5° in 62% of cases and within 10° in 97% of cases. Intraclass correlation coefficient (ICC) for inter-observer reliability was 0.92 (0.91-0.95), ICC for intra-observer reliability was 0.96 (0.94-0.97). We developed a digital X-ray software tool to determine vertebral rotation in AIS with a substantial concurrent validity and reliability, which may be useful for the follow-up of vertebral rotation in AIS patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.

  7. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure free operation of a computer program for a specified time and environment.

  8. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  9. Assessment of a spectral domain OCT segmentation software in a retrospective cohort study of exudative AMD patients.

    PubMed

    Tilleul, Julien; Querques, Giuseppe; Canoui-Poitrine, Florence; Leveziel, Nicolas; Souied, Eric H

    2013-01-01

    To assess the ability of the Spectralis optical coherence tomography (OCT) segmentation software to identify the inner limiting membrane and Bruch's membrane in exudative age-related macular degeneration (AMD) patients. Thirty-eight eyes of 38 naive exudative AMD patients were retrospectively included. They all had a complete ophthalmologic examination, including Spectralis OCT, at baseline and at months 1 and 2. Reliability of the segmentation software was assessed by 2 ophthalmologists and was defined as good if both the inner limiting membrane and Bruch's membrane were correctly drawn. A total of 38 patients' charts were reviewed (114 scans). The inner limiting membrane was correctly drawn by the segmentation software in 114/114 spectral domain OCT scans (100%). Conversely, Bruch's membrane was correctly drawn in 59/114 scans (51.8%). The software was less reliable in locating Bruch's membrane in case of pigment epithelium detachment (PED) than without PED (42.5 vs. 73.5%, respectively; p = 0.049), but its reliability was not associated with SRF or CME (p = 0.55 and p = 0.10, respectively). Segmentation of the inner limiting membrane was consistently trustworthy, but Bruch's membrane segmentation was poorly reliable using the automatic Spectralis segmentation software. Based on this software, evaluation of retinal thickness may be incorrect, particularly in case of PED. PED is effectively an important parameter which is not included when measuring retinal thickness. Copyright © 2012 S. Karger AG, Basel.

  10. A methodology for producing reliable software, volume 1

    NASA Technical Reports Server (NTRS)

    Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.

    1976-01-01

    An investigation into the areas having an impact on producing reliable software including automated verification tools, software modeling, testing techniques, structured programming, and management techniques is presented. This final report contains the results of this investigation, analysis of each technique, and the definition of a methodology for producing reliable software.

  11. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
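
    A common way to realise the architecture-based approach described above is Cheung's discrete-time Markov chain model, in which each component has its own reliability and control flow between components follows a transition matrix. The sketch below uses invented component reliabilities and transitions; it illustrates the general technique, not the paper's COSMIC-FFP based model:

    ```python
    # Hedged sketch of a Cheung-style architecture-based reliability model.
    # Component reliabilities and the transition matrix are invented.
    import numpy as np

    # Per-component reliabilities R_i (probability of correct execution)
    R = np.array([0.999, 0.995, 0.990, 0.998])

    # Transition probabilities P[i, j]: control passes from component i to j.
    # Component 3 is the exit component (no outgoing transitions).
    P = np.array([
        [0.0, 0.6, 0.4, 0.0],
        [0.0, 0.0, 0.7, 0.3],
        [0.0, 0.2, 0.0, 0.8],
        [0.0, 0.0, 0.0, 0.0],
    ])

    # S = (I - diag(R) P)^-1; entry [0, 3] is the probability of reaching the
    # exit component from component 0 with every visited component executing
    # correctly along the way.
    Q = np.diag(R) @ P
    S = np.linalg.inv(np.eye(len(R)) - Q)

    system_reliability = S[0, 3] * R[3]
    print(f"estimated system reliability: {system_reliability:.4f}")
    ```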

  12. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
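
    For context on the testing burden such a framework tries to reduce: under plain black-box statistical testing, claiming a failure probability of at most theta with confidence C requires roughly n >= ln(1 - C) / ln(1 - theta) failure-free test cases. A small illustrative calculation (not taken from the paper) is shown below:

    ```python
    # Illustrative only: baseline black-box test counts before any credit is
    # taken for other V&V evidence.
    import math

    def tests_needed(theta, confidence):
        """Failure-free test cases needed to claim failure probability <= theta
        with the given confidence: (1 - theta)^n <= 1 - confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - theta))

    for theta in (1e-3, 1e-4, 1e-5):
        print(f"theta = {theta:g}: {tests_needed(theta, 0.99):,} failure-free tests")
    ```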

  13. Evaluation of the efficiency and reliability of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1994-01-01

    There are numerous studies which show that CASE Tools greatly facilitate software development. As a result of these advantages, an increasing amount of software development is done with CASE Tools. As more software engineers become proficient with these tools, their experience and feedback lead to further development with the tools themselves. What has not been widely studied, however, is the reliability and efficiency of the actual code produced by the CASE Tools. This investigation considered these matters. Three segments of code generated by MATRIXx, one of many commercially available CASE Tools, were chosen for analysis: ETOFLIGHT, a portion of the Earth to Orbit Flight software, and ECLSS and PFMC, modules for Environmental Control and Life Support System and Pump Fan Motor Control, respectively.

  14. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255

  15. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.

    PubMed

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac

    2016-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions.
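
    As a hedged illustration of the SRGM fitting described above (the mission's defect data are not reproduced here, so the numbers below are invented), the following fits the delayed S-shaped NHPP mean value function to cumulative defect counts and reports the predicted residual defects:

    ```python
    # Hedged sketch: fitting the delayed S-shaped NHPP mean value function,
    # one of the SRGM families named in the record, to invented defect counts.
    import numpy as np
    from scipy.optimize import curve_fit

    def s_shaped(t, a, b):
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    weeks = np.arange(1, 13, dtype=float)
    cum_defects = np.array([2, 6, 13, 22, 31, 38, 43, 47, 49, 51, 52, 52], float)

    (a_hat, b_hat), _ = curve_fit(s_shaped, weeks, cum_defects, p0=(60.0, 0.3))
    residual = a_hat - s_shaped(weeks[-1], a_hat, b_hat)
    print(f"a = {a_hat:.1f}, b = {b_hat:.2f}, predicted residual defects = {residual:.1f}")
    ```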

  16. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  17. Periorbital Biometric Measurements using ImageJ Software: Standardisation of Technique and Assessment Of Intra- and Interobserver Variability

    PubMed Central

    Rajyalakshmi, R.; Prakash, Winston D.; Ali, Mohammad Javed; Naik, Milind N.

    2017-01-01

    Purpose: To assess the reliability and repeatability of periorbital biometric measurements using ImageJ software and to assess if the horizontal visible iris diameter (HVID) serves as a reliable scale for facial measurements. Methods: This study was a prospective, single-blind, comparative study. Two clinicians performed 12 periorbital measurements on 100 standardised face photographs. Each individual’s HVID was determined by Orbscan IIz and used as a scale for measurements using ImageJ software. All measurements were repeated using the ‘average’ HVID of the study population as a measurement scale. Intraclass correlation coefficient (ICC) and Pearson product-moment coefficient were used as statistical tests to analyse the data. Results: The range of ICC for intra- and interobserver variability was 0.79–0.99 and 0.86–0.99, respectively. Test-retest reliability ranged from 0.66–1.0 to 0.77–0.98, respectively. When average HVID of the study population was used as scale, ICC ranged from 0.83 to 0.99, and the test-retest reliability ranged from 0.83 to 0.96 and the measurements correlated well with recordings done with individual Orbscan HVID measurements. Conclusion: Periorbital biometric measurements using ImageJ software are reproducible and repeatable. Average HVID of the population as measured by Orbscan is a reliable scale for facial measurements. PMID:29403183
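
    The HVID-as-scale idea above amounts to a simple pixel-to-millimetre conversion. A minimal sketch with invented numbers follows (the study's actual ImageJ workflow involves more steps than this):

    ```python
    # Illustrative scale conversion only; the numbers below are hypothetical.
    def pixels_to_mm(distance_px, hvid_px, hvid_mm):
        """Convert an on-image distance to millimetres using the horizontal
        visible iris diameter (HVID) as the calibration scale. hvid_mm would
        come from Orbscan or a population average, as the record describes."""
        return distance_px * (hvid_mm / hvid_px)

    # Hypothetical case: iris spans 240 px on the photograph, Orbscan HVID is
    # 11.8 mm, and the measured inter-canthal distance is 620 px.
    print(f"{pixels_to_mm(620, 240, 11.8):.1f} mm")
    ```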

  18. Blended Training on Scientific Software: A Study on How Scientific Data Are Generated

    ERIC Educational Resources Information Center

    Skordaki, Efrosyni-Maria; Bainbridge, Susan

    2018-01-01

    This paper presents the results of a research study on scientific software training in blended learning environments. The investigation focused on training approaches followed by scientific software users whose goal is the reliable application of such software. A key issue in current literature is the requirement for a theory-substantiated…

  19. Automatic Speech Recognition: Reliability and Pedagogical Implications for Teaching Pronunciation

    ERIC Educational Resources Information Center

    Kim, In-Seok

    2006-01-01

    This study examines the reliability of automatic speech recognition (ASR) software used to teach English pronunciation, focusing on one particular piece of software, "FluSpeak," as a typical example. Thirty-six Korean English as a Foreign Language (EFL) college students participated in an experiment in which they listened to 15 sentences…

  20. A Survey of Software Reliability Modeling and Estimation

    DTIC Science & Technology

    1983-09-01

    Models considered include the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte Carlo study of the behavior of the least squares ... Proceedings Number 261, 1979, pp. 34-1 to 34-11. Sukert, Alan, and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980.

  1. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.

  2. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
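
    The "exorbitant amounts of testing" can be made concrete with a back-of-the-envelope calculation, assuming exponential failure times and zero failures observed during test; this is an illustration of the argument in these two papers, not a calculation taken from them:

    ```python
    # Illustrative back-of-the-envelope estimate of life-testing effort.
    import math

    def test_hours_required(failure_rate_per_hour, confidence):
        """Failure-free test time t needed so that exp(-lambda * t) <= 1 - confidence,
        i.e. a true rate this high would almost surely have produced a failure."""
        return math.log(1.0 - confidence) / (-failure_rate_per_hour)

    for rate in (1e-7, 1e-9):
        hours = test_hours_required(rate, 0.99)
        print(f"rate {rate:g}/h: {hours:.2e} failure-free hours (~{hours / 8766:.0f} years)")
    ```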

  3. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    DTIC Science & Technology

    1992-04-01

    The contractor's existing data collection, analysis, and corrective action system shall be utilized, with modification only as necessary to meet the ... either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a ... to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more ...

  4. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability. These reliability models are restricted to particular types of methodologies and a restricted number of parameters, yet many techniques and methodologies may be used for reliability prediction, so there is a need to focus on the choice of parameters when estimating reliability. The reliability of a system may increase or decrease depending on the parameters selected, and it is therefore necessary to identify the factors that most heavily affect the reliability of the system. Reusability is now widely used across many areas of research and is the basis of Component-Based Systems (CBS). Cost, time, and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to problems in medicine: clinical medicine makes significant use of fuzzy logic and neural network methodologies, while basic medical science most frequently and preferably uses neural networks and genetic algorithms. Medical scientists have shown considerable interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology, and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with a saving of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these soft computing techniques, assesses their use in predicting reliability, and discusses the parameters considered when estimating and predicting reliability. This study can be used in estimating and predicting the reliability of various instruments used in medical systems, as well as in software engineering, computer engineering, and mechanical engineering. These concepts can be applied to both software and hardware to predict reliability using CBSE.
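
    As a minimal, hedged illustration of one technique the paper surveys, the sketch below uses a toy genetic algorithm to search for Goel-Okumoto SRGM parameters that minimise squared error against invented cumulative failure data; published studies use far more carefully tuned GA, PSO, or ABC implementations:

    ```python
    # Toy genetic algorithm fitting Goel-Okumoto parameters (a, b); the data
    # and GA settings are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(42)
    t = np.arange(1, 11, dtype=float)
    observed = np.array([5, 9, 13, 16, 18, 20, 21, 22, 22, 23], dtype=float)

    def sse(params):
        a, b = params
        if a <= 0 or b <= 0:
            return np.inf
        model = a * (1.0 - np.exp(-b * t))
        return float(np.sum((model - observed) ** 2))

    # Initial population of candidate (a, b) pairs
    pop = np.column_stack([rng.uniform(10, 100, 40), rng.uniform(0.01, 1.0, 40)])

    for generation in range(200):
        fitness = np.array([sse(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:10]]            # keep the best 25%
        children = []
        while len(children) < len(pop) - len(parents):
            p1, p2 = parents[rng.integers(0, 10, 2)]
            child = np.where(rng.random(2) < 0.5, p1, p2)  # uniform crossover
            child = child * rng.normal(1.0, 0.05, 2)       # multiplicative mutation
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmin([sse(ind) for ind in pop])]
    print(f"GA estimate: a = {best[0]:.1f}, b = {best[1]:.3f}, SSE = {sse(best):.2f}")
    ```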

  5. Reliability and accuracy of three imaging software packages used for 3D analysis of the upper airway on cone beam computed tomography images.

    PubMed

    Chen, Hui; van Eijnatten, Maureen; Wolff, Jan; de Lange, Jan; van der Stelt, Paul F; Lobbezoo, Frank; Aarab, Ghizlane

    2017-08-01

    The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. To assess the reliability of the software packages, 15 NewTom 5G ® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using Amira ® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys ® (3diemme, Cantu, Italy) and OnDemand3D ® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements were determined using intraclass correlation coefficients and Bland & Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G ® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G ® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira ® , 3Diagnosys ® , and OnDemand3D ® , and compared these measurements with the gold standard. The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages were excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by -8.8% to -12.3%, the minimum cross-sectional area by -6.2% to -14.6%, and the length by -1.6% to -2.9%. All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate results in all software packages. All software packages underestimated the upper airway dimensions of the anthropomorphic phantom.

  6. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.

  7. An analysis of functional shoulder movements during task performance using Dartfish movement analysis software.

    PubMed

    Khadilkar, Leenesh; MacDermid, Joy C; Sinden, Kathryn E; Jenkyn, Thomas R; Birmingham, Trevor B; Athwal, George S

    2014-01-01

    Video-based movement analysis software (Dartfish) has potential for clinical applications for understanding shoulder motion if functional measures can be reliably obtained. The primary purpose of this study was to describe the functional range of motion (ROM) of the shoulder used to perform a subset of functional tasks. A second purpose was to assess the reliability of functional ROM measurements obtained by different raters using Dartfish software. Ten healthy participants, mean age 29 ± 5 years, were videotaped while performing five tasks selected from the Disabilities of the Arm, Shoulder and Hand (DASH). Video cameras and markers were used to obtain video images suitable for analysis in Dartfish software. Three repetitions of each task were performed. Shoulder movements from all three repetitions were analyzed using Dartfish software. The tracking tool of the Dartfish software was used to obtain shoulder joint angles and arcs of motion. Test-retest and inter-rater reliability of the measurements were evaluated using intraclass correlation coefficients (ICC). Maximum (coronal plane) abduction (118° ± 16°) and (sagittal plane) flexion (111° ± 15°) were observed during 'washing one's hair;' maximum extension (-68° ± 9°) was identified during 'washing one's own back.' Minimum shoulder ROM was observed during 'opening a tight jar' (33° ± 13° abduction and 13° ± 19° flexion). Test-retest reliability (ICC = 0.45 to 0.94) suggests high inter-individual task variability, and inter-rater reliability (ICC = 0.68 to 1.00) showed moderate to excellent agreement. KEY FINDINGS INCLUDE: 1) functional shoulder ROM identified in this study compared to similar studies; 2) healthy individuals require less than full ROM when performing five common ADL tasks; 3) high participant variability was observed during performance of the five ADL tasks; and 4) Dartfish software provides a clinically relevant tool to analyze shoulder function.

  8. Technical Concept Document. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-02-28

    February 1994 INFORMAL TECHNICAL REPORT For The SOFTWARE TECHNOLOGY FOR ADAPTABLE, RELIABLE SYSTEMS (STARS) Technical Concept Document, Central Archive for Reusable Defense Software (CARDS)...accordance with the DFARS Special Works Clause. Developed by: This document, developed under the Software Technology for Adaptable, Reliable Systems

  9. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
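
    As a minimal simulation sketch of the model described in the abstract (hypothetical per-fault rates, not Miller's data or inference procedure): each residual fault is assigned its own exponential detection time, the observed failure times are their order statistics, and the program's current failure rate after a debugging period is the sum of the rates of the faults not yet found.

        import numpy as np

        rng = np.random.default_rng(0)
        rates = np.array([1.0, 0.5, 0.2, 0.05, 0.01])   # hypothetical per-fault failure rates

        detect = rng.exponential(1.0 / rates)           # detection time of each fault
        t_debug = 10.0                                  # length of the debugging period
        remaining = rates[detect > t_debug]             # faults still present after debugging

        print("observed failure times (order statistics):", np.sort(detect))
        print("current program failure rate:", remaining.sum())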

  10. Production of Reliable Flight Crucial Software: Validation Methods Research for Fault Tolerant Avionics and Control Systems Sub-Working Group Meeting

    NASA Technical Reports Server (NTRS)

    Dunham, J. R. (Editor); Knight, J. C. (Editor)

    1982-01-01

    The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short and long term research.

  11. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
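
    In the Moranda geometric de-eutrophication model underlying this technique, the i-th inter-failure time is exponential with rate D·k^(i-1), where 0 < k < 1. The following sketch (hypothetical inter-failure times) fits D and k by maximum likelihood; the pivotal-function construction of exact confidence bounds described in the abstract is not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        t = np.array([0.5, 0.8, 1.1, 2.4, 3.0, 5.2, 7.9, 12.5])   # hypothetical inter-failure times

        def neg_log_lik(params):
            d, k = params
            if d <= 0 or not 0 < k < 1:
                return np.inf
            rates = d * k ** np.arange(len(t))           # rate of the i-th inter-failure time
            return -np.sum(np.log(rates) - rates * t)    # negative exponential log-likelihood

        fit = minimize(neg_log_lik, x0=[1.0, 0.9], method="Nelder-Mead")
        d_hat, k_hat = fit.x
        print(f"D = {d_hat:.3f}, k = {k_hat:.3f}, estimated next-failure rate = {d_hat * k_hat**len(t):.3f}")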

  12. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  13. Survey of Software Assurance Techniques for Highly Reliable Systems

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy

    2004-01-01

    This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.

  14. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  15. Reliability of new software in measuring cervical multifidus diameters and shoulder muscle strength in a synchronized way; an ultrasonographic study

    PubMed Central

    Rahnama, Leila; Rezasoltani, Asghar; Khalkhali-Zavieh, Minoo; Rahnama, Behnam; Noori-Kochi, Farhang

    2015-01-01

    OBJECTIVES: This study was conducted with the purpose of evaluating the inter-session reliability of new software to measure the diameters of the cervical multifidus muscle (CMM), both at rest and during isometric contractions of the shoulder abductors in subjects with neck pain and in healthy individuals. METHOD: In the present study, the reliability of measuring the diameters of the CMM with the Sonosynch software was evaluated by using 24 participants, including 12 subjects with chronic neck pain and 12 healthy individuals. The anterior-posterior diameter (APD) and the lateral diameter (LD) of the CMM were measured in a resting state and then repeated during isometric contraction of the shoulder abductors. Measurements were taken on separate occasions 3 to 7 days apart in order to determine inter-session reliability. Intraclass correlation coefficient (ICC) was used to evaluate relative reliability, while standard error of measurement (SEM) and smallest detectable difference (SDD) were used to evaluate absolute reliability. RESULTS: The Sonosynch software has been shown to be highly reliable in measuring the diameters of the CMM both in healthy subjects and in those with neck pain. The ICC 95% CIs for APD ranged from 0.84 to 0.94 in subjects with neck pain and from 0.86 to 0.94 in healthy subjects. For LD, the ICC 95% CIs ranged from 0.64 to 0.95 in subjects with neck pain and from 0.82 to 0.92 in healthy subjects. CONCLUSIONS: Ultrasonographic measurement of the diameters of the CMM using Sonosynch has proved to be reliable, especially for APD, in healthy subjects as well as subjects with neck pain. PMID:26443975
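
    The absolute-reliability indices named in the abstract are commonly derived from the ICC and the between-subject variability as SEM = SD·sqrt(1 − ICC) and SDD = 1.96·sqrt(2)·SEM. A minimal sketch with illustrative values (not the study's data):

        import math

        def sem_sdd(sd_between, icc):
            """Standard error of measurement and smallest detectable difference."""
            sem = sd_between * math.sqrt(1.0 - icc)
            sdd = 1.96 * math.sqrt(2.0) * sem
            return sem, sdd

        # Hypothetical example: SD of APD measurements 1.2 mm, ICC 0.90.
        sem, sdd = sem_sdd(1.2, 0.90)
        print(f"SEM = {sem:.2f} mm, SDD = {sdd:.2f} mm")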

  16. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on its quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time, PASS's development history, and other data that point to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.

  17. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model that incorporates the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
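
    The authors' specific mean value function, which folds in testing coverage and fault removal efficiency, is not reproduced here. As a generic illustration of how an NHPP SRGM is fitted in practice, a basic Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) can be fitted to hypothetical cumulative failure counts by least squares:

        import numpy as np
        from scipy.optimize import curve_fit

        def mean_value(t, a, b):
            """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t))."""
            return a * (1.0 - np.exp(-b * t))

        # Hypothetical cumulative failure counts observed at the end of each test week.
        weeks = np.arange(1, 11)
        cum_failures = np.array([5, 9, 13, 15, 18, 19, 21, 22, 22, 23])

        (a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=[30.0, 0.1])
        print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
        print(f"expected remaining faults = {a_hat - mean_value(weeks[-1], a_hat, b_hat):.1f}")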

  18. Developing of an automation for therapy dosimetry systems by using labview software

    NASA Astrophysics Data System (ADS)

    Aydin, Selim; Kam, Erol

    2018-06-01

    Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatments is highly dependent on the radiation dose delivered to patients. Therefore, it is very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed, computer-controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction factor calculations. By using this software tool, better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software that can control dosimeter systems frequently used in the radiation therapy field at hospitals, such as Unidos Webline, Unidos E, Dose-1 and PC Electrometers.

  19. An experimental investigation of fault tolerant software structures in an avionics application

    NASA Technical Reports Server (NTRS)

    Caglayan, Alper K.; Eckhardt, Dave E., Jr.

    1989-01-01

    The objective of this experimental investigation is to compare the functional performance and software reliability of competing fault tolerant software structures utilizing software diversity. In this experiment, three versions of the redundancy management software for a skewed sensor array have been developed using three diverse failure detection and isolation algorithms and incorporated into various N-version, recovery block and hybrid software structures. The empirical results show that, for maximum functional performance improvement in the selected application domain, the results of diverse algorithms should be voted before being processed by multiple versions without enforced diversity. Results also suggest that when the reliability gain with an N-version structure is modest, recovery block structures are more feasible since higher reliability can be obtained using an acceptance check with a modest reliability.

  20. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software is vital not only to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, as well as launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, thus making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.

  1. Reliability and accuracy analysis of a new semiautomatic radiographic measurement software in adult scoliosis.

    PubMed

    Aubin, Carl-Eric; Bellefleur, Christian; Joncas, Julie; de Lanauze, Dominic; Kadoury, Samuel; Blanke, Kathy; Parent, Stefan; Labelle, Hubert

    2011-05-20

    Radiographic software measurement analysis in adult scoliosis. To assess the accuracy as well as the intra- and interobserver reliability of measuring different indices on preoperative adult scoliosis radiographs using a novel measurement software that includes a calibration procedure and semiautomatic features to facilitate the measurement process. Scoliosis requires a careful radiographic evaluation to assess the deformity. Manual and computer radiographic process measures have been studied extensively to determine the reliability and reproducibility in adolescent idiopathic scoliosis. Most studies rely on comparing given measurements, which are repeated by the same user or by an expert user. A given measure with a small intra- or interobserver error might be deemed to have good repeatability, but all measurements might not be truly accurate because the ground-truth value is often unknown. Thorough accuracy assessment of radiographic measures is necessary to assess scoliotic deformities, compare these measures at different stages or to permit valid multicenter studies. Thirty-four sets of adult scoliosis digital radiographs were measured two times by three independent observers using a novel radiographic measurement software that includes semiautomatic features to facilitate the measurement process. Twenty different measures taken from the Spinal Deformity Study Group radiographic measurement manual were performed on the coronal and sagittal images. Intra- and intermeasurer reliability for each measure was assessed. The accuracy of the measurement software was also assessed using a physical spine model in six different scoliotic configurations as a true reference. The majority of the measures demonstrated good to excellent intra- and intermeasurer reliability, except for sacral obliquity. The standard deviation of all the measures was very small: ≤ 4.2° for Cobb angles, ≤ 4.2° for the kyphosis, ≤ 5.7° for the lordosis, ≤ 3.9° for the pelvic angles, and ≤ 5.3° for the sacral angles. The variability in the linear measurements (distances) was <4 mm. The variance of the angular and linear measures was, respectively, 1.7 and 2.6 times greater for intermeasurer than for intrameasurer reliability. The image quality positively influenced the intermeasurer reliability especially for the proximal thoracic Cobb angle, T10-L2 lordosis, sacral slope and L5 seating. The accuracy study revealed that on average the difference in the angular measures was < 2° for the Cobb angles, and < 4° for the other angles, except T2-T12 kyphosis (5.3°). The linear measures all differed by <3.5 mm on average. The majority of the measures analyzed in this study demonstrated good to excellent reliability and accuracy. The novel semiautomatic measurement software can be recommended for use for clinical, research or multicenter study purposes.

  2. Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach.

    PubMed

    Friedländer, Felix; Bohmann, Ferdinand; Brunkhorst, Max; Chae, Ju-Hee; Devraj, Kavi; Köhler, Yvette; Kraft, Peter; Kuhn, Hannah; Lucaciu, Alexandra; Luger, Sebastian; Pfeilschifter, Waltraud; Sadler, Rebecca; Liesz, Arthur; Scholtyschik, Karolina; Stolz, Leonie; Vutukuri, Rajkumar; Brunkhorst, Robert

    2017-08-01

    Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One reason is presumed to be low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of the ischemic damage as commonly measured by free-handed planimetry of TTC-stained brain sections, is subject to an unrecognized low inter-rater and test-retest reliability with strong implications for statistical power and bias. As an alternative approach, we suggest a simple, open-source, software-assisted method, taking advantage of automatic-thresholding techniques. The validity of the automated approach to tMCAO infarct volumetry, and the improvement in reliability it provides, are demonstrated. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in the year 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry might help to improve reproducibility and therefore the robustness of bench to bedside translation.
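
    A rough sketch of the kind of automatic-thresholding step the authors advocate, under the assumption that infarcted (pale) tissue appears brighter than healthy tissue in a grayscale image of a TTC-stained section; the use of threshold_otsu from scikit-image, the pixel size, and the synthetic image are illustrative choices, not the published tool or its pipeline.

        import numpy as np
        from skimage.filters import threshold_otsu

        def infarct_area_mm2(gray_slice, pixel_size_mm=0.02):
            """Estimate infarct area on one section by Otsu thresholding.

            Assumes infarcted (pale) tissue is brighter than healthy tissue.
            """
            thr = threshold_otsu(gray_slice)          # automatic global threshold
            infarct_mask = gray_slice > thr           # pale pixels above the threshold
            return infarct_mask.sum() * pixel_size_mm ** 2

        # Hypothetical synthetic section: dark background with a brighter "infarct" patch.
        rng = np.random.default_rng(1)
        section = rng.normal(0.3, 0.05, (256, 256))
        section[80:140, 90:150] += 0.4
        print(f"estimated infarct area: {infarct_area_mm2(section):.2f} mm^2")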

  3. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    ERIC Educational Resources Information Center

    Fitzgibbons, Megan; Meert, Deborah

    2010-01-01

    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  4. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
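
    One of the structures the tutorial reviews, the recovery block, can be sketched generically as follows (an illustrative example, not code from the tutorial): a primary routine runs first, an acceptance test checks its result, and alternate routines are tried in turn if the check fails.

        def recovery_block(x, primary, alternates, acceptance_test):
            """Run the primary routine; fall back to alternates if the result fails the check."""
            for routine in [primary] + alternates:
                result = routine(x)
                if acceptance_test(x, result):
                    return result
            raise RuntimeError("all versions failed the acceptance test")

        # Hypothetical example: square root with a fast primary and a safer alternate.
        primary = lambda x: x ** 0.5
        alternate = lambda x: abs(x) ** 0.5
        check = lambda x, r: x >= 0 and abs(r * r - x) < 1e-6

        print(recovery_block(2.0, primary, [alternate], check))

    In a full recovery block, the system state would also be rolled back before each alternate is attempted; that bookkeeping is omitted here for brevity.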

  5. Evaluating software development characteristics: Assessment of software measures in the Software Engineering Laboratory. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1981-01-01

    Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievements, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.

  6. The Validation of a Software Evaluation Instrument.

    ERIC Educational Resources Information Center

    Schmitt, Dorren Rafael

    This study, conducted at six southern universities, analyzed the validity and reliability of a researcher developed instrument designed to evaluate educational software in secondary mathematics. The instrument called the Instrument for Software Evaluation for Educators uses measurement scales, presents a summary section of the evaluation, and…

  7. Trends in software reliability for digital flight control

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Hecht, M.

    1983-01-01

    Software error data from major recent Digital Flight Control System development programs are presented. The report summarizes the data, compares them with similar data from previous surveys, and identifies trends and disciplines to improve software reliability.

  8. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model that incorporates the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091

  9. Cost Estimation of Software Development and the Implications for the Program Manager

    DTIC Science & Technology

    1992-06-01

    Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and...Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome

  10. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
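
    As a brief, hedged illustration of the central idea only (not the authors' estimation procedure), the equilibrium distribution of a fault-detection time distribution F with mean μ is F_e(t) = (1/μ) ∫_0^t (1 − F(x)) dx. For an exponential F the equilibrium distribution coincides with F itself, so the hypothetical example below uses a Weibull distribution:

        from scipy.integrate import quad
        from scipy.stats import weibull_min

        # Hypothetical fault-detection time distribution: Weibull with shape 1.5, scale 10.
        dist = weibull_min(c=1.5, scale=10.0)
        mu = dist.mean()

        def equilibrium_cdf(t):
            """F_e(t) = (1/mu) * integral_0^t (1 - F(x)) dx, evaluated numerically."""
            integral, _ = quad(dist.sf, 0.0, t)
            return integral / mu

        print(f"F(10) = {dist.cdf(10.0):.3f},  F_e(10) = {equilibrium_cdf(10.0):.3f}")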

  11. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
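
    The paper's extended stochastic Petri net and correlated sampling of module execution times are not reproduced here. As a much simpler Monte-Carlo sketch of why dependence among versions matters (hypothetical failure probability p_fail, correlation parameter rho, and a shared latent "stress" factor, all invented for illustration), positive correlation erodes the reliability gain of 2-out-of-3 majority voting relative to the independence assumption:

        import numpy as np

        rng = np.random.default_rng(2)
        n_runs, p_fail, rho = 200_000, 0.01, 0.3   # hypothetical per-version failure prob. and correlation

        # Shared latent demand stress; high stress makes all three versions more likely to fail together.
        stress = rng.random(n_runs)
        p_cond = np.clip(p_fail * (1 - rho) + rho * (stress[:, None] > 1 - p_fail), 0, 1)
        fails = rng.random((n_runs, 3)) < p_cond    # correlated failures of versions 1..3

        system_fail = fails.sum(axis=1) >= 2        # 2-out-of-3 majority voter fails
        independent = 3 * p_fail**2 * (1 - p_fail) + p_fail**3
        print(f"simulated P(system failure) = {system_fail.mean():.5f}")
        print(f"independence assumption     = {independent:.5f}")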

  12. Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of the software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper will remark on issues related to the improvement of coverage and the optimization of redundancy level.
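
    SURE and ASSIST themselves solve semi-Markov models; as a rough, hedged stand-in for the kind of coverage-sensitivity result mentioned in the abstract, a tiny continuous-time Markov model of a duplex system (hypothetical failure rate and mission time) can be solved numerically to show how mission unreliability is dominated by the coverage parameter c:

        import numpy as np
        from scipy.linalg import expm

        def duplex_unreliability(lam, c, t_mission):
            """States: 0 = both units good, 1 = one good (failure covered), 2 = system failed."""
            q = np.array([
                [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],   # uncovered failure is immediately fatal
                [0.0,       -lam,         lam],
                [0.0,       0.0,          0.0],
            ])
            p = expm(q * t_mission)[0]          # state probabilities at the end of the mission
            return p[2]

        lam, t = 1e-4, 10.0                      # hypothetical failure rate (per hour) and 10 h mission
        for c in (0.90, 0.99, 0.999):
            print(f"coverage {c}: P(system failure) = {duplex_unreliability(lam, c, t):.2e}")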

  13. Study of a unified hardware and software fault-tolerant architecture

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart

    1989-01-01

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.

  14. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software.

    PubMed

    Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad

    2015-01-01

    Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly enhances the conduction of this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English will greatly help Farsi speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ program in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570-1.0). According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.

  15. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
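
    As a simplified, hedged illustration of the count-data regression component only, a standard Poisson regression can be fitted to defect counts over software metrics; the paper's actual contribution, a max-margin Bayesian formulation with mean-field inference, is not reproduced here, and the metrics and data below are hypothetical.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(3)

        # Hypothetical module metrics: size in kLOC and cyclomatic complexity.
        X = np.column_stack([rng.uniform(0.5, 20, 200), rng.integers(1, 40, 200)])
        true_rate = np.exp(0.08 * X[:, 0] + 0.03 * X[:, 1])
        y = rng.poisson(true_rate)                         # observed defect counts per module

        model = PoissonRegressor(alpha=1e-3).fit(X, y)
        new_module = np.array([[12.0, 25]])
        print(f"expected defects for the new module: {model.predict(new_module)[0]:.1f}")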

  16. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.

  17. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

  18. Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)

    NASA Technical Reports Server (NTRS)

    Benson, Markland

    2008-01-01

    The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission critical ground software that is in the operations and sustainment portion of the product lifecycle.

  19. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    NASA Technical Reports Server (NTRS)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  20. A Model for Assessing the Liability of Seemingly Correct Software

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.

    1991-01-01

    Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
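
    A minimal sketch of the probable-correctness style of reasoning such a model builds on, stated here as a standard bound under the assumption of independent tests drawn from the operational distribution (not the authors' full risk model): if N random tests all pass, the confidence that the true failure probability is at most θ is at least 1 − (1 − θ)^N, which can be inverted to estimate the testing effort needed for a target confidence.

        import math

        def confidence(n_passing_tests, theta):
            """Confidence that the failure probability <= theta after n passing random tests."""
            return 1.0 - (1.0 - theta) ** n_passing_tests

        def tests_needed(theta, target_confidence):
            """Number of passing tests required to reach the target confidence."""
            return math.ceil(math.log(1.0 - target_confidence) / math.log(1.0 - theta))

        print(f"confidence after 10,000 tests (theta = 1e-4): {confidence(10_000, 1e-4):.3f}")
        print(f"tests needed for 99% confidence (theta = 1e-4): {tests_needed(1e-4, 0.99)}")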

  1. Effect of system workload on operating system reliability - A study on IBM 3081

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Rossetti, D. J.

    1985-01-01

    This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload failure dependency, based on detailed investigations of the failure data, are discussed.

  2. Infusing Reliability Techniques into Software Safety Analysis

    NASA Technical Reports Server (NTRS)

    Shi, Ying

    2015-01-01

    Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

  3. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    NASA Astrophysics Data System (ADS)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still expanding. In particular, embedded OSS systems such as Android, BusyBox and TRON have been gaining a lot of attention in the embedded system area. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We also analyze actual data of software failure-occurrence time intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model with typical conventional hazard rate models using goodness-of-fit criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.

  4. Reliability of a Novel CBCT-Based 3D Classification System for Maxillary Canine Impactions in Orthodontics: The KPG Index

    PubMed Central

    Visconti, Luca; Martin, Conchita

    2013-01-01

    The aim of this study was to evaluate both intra- and interoperator reliability of a radiological three-dimensional classification system (KPG index) for the assessment of degree of difficulty for orthodontic treatment of maxillary canine impactions. Cone beam computed tomography (CBCT) scans of fifty impacted canines, obtained using three different scanners (NewTom, Kodak, and Planmeca), were classified using the KPG index by three independent orthodontists. Measurements were repeated one month later. Based on these two sessions, several recommendations on KPG Index scoring were elaborated. After a joint calibration session, these recommendations were explained to nine orthodontists and the two measurement sessions were repeated. There was a moderate intrarater agreement in the precalibration measurement sessions. After the calibration session, both intra- and interrater agreement were almost perfect. Indexes assessed with Kodak Dental Imaging 3D module software showed a better reliability in z-axis values, whereas indexes assessed with Planmeca Romexis software showed a better reliability in x- and y-axis values. No differences were found between the CBCT scanners used. Taken together, these findings indicate that the application of the instructions elaborated during this study improved KPG index reliability, which was nevertheless variously influenced by the use of different software for images evaluation. PMID:24235889

  5. A second generation experiment in fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    The primary goal was to determine whether the application of fault tolerance to software increases its reliability if the cost of production is the same as for an equivalent non-fault-tolerant version derived from the same requirements specification. Software development protocols are discussed. The feasibility of adapting the technique of N-fold Modular Redundancy with majority voting to software design fault tolerance was studied.

  6. Inter- and intrarater reliability of the Chicago Classification in pediatric high-resolution esophageal manometry recordings.

    PubMed

    Singendonk, M M J; Smits, M J; Heijting, I E; van Wijk, M P; Nurko, S; Rosen, R; Weijenborg, P W; Abu-Assi, R; Hoekman, D R; Kuizenga-Wessel, S; Seiboth, G; Benninga, M A; Omari, T I; Kritas, S

    2015-02-01

    The Chicago Classification (CC) facilitates interpretation of high-resolution manometry (HRM) recordings. Application of this adult based algorithm to the pediatric population is unknown. We therefore assessed intra and interrater reliability of software-based CC diagnosis in a pediatric cohort. Thirty pediatric solid state HRM recordings (13M; mean age 12.1 ± 5.1 years) assessing 10 liquid swallows per patient were analyzed twice by 11 raters (six experts, five non-experts). Software-placed anatomical landmarks required manual adjustment or removal. Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), contractile front velocity (CFV), distal latency (DL) and break size (BS), and an overall CC diagnosis were software-generated. In addition, raters provided their subjective CC diagnosis. Reliability was calculated with Cohen's and Fleiss' kappa (κ) and intraclass correlation coefficient (ICC). Intra- and interrater reliability of software-generated CC diagnosis after manual adjustment of landmarks was substantial (mean κ = 0.69 and 0.77 respectively) and moderate-substantial for subjective CC diagnosis (mean κ = 0.70 and 0.58 respectively). Reliability of both software-generated and subjective diagnosis of normal motility was high (κ = 0.81 and κ = 0.79). Intra- and interrater reliability were excellent for IRP4s, DCI, and BS. Experts had higher interrater reliability than non-experts for DL (ICC = 0.65 vs ICC = 0.36 respectively) and the software-generated diagnosis diffuse esophageal spasm (DES, κ = 0.64 vs κ = 0.30). Among experts, the reliability for the subjective diagnosis of achalasia and esophageal gastric junction outflow obstruction was moderate-substantial (κ = 0.45-0.82). Inter- and intrarater reliability of software-based CC diagnosis of pediatric HRM recordings was high overall. However, experience was a factor influencing the diagnosis of some motility disorders, particularly DES and achalasia. © 2014 John Wiley & Sons Ltd.
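
    A minimal sketch of the Cohen's kappa computation behind the intrarater figures (hypothetical ratings, not the study's data); Fleiss' kappa for the full rater panel and the ICCs for the continuous metrics follow the same pattern with different formulas.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical CC diagnoses of 12 recordings by one rater on two occasions.
        session1 = ["normal", "normal", "DES", "achalasia", "normal", "weak",
                    "normal", "DES", "normal", "weak", "achalasia", "normal"]
        session2 = ["normal", "weak", "DES", "achalasia", "normal", "weak",
                    "normal", "normal", "normal", "weak", "achalasia", "normal"]

        kappa = cohen_kappa_score(session1, session2)
        print(f"intrarater Cohen's kappa = {kappa:.2f}")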

  7. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    PubMed

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned for 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out by two observers two times on the 3D surface models: the Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters(Translucent and Solid-1), and in the InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made by another observer two times using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models were found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variable R1-R1 (1.37 mm) and MF-AC (2.53 %) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of the 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  8. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., "Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants"... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...

  9. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  10. Development of a calibrated software reliability model for flight and supporting ground software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1991-01-01

    The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study were those of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, modeling the cumulative number of failures versus execution time gave fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If a model agrees with actual historical behavior for a set of data, then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.

  11. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software. A software and hardware system also depends on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of the rules can be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems whose use is governed by the rules of the relevant agencies. The enumeration model is derived from software reliability calculation.

  12. Leveraging Code Comments to Improve Software Reliability

    ERIC Educational Resources Information Center

    Tan, Lin

    2009-01-01

    Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…

  13. Hardware and software reliability estimation using simulations

    NASA Technical Reports Server (NTRS)

    Swern, Frederic L.

    1994-01-01

    The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.

  14. Toward improved peptide feature detection in quantitative proteomics using stable isotope labeling.

    PubMed

    Nilse, Lars; Sigloch, Florian Christoph; Biniossek, Martin L; Schilling, Oliver

    2015-08-01

    Reliable detection of peptides in LC-MS data is a key algorithmic step in the analysis of quantitative proteomics experiments. While highly abundant peptides can be detected reliably by most modern software tools, there is much less agreement on medium and low-intensity peptides in a sample. The choice of software tools can have a big impact on the quantification of proteins, especially for proteins that appear in lower concentrations. However, in many experiments, it is precisely this region of less abundant but substantially regulated proteins that holds the biggest potential for discoveries. This is particularly true for discovery proteomics in the pharmacological sector with a specific interest in key regulatory proteins. In this viewpoint article, we discuss how the development of novel software algorithms allows us to study this region of the proteome with increased confidence. Reliable results are one of many aspects to be considered when deciding on a bioinformatics software platform. Deployment into existing IT infrastructures, compatibility with other software packages, scalability, automation, flexibility, and support need to be considered and are briefly addressed in this viewpoint article. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.

  16. Preliminary design of the redundant software experiment

    NASA Technical Reports Server (NTRS)

    Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John

    1985-01-01

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments which are similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized and the cost of the fault tolerant configurations can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because it will provide a useful insight into the nature and cause of hard to find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.

  17. Reliability Engineering for Service Oriented Architectures

    DTIC Science & Technology

    2013-02-01

    Common Object Request Broker Architecture. Ecosystem: in software, an ecosystem is a set of applications and/or services that gradually build up over time... Enterprise Service Bus. Foreign: in an SOA context, any SOA, service, or software which the owners of the calling software do not have control of, either... SOA: Service Oriented Architecture. SRE: Software Reliability Engineering. System Mode: many systems exhibit different modes of operation, e.g. the cockpit

  18. Development of confidence limits by pivotal functions for estimating software reliability

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
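
    The Moranda geometric de-eutrophication model referenced above assumes that the i-th interfailure time is exponentially distributed with rate D·k^(i-1), so the failure rate shrinks geometrically as bugs are removed. As a rough illustration of how the model's parameters can be fitted from observed interfailure times (a plain maximum-likelihood sketch, not the paper's pivotal-function derivation), the snippet below maximizes the log-likelihood numerically; the data values and starting guesses are invented.

    ```python
    # Minimal sketch: maximum-likelihood fit of the Moranda geometric
    # de-eutrophication model, lambda_i = D * k**(i-1), to interfailure times.
    # Illustrative only; the interfailure times below are invented.
    import numpy as np
    from scipy.optimize import minimize

    t = np.array([12.0, 31.0, 28.0, 55.0, 40.0, 90.0, 120.0, 160.0])  # hours between failures
    i = np.arange(len(t))  # 0, 1, ..., n-1

    def neg_log_lik(params):
        D, k = params
        rates = D * k ** i                      # failure rate before the (i+1)-th failure
        return -np.sum(np.log(rates) - rates * t)

    res = minimize(neg_log_lik, x0=[0.05, 0.9],
                   bounds=[(1e-8, None), (1e-8, 1.0)])
    D_hat, k_hat = res.x
    print(f"D = {D_hat:.4f}, k = {k_hat:.4f}")
    print(f"estimated current failure rate: {D_hat * k_hat ** len(t):.5f} per hour")
    ```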

  19. MYRaf: A new Approach with IRAF for Astronomical Photometric Reduction

    NASA Astrophysics Data System (ADS)

    Kilic, Y.; Shameoni Niaei, M.; Özeren, F. F.; Yesilyaprak, C.

    2016-12-01

    In this study, the design and some developments of the MYRaf software for astronomical photometric reduction are presented. MYRaf is an easy-to-use, reliable, and fast GUI tool for IRAF aperture photometry. MYRaf is an important step toward the automated software pipeline of robotic telescopes; it is written in the general-purpose, high-level programming language Python with the QT framework and uses IRAF, PyRAF, matplotlib, ginga, alipy, and Sextractor.

  20. STAMPS: development and verification of swallowing kinematic analysis software.

    PubMed

    Lee, Woo Hyung; Chun, Changmook; Seo, Han Gil; Lee, Seung Hak; Oh, Byung-Mo

    2017-10-17

    Swallowing impairment is a common complication in various geriatric and neurodegenerative diseases. Swallowing kinematic analysis is essential to quantitatively evaluate the swallowing motion of the oropharyngeal structures. This study aims to develop a novel swallowing kinematic analysis software, called spatio-temporal analyzer for motion and physiologic study (STAMPS), and verify its validity and reliability. STAMPS was developed in MATLAB, which is one of the most popular platforms for biomedical analysis. This software was constructed to acquire, process, and analyze the data of swallowing motion. The target of swallowing structures includes bony structures (hyoid bone, mandible, maxilla, and cervical vertebral bodies), cartilages (epiglottis and arytenoid), soft tissues (larynx and upper esophageal sphincter), and food bolus. Numerous functions are available for the spatiotemporal parameters of the swallowing structures. Testing for validity and reliability was performed in 10 dysphagia patients with diverse etiologies and using the instrumental swallowing model which was designed to mimic the motion of the hyoid bone and the epiglottis. The intra- and inter-rater reliability tests showed excellent agreement for displacement and moderate to excellent agreement for velocity. The Pearson correlation coefficients between the measured and instrumental reference values were nearly 1.00 (P < 0.001) for displacement and velocity. The Bland-Altman plots showed good agreement between the measurements and the reference values. STAMPS provides precise and reliable kinematic measurements and multiple practical functionalities for spatiotemporal analysis. The software is expected to be useful for researchers who are interested in the swallowing motion analysis.

  1. Assistant for Specifying Quality Software (ASQS) Mission Area Analysis

    DTIC Science & Technology

    1990-12-01

    somewhat arbitrary, it was a reasonable and fast approach for partitioning the mission and software domains. The MAD builds on work done by Boeing Aerospace... Reliability ++ / Reliability +++. Response 2: NO. Discussion: a NO response implies intermittent burns -- most likely to perform attitude control functions... Propulsion Reliability +++ / Reliability ++. 4.8.3 Query BT.3: For intermittent thruster firing requirements, will the average burn time be less than

  2. Intra- and interrater reliability of the Chicago Classification of achalasia subtypes in pediatric high-resolution esophageal manometry (HRM) recordings.

    PubMed

    Singendonk, M M J; Rosen, R; Oors, J; Rommel, N; van Wijk, M P; Benninga, M A; Nurko, S; Omari, T I

    2017-11-01

    Subtyping achalasia by high-resolution manometry (HRM) is clinically relevant, as response to therapy and prognosis have been shown to vary accordingly. The aim of this study was to assess inter- and intrarater reliability of diagnosing achalasia and achalasia subtyping in children using the Chicago Classification (CC) V3.0. Six observers analyzed 40 pediatric HRM recordings (22 achalasia and 18 non-achalasia) twice by using dedicated analysis software (ManoView 3.0, Given Imaging, Los Angeles, CA, USA). Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), intrabolus pressurization pattern (IBP), and distal latency (DL) were extracted and analyzed hierarchically. Cohen's κ (2 raters) and Fleiss' κ (>2 raters) and the intraclass correlation coefficient (ICC) were used for categorical and ordinal data, respectively. Based on the results of dedicated analysis software only, intra- and interrater reliability was excellent and moderate (κ=0.89 and κ=0.52, respectively) for differentiating achalasia from non-achalasia. For subtyping achalasia, reliability decreased to substantial and fair (κ=0.72 and κ=0.28, respectively). When observers were allowed to change the software-driven diagnosis according to their own interpretation of the manometric patterns, intra- and interrater reliability increased for diagnosing achalasia (κ=0.98 and κ=0.92, respectively) and for subtyping achalasia (κ=0.79 and κ=0.58, respectively). Intra- and interrater agreement for diagnosing achalasia when using HRM and the CC was very good to excellent when results of automated analysis software were interpreted by experienced observers. More variability was seen when relying solely on the software-driven diagnosis and for subtyping achalasia. Therefore, diagnosing and subtyping achalasia should be performed in pediatric motility centers with significant expertise. © 2017 John Wiley & Sons Ltd.

  3. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.

  4. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  5. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The concept of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.
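
    For intuition on the kind of simulation described above, the sketch below generates failure times from a logarithmic Poisson (Musa-Okumoto) process whose mean value function is mu(t) = (1/theta)·ln(lambda0·theta·t + 1), by inverting the cumulative intensity applied to unit-rate Poisson arrivals. The parameter values are arbitrary assumptions, not the report's.

    ```python
    # Minimal sketch: simulate failure times from the logarithmic Poisson
    # (Musa-Okumoto) model with mean value function mu(t) = ln(l0*theta*t + 1)/theta.
    # Parameter values are arbitrary and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    l0, theta = 2.0, 0.05        # initial failure intensity, intensity decay parameter
    horizon = 200.0              # length of the simulated test period

    def inverse_mu(s):
        """Invert the cumulative intensity: return t such that mu(t) = s."""
        return (np.exp(theta * s) - 1.0) / (l0 * theta)

    # Unit-rate homogeneous Poisson arrivals, transformed through mu^{-1}.
    failure_times, s = [], 0.0
    while True:
        s += rng.exponential(1.0)          # next arrival of the unit-rate process
        t = inverse_mu(s)
        if t > horizon:
            break
        failure_times.append(t)

    print(f"simulated {len(failure_times)} failures in [0, {horizon}]")
    print("first five failure times:", np.round(failure_times[:5], 2))
    ```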

  6. The determination of measures of software reliability

    NASA Technical Reports Server (NTRS)

    Maxwell, F. D.; Corn, B. C.

    1978-01-01

    Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  7. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1987-01-01

    Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
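
    As a toy illustration of the voting step described above (not the experiment's actual test harness), the sketch below runs several hypothetical version functions on the same input and returns the majority output, flagging the case where no majority exists. The three version functions are invented stand-ins for independently developed implementations.

    ```python
    # Minimal sketch of an N-version majority voter; the "versions" below are
    # hypothetical stand-ins for independently developed implementations.
    from collections import Counter

    def version_a(x): return x * x
    def version_b(x): return x ** 2
    def version_c(x): return x * x if x != 3 else -1   # seeded fault for illustration

    def majority_vote(versions, x):
        outputs = [v(x) for v in versions]
        value, count = Counter(outputs).most_common(1)[0]
        if count > len(versions) // 2:
            return value, outputs
        raise RuntimeError(f"no majority among outputs {outputs}")

    versions = [version_a, version_b, version_c]
    for x in (2, 3):
        result, raw = majority_vote(versions, x)
        print(f"input {x}: voted output {result}, raw outputs {raw}")
    ```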

  8. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates space for competitive prices that reduce the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
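
    The combination step can be pictured with a simple series model in which the application fails if any layer fails; the figures and the independence assumption below are illustrative only and are not taken from the paper.

    ```python
    # Minimal sketch: combine per-layer reliabilities of a multi-cloud deployment
    # under a series-system, independence assumption. All numbers are invented.
    from math import prod

    layers = {
        "application":    0.999,
        "virtualization": 0.995,
        "server":         0.990,
    }

    def series_reliability(reliabilities):
        """System survives only if every layer survives (layers assumed independent)."""
        return prod(reliabilities)

    r_single = series_reliability(layers.values())
    print(f"single-cloud application reliability: {r_single:.4f}")

    # Two independent cloud providers hosting redundant copies: the application
    # is available if at least one copy works (parallel redundancy).
    r_multi = 1 - (1 - r_single) ** 2
    print(f"two-provider redundant reliability:   {r_multi:.6f}")
    ```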

  9. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques including in the field of safety critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between the users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non-reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.

  10. Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement.

    PubMed

    Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi

    2015-03-01

    Various methods have been introduced for evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and assess the reliability of this method. Eighteen patients were evaluated in this study. Three intraoral digital images from the buccal view were captured from each patient within a half-hour interval. All the photos were imported into AutoCAD 2011 software, calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. Photographs were found to have a high reliability coefficient (P > 0.05). The introduced method is an accurate, efficient and reliable method for evaluation of tooth movement.

  11. Expert system verification and validation study. Delivery 3A and 3B: Trip summaries

    NASA Technical Reports Server (NTRS)

    French, Scott

    1991-01-01

    Key results are documented from attending the 4th workshop on verification, validation, and testing. The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how that might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods of applying mathematical techniques to verification of rule bases and techniques for capturing information relating to the process of developing software. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation'. The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.

  12. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  13. Bayesian accrual prediction for interim review of clinical studies: open source R package and smartphone application.

    PubMed

    Jiang, Yu; Guarino, Peter; Ma, Shuangge; Simon, Steve; Mayo, Matthew S; Raghavan, Rama; Gajewski, Byron J

    2016-07-22

    Subject recruitment for medical research is challenging. Slow patient accrual leads to increased costs and delays in treatment advances. Researchers need reliable tools to manage and predict the accrual rate. The previously developed Bayesian method integrates researchers' experience on former trials and data from an ongoing study, providing a reliable prediction of accrual rate for clinical studies. In this paper, we present a user-friendly graphical user interface program developed in R. A closed-form solution for the total subjects that can be recruited within a fixed time is derived. We also present a built-in Android system using Java for web browsers and mobile devices. Using the accrual software, we re-evaluated the Veterans Affairs Cooperative Studies Program 558-ROBOTICS study. The application of the software in monitoring and management of recruitment is illustrated for different stages of the trial. The developed accrual software provides a more convenient platform for estimation and prediction of the accrual process.
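
    As a rough sketch of the kind of Bayesian accrual prediction described (a gamma-Poisson formulation chosen here for simplicity; the published package's exact model and priors may differ), the code below combines a prior accrual rate with the counts observed so far and produces a posterior-predictive interval for total enrollment. All numbers are invented.

    ```python
    # Minimal sketch of Bayesian accrual prediction with a gamma-Poisson model.
    # This is an illustrative formulation, not necessarily the published
    # package's exact model; all numbers are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    # Prior belief: about 2 subjects/week, worth ~10 weeks of "prior data".
    prior_rate, prior_weeks = 2.0, 10.0
    a0, b0 = prior_rate * prior_weeks, prior_weeks      # gamma(a0, b0) prior on the rate

    # Observed so far: 18 subjects in 12 weeks; 40 weeks remain.
    observed_n, observed_t, remaining_t = 18, 12.0, 40.0

    a_post, b_post = a0 + observed_n, b0 + observed_t   # gamma posterior on the rate

    # Posterior predictive of future accrual: draw a rate, then a Poisson count.
    rates = rng.gamma(a_post, 1.0 / b_post, size=100_000)
    future = rng.poisson(rates * remaining_t)
    total = observed_n + future

    lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
    print(f"predicted total enrollment after {observed_t + remaining_t:.0f} weeks: "
          f"median {med:.0f} (95% interval {lo:.0f}-{hi:.0f})")
    ```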

  14. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  15. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
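
    To make the cause-effect idea concrete, the sketch below hand-rolls a tiny discrete belief network in which team skill, process maturity, and problem complexity drive the probability that the product turns out suitable. The structure mirrors the hypothesized factors, but every probability in the tables is invented for illustration.

    ```python
    # Minimal sketch: a hand-rolled discrete Bayesian belief network relating
    # team skill, process maturity, and problem complexity to product suitability.
    # All probabilities are invented for illustration.
    from itertools import product

    p_skill    = {"high": 0.6, "low": 0.4}
    p_maturity = {"high": 0.5, "low": 0.5}
    p_complex  = {"high": 0.3, "low": 0.7}

    # P(suitable = yes | skill, maturity, complexity)
    p_suitable = {
        ("high", "high", "low"):  0.95, ("high", "high", "high"): 0.80,
        ("high", "low",  "low"):  0.85, ("high", "low",  "high"): 0.60,
        ("low",  "high", "low"):  0.75, ("low",  "high", "high"): 0.45,
        ("low",  "low",  "low"):  0.55, ("low",  "low",  "high"): 0.20,
    }

    def prob_suitable(evidence=None):
        """Marginal P(suitable=yes), optionally conditioned on some parent nodes."""
        evidence = evidence or {}
        num = den = 0.0
        for s, m, c in product(p_skill, p_maturity, p_complex):
            if any(evidence.get(k) not in (None, v)
                   for k, v in (("skill", s), ("maturity", m), ("complexity", c))):
                continue                      # assignment conflicts with the evidence
            w = p_skill[s] * p_maturity[m] * p_complex[c]
            num += w * p_suitable[(s, m, c)]
            den += w
        return num / den

    print(f"prior P(suitable) = {prob_suitable():.3f}")
    print(f"P(suitable | high skill, high maturity) = "
          f"{prob_suitable({'skill': 'high', 'maturity': 'high'}):.3f}")
    ```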

  16. A simple method of measuring tibial tubercle to trochlear groove distance on MRI: description of a novel and reliable technique.

    PubMed

    Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J

    2016-03-01

    Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethods reliability between the two techniques were calculated using interclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (Standard Deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. II.
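
    For readers unfamiliar with the two agreement statistics cited above, the sketch below computes a one-way random-effects intraclass correlation and Lin's concordance correlation coefficient for two raters measuring the same knees. The measurements are fabricated, and the exact ICC form used in the study may differ from the one shown.

    ```python
    # Minimal sketch: agreement between two raters via a one-way random-effects
    # ICC and Lin's concordance correlation coefficient (CCC). Data are invented.
    import numpy as np

    rater1 = np.array([14.1, 16.3, 12.8, 19.5, 15.0, 21.2, 13.4, 17.8])  # TT-TG, mm
    rater2 = np.array([13.8, 16.9, 12.5, 20.1, 14.6, 21.8, 13.9, 17.2])

    def icc_oneway(x, y):
        """ICC(1,1): one-way random effects, single measurement."""
        data = np.stack([x, y], axis=1)          # subjects x raters
        n, k = data.shape
        grand = data.mean()
        ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    def lin_ccc(x, y):
        """Lin's concordance correlation coefficient."""
        sxy = np.cov(x, y, bias=True)[0, 1]
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    print(f"ICC(1,1) = {icc_oneway(rater1, rater2):.3f}")
    print(f"CCC      = {lin_ccc(rater1, rater2):.3f}")
    ```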

  17. Inter- and Intrarater Reliability Using Different Software Versions of E4D Compare in Dental Education.

    PubMed

    Callan, Richard S; Cooper, Jeril R; Young, Nancy B; Mollica, Anthony G; Furness, Alan R; Looney, Stephen W

    2015-06-01

    The problems associated with intra- and interexaminer reliability when assessing preclinical performance continue to hinder dental educators' ability to provide accurate and meaningful feedback to students. Many studies have been conducted to evaluate the validity of utilizing various technologies to assist educators in achieving that goal. The purpose of this study was to compare two different versions of E4D Compare software to determine if either could be expected to deliver consistent and reliable comparative results, independent of the individual utilizing the technology. Five faculty members obtained E4D digital images of students' attempts (sample model) at ideal gold crown preparations for tooth #30 performed on typodont teeth. These images were compared to an ideal (master model) preparation utilizing two versions of E4D Compare software. The percent correlations between and within these faculty members were recorded and averaged. The intraclass correlation coefficient was used to measure both inter- and intrarater agreement among the examiners. The study found that using the older version of E4D Compare did not result in acceptable intra- or interrater agreement among the examiners. However, the newer version of E4D Compare, when combined with the Nevo scanner, resulted in a remarkable degree of agreement both between and within the examiners. These results suggest that consistent and reliable results can be expected when utilizing this technology under the protocol described in this study.

  18. NASA's Approach to Software Assurance

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2015-01-01

    NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. An umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk-based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.

  19. Resilience Engineering in Critical Long Term Aerospace Software Systems: A New Approach to Spacecraft Software Safety

    NASA Astrophysics Data System (ADS)

    Dulo, D. A.

    Safety critical software systems permeate spacecraft, and in a long term venture like a starship would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure in long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe reliable software systems through focusing not on the traditional safety/reliability engineering paradigms but rather by focusing on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time as a safety valve should failure occur to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.

  20. How Nasa's Independent Verification and Validation (IVandV) Program Builds Reliability into a Space Mission Software System (SMSS)

    NASA Technical Reports Server (NTRS)

    Fisher, Marcus S.; Northey, Jeffrey; Stanton, William

    2014-01-01

    The purpose of this presentation is to outline how the NASA Independent Verification and Validation (IVV) Program helps to build reliability into the Space Mission Software Systems (SMSSs) that its customers develop.

  1. Agreement Between Face-to-Face and Free Software Video Analysis for Assessing Hamstring Flexibility in Adolescents.

    PubMed

    Moral-Muñoz, José A; Esteban-Moreno, Bernabé; Arroyo-Morales, Manuel; Cobo, Manuel J; Herrera-Viedma, Enrique

    2015-09-01

    The objective of this study was to determine the level of agreement between face-to-face hamstring flexibility measurements and free software video analysis in adolescents. Reduced hamstring flexibility is common in adolescents (75% of boys and 35% of girls aged 10). The length of the hamstring muscle has an important role in both the effectiveness and the efficiency of basic human movements, and reduced hamstring flexibility is related to various musculoskeletal conditions. There are various approaches to measuring hamstring flexibility with high reliability; the most commonly used approaches in the scientific literature are the sit-and-reach test, hip joint angle (HJA), and active knee extension. The assessment of hamstring flexibility using video analysis could help with adolescent flexibility follow-up. Fifty-four adolescents from a local school participated in a descriptive study of repeated measures using a crossover design. Active knee extension and HJA were measured with an inclinometer and were simultaneously recorded with a video camera. Each video was downloaded to a computer and subsequently analyzed using Kinovea 0.8.15, a free software application for movement analysis. All outcome measures showed reliability estimates with α > 0.90. The lowest reliability was obtained for HJA (α = 0.91). The preliminary findings support the use of a free software tool for assessing hamstring flexibility, offering health professionals a useful tool for adolescent flexibility follow-up.
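
    The α values reported above are Cronbach's alpha; as a reminder of how that statistic is obtained from repeated measurements, the sketch below computes it for a small made-up set of trials (the subjects, trial counts, and angle values are invented).

    ```python
    # Minimal sketch: Cronbach's alpha for k repeated measurements (trials) of the
    # same quantity on several subjects. The angle values below are invented.
    import numpy as np

    # rows = subjects, columns = repeated trials of a hip joint angle (degrees)
    trials = np.array([
        [78.0, 80.0, 79.0],
        [65.0, 63.0, 66.0],
        [90.0, 92.0, 91.0],
        [71.0, 70.0, 73.0],
        [84.0, 85.0, 83.0],
    ])

    def cronbach_alpha(data):
        k = data.shape[1]
        item_vars = data.var(axis=0, ddof=1)          # variance of each trial
        total_var = data.sum(axis=1).var(ddof=1)      # variance of subject totals
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    print(f"Cronbach's alpha = {cronbach_alpha(trials):.3f}")
    ```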

  2. Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement

    PubMed Central

    Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi

    2015-01-01

    Objectives: Various methods have been introduced for evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and assess the reliability of this method. Materials and Methods: Eighteen patients were evaluated in this study. Three intraoral digital images from the buccal view were captured from each patient within a half-hour interval. All the photos were imported into AutoCAD 2011 software, calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. Results: Photographs were found to have a high reliability coefficient (P > 0.05). Conclusion: The introduced method is an accurate, efficient and reliable method for evaluation of tooth movement. PMID:26622272

  3. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    PubMed Central

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748

  4. Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective

    NASA Technical Reports Server (NTRS)

    Reinholtz, Kirk

    2008-01-01

    This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy; however, more capabilities are now required, primarily in the software, while the same or a higher degree of reliability must be maintained. The complexity of designing such an autonomous system is great, particularly when attempting to address the need for increased capabilities and high reliability without increased needs for time or money. The efforts to develop programming models for the new hardware and to integrate the software architecture are highlighted.

  5. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs provided through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, a comparison with in-house manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  6. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that allows generating instrument calibration reports automatically, monitoring their proper configuration, processing measurement results, and assessing instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  7. Methodology for Software Reliability Prediction. Volume 2.

    DTIC Science & Technology

    1987-11-01

    The overall acquisition program shall include the resources, schedule, management, structure, and controls necessary to ensure that specified AD... Independent Verification/Validation - Programming Team Structure - Educational Level of Team Members - Experience Level of Team Members * Methods Used... Prediction or Estimation Parameter Supported: Software Characteristics. 3. Objectives: Structured programming studies and Government ... procurement

  8. A new software-based architecture for quantum computer

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Song, FangMin; Li, Xiangdong

    2010-04-01

    In this paper, we study a reliable architecture for a quantum computer and a new instruction set and machine language for the architecture, which can improve the performance and reduce the cost of quantum computing. We also try to address some key issues of software-driven universal quantum computers in detail.

  9. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970's using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  10. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  11. Nasendoscopy: an analysis of measurement uncertainties.

    PubMed

    Gilleard, Onur; Sommerlad, Brian; Sell, Debbie; Ghanem, Ali; Birch, Malcolm

    2013-05-01

    Objective: The purpose of this study was to analyze the optical characteristics of two different nasendoscopes used to assess velopharyngeal insufficiency and to quantify the measurement uncertainties that will occur in a typical set of clinical data. Design: The magnification and barrel distortion associated with nasendoscopy was estimated by using computer software to analyze the apparent dimensions of a spatially calibrated test object at varying object-lens distances. In addition, a method of semiquantitative analysis of velopharyngeal closure using nasendoscopy and computer software is described. To calculate the reliability of this method, 10 nasendoscopy examinations were analyzed two times by three separate operators. The measure of intraoperator and interoperator agreement was evaluated using Pearson's r correlation coefficient. Results: Over an object lens distance of 9 mm, magnification caused the visualized dimensions of the test object to increase by 80%. In addition, dimensions of objects visualized in the far-peripheral field of the nasendoscopic examinations appeared approximately 40% smaller than those visualized in the central field. Using computer software to analyze velopharyngeal closure, the mean correlation coefficient for intrarater reliability was .94 and for interrater reliability was .90. Conclusion: Using a custom-designed apparatus, the effect object-lens distance has on the magnification of nasendoscopic images has been quantified. Barrel distortion has also been quantified and was found to be independent of object-lens distance. Using computer software to analyze clinical images, the intraoperator and interoperator correlation appears to show that ratio-metric measurements are reliable.

  12. Application of neural networks to software quality modeling of a very large telecommunications system.

    PubMed

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
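
    In the same spirit as the modeling pipeline described, the hedged sketch below standardizes a handful of synthetic design metrics, reduces them to principal components, and trains a small neural network to flag fault-prone modules with scikit-learn. The data, metric values, and network size are all invented; this is not the EMERALD model itself.

    ```python
    # Minimal sketch: principal components of software design metrics feeding a
    # small neural-network classifier of fault-prone modules (scikit-learn).
    # The metrics and labels are synthetic; this is not the EMERALD model.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(1)
    n = 600
    metrics = rng.normal(size=(n, 9))                      # 9 design-metric columns
    risk = metrics[:, :3].sum(axis=1) + rng.normal(scale=0.8, size=n)
    fault_prone = (risk > 1.5).astype(int)                 # synthetic class labels

    X_train, X_test, y_train, y_test = train_test_split(
        metrics, fault_prone, test_size=0.3, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=4),
        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))
    ```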

  13. Design and reliability analysis of DP-3 dynamic positioning control architecture

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru

    2011-12-01

    As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water will not be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundancy hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot standby configuration including three identical operator stations and three real-time control computers which connect to each other through dual networks. The functions of motion control and redundancy management of the control computers were implemented by software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting, and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer, and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
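
    To illustrate the reliability payoff of a triple-redundant arrangement (a generic triple-modular-redundancy calculation, not the paper's Markov model of the DP-3 architecture), the sketch below compares a single controller with a 2-out-of-3 voting configuration under constant, independent failure rates; the rate and mission times are assumptions.

    ```python
    # Minimal sketch: reliability of a single controller vs. a 2-out-of-3
    # triple-redundant configuration with constant failure rate lambda.
    # Values are illustrative; this is not the DP-3 Markov model itself.
    import numpy as np

    lam = 1e-4          # failures per hour for one control computer (assumed)
    t = np.array([1_000.0, 5_000.0, 10_000.0])   # mission times in hours

    r_single = np.exp(-lam * t)
    # The voted system works while at least 2 of 3 independent units work:
    r_tmr = 3 * r_single**2 - 2 * r_single**3

    for ti, rs, rt in zip(t, r_single, r_tmr):
        print(f"t = {ti:>7.0f} h   single: {rs:.5f}   2-of-3: {rt:.5f}")
    ```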

  14. Comparative test-retest reliability of metabolite values assessed with magnetic resonance spectroscopy of the brain. The LCModel versus the manufacturer software.

    PubMed

    Fayed, Nicolas; Modrego, Pedro J; Medrano, Jaime

    2009-06-01

    Reproducibility is an essential strength of any diagnostic technique for cross-sectional and longitudinal studies. To comparatively determine the short-term in vivo test-retest reliability of magnetic resonance spectroscopy (MRS) of the brain, the manufacturer's software package was compared with the widely used linear combination of model spectra (LCModel) technique. Single-voxel H-MRS was performed in a series of patients with different pathologies on a 1.5 T clinical scanner. Four areas of the brain were explored with the point resolved spectroscopy technique acquisition mode; the echo time was 35 milliseconds and the repetition time was 2000 milliseconds. We enrolled 15 patients for every area, and the intra-individual variations of metabolites were studied in two consecutive scans without removing the patient from the scanner. Curve fitting and analysis of metabolites were performed with the GE software and the LCModel. Spectra not fulfilling the minimum quality criteria in relation to linewidths and signal-to-noise ratio were rejected. The intraclass correlation coefficients for the N-acetylaspartate/creatine (NAA/Cr) ratios were 0.93, 0.89, 0.9 and 0.8 for the posterior cingulate gyrus, occipital, prefrontal and temporal regions, respectively, with the GE software. For the LCModel, the coefficients were 0.9, 0.89, 0.87 and 0.84, respectively. For the absolute value of NAA, the GE software was also slightly more reproducible than the LCModel. However, for the choline/Cr and myo-inositol/Cr ratios, the LCModel was more reliable than the GE software. The variability we have seen hovers around the percentages observed in previous reports (around 10% for the NAA/Cr ratios). We did not find that the LCModel software is superior to the software of the manufacturer. Reproducibility of metabolite values relies more on the observance of the quality parameters than on the software used.

  15. Reliability and reproducibility analysis of the Cobb angle and assessing sagittal plane by computer-assisted and manual measurement tools.

    PubMed

    Wu, Weifei; Liang, Jie; Du, Yuanli; Tan, Xiaoyi; Xiang, Xuanping; Wang, Wanhong; Ru, Neng; Le, Jinbo

    2014-02-06

    Although many studies on reliability and reproducibility of measurement have been performed on the coronal Cobb angle, few results about reliability and reproducibility are reported on sagittal alignment measurement including the pelvis. We usually use SurgimapSpine software to measure the Cobb angle in our studies; however, there are no reports to date on the reliability and reproducibility of its measurements. Sixty-eight standard standing posteroanterior whole-spine radiographs were reviewed. Three examiners carried out the measurements independently under the settings of manual measurement on X-ray radiographs and SurgimapSpine software on the computer. Parameters measured included pelvic incidence, sacral slope, pelvic tilt, lumbar lordosis (LL), thoracic kyphosis, and coronal Cobb angle. SPSS 16.0 software was used for statistical analyses. The means, standard deviations, intraclass and interclass correlation coefficients (ICC), and 95% confidence intervals (CI) were calculated. There was no notable difference between the two tools (P = 0.21) for the coronal Cobb angle. In the sagittal plane parameters, the ICC of intraobserver reliability for the manual measures varied from 0.65 (T2-T5 angle) to 0.95 (LL angle). Further, for the SurgimapSpine tool, the ICC ranged from 0.75 to 0.98. No significant difference in intraobserver reliability was found between the two measurements (P > 0.05). As for the interobserver reliability, measurements with the SurgimapSpine tool had better ICC (0.71 to 0.98 vs 0.59 to 0.96) and Pearson's coefficient (0.76 to 0.99 vs 0.60 to 0.97). The reliability of SurgimapSpine measures was significantly higher in all parameters except for the coronal Cobb angle, where the difference was not significant (P > 0.05). Although the differences between the two methods are very small, the results of this study indicate that the SurgimapSpine measurement is an equivalent measuring tool to the traditional manual method for the coronal Cobb angle, but is advantageous in spino-pelvic measurements of T2-T5, PT, PI, SS, and LL.

  16. Reliability and reproducibility analysis of the Cobb angle and assessing sagittal plane by computer-assisted and manual measurement tools

    PubMed Central

    2014-01-01

    Background Although many studies on reliability and reproducibility of measurement have been performed on the coronal Cobb angle, few results about reliability and reproducibility are reported on sagittal alignment measurement including the pelvis. We usually use SurgimapSpine software to measure the Cobb angle in our studies; however, there are no reports to date on the reliability and reproducibility of its measurements. Methods Sixty-eight standard standing posteroanterior whole-spine radiographs were reviewed. Three examiners carried out the measurements independently under the settings of manual measurement on X-ray radiographs and SurgimapSpine software on the computer. Parameters measured included pelvic incidence, sacral slope, pelvic tilt, lumbar lordosis (LL), thoracic kyphosis, and coronal Cobb angle. SPSS 16.0 software was used for statistical analyses. The means, standard deviations, intraclass and interclass correlation coefficients (ICC), and 95% confidence intervals (CI) were calculated. Results There was no notable difference between the two tools (P = 0.21) for the coronal Cobb angle. In the sagittal plane parameters, the ICC of intraobserver reliability for the manual measures varied from 0.65 (T2–T5 angle) to 0.95 (LL angle). Further, for the SurgimapSpine tool, the ICC ranged from 0.75 to 0.98. No significant difference in intraobserver reliability was found between the two measurements (P > 0.05). As for the interobserver reliability, measurements with the SurgimapSpine tool had better ICC (0.71 to 0.98 vs 0.59 to 0.96) and Pearson’s coefficient (0.76 to 0.99 vs 0.60 to 0.97). The reliability of SurgimapSpine measures was significantly higher in all parameters except for the coronal Cobb angle, where the difference was not significant (P > 0.05). Conclusion Although the differences between the two methods are very small, the results of this study indicate that the SurgimapSpine measurement is an equivalent measuring tool to the traditional manual method for the coronal Cobb angle, but is advantageous in spino-pelvic measurements of T2-T5, PT, PI, SS, and LL. PMID:24502397

  17. Development of a Standard Set of Software Indicators for Aeronautical Systems Center.

    DTIC Science & Technology

    1992-09-01

    29:12). The composite models listed include COCOMO and the Software Productivity, Quality, and Reliability Model (SPQR) (29:12). The SPQR model was... determine the values of the 68 input parameters. Source provides no specifics. Indicator Name: SPQR (SW Productivity, Qual, Reliability). Indicator Class

  18. On the use and the performance of software reliability growth models

    NASA Technical Reports Server (NTRS)

    Keiller, Peter A.; Miller, Douglas R.

    1991-01-01

    We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the relative error of the predicted number of failures over future finite time intervals relative to the number of failures eventually observed during the intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.

  19. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  20. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
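
    A back-of-the-envelope version of the effect discussed above (a toy two-version calculation, not the report's three models) is sketched below: back-to-back testing flags an input whenever exactly one of two versions fails, so coincident failures go undetected and correlation between versions eats into the gain. All probabilities are invented.

    ```python
    # Minimal sketch: probability that back-to-back testing of two functionally
    # equivalent versions flags a failing input. A discrepancy is seen when
    # exactly one version fails; coincident failures are silent. Numbers invented.
    def detection_probability(p1, p2, p_both):
        """p1, p2: per-input failure probabilities; p_both: coincident failures."""
        return (p1 - p_both) + (p2 - p_both)   # probability that exactly one fails

    p1 = p2 = 0.01
    independent = p1 * p2                 # coincident failure rate if independent
    correlated = 0.004                    # assumed elevated coincident failure rate

    print(f"independent versions: detection prob = {detection_probability(p1, p2, independent):.5f}")
    print(f"correlated versions:  detection prob = {detection_probability(p1, p2, correlated):.5f}")
    ```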

  1. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    DTIC Science & Technology

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching... System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse... line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  2. A Statistical Testing Approach for Quantifying Software Reliability; Application to an Example System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok

    The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I&C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
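
    As a rough illustration of how statistical testing translates into a quantitative reliability claim, the sketch below applies the standard binomial argument for failure-free demands. It is a generic simplification under stated assumptions, not necessarily the procedure BNL applied to the ATR LOCS.

    ```python
    # Hedged sketch: reliability bounds from n statistically representative, failure-free tests.
    import math

    def failure_prob_upper_bound(n_tests: int, confidence: float = 0.95) -> float:
        """Upper bound p such that Pr(no failures in n_tests | p) = 1 - confidence."""
        alpha = 1.0 - confidence
        return 1.0 - alpha ** (1.0 / n_tests)

    def tests_needed(p_target: float, confidence: float = 0.95) -> int:
        """Failure-free tests needed to claim p <= p_target at the given confidence."""
        alpha = 1.0 - confidence
        return math.ceil(math.log(alpha) / math.log(1.0 - p_target))

    print(failure_prob_upper_bound(3000))   # roughly 1e-3 at 95% confidence
    print(tests_needed(1e-4))               # roughly 30,000 failure-free demands
    ```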

  3. [Evaluation of Web-based software applications for administrating and organising an ophthalmological clinical trial site].

    PubMed

    Kortüm, K; Reznicek, L; Leicht, S; Ulbig, M; Wolf, A

    2013-07-01

    The importance and complexity of clinical trials are continuously increasing, especially in innovative specialties like ophthalmology. An efficient organisational structure for the clinical trial site is therefore essential, and in the internet era this can be accomplished with web-based applications. In total, three software applications (Vibe on Prem, SharePoint and an open source solution) were evaluated at a clinical trial site in ophthalmology. The assessment criteria were reliability, ease of administration, usability, scheduling, task lists, knowledge management, operating costs and worldwide availability. Vibe on Prem, customised by the local university, met the assessment criteria best; the other applications were not as strong. By introducing a web-based application for administrating and organising an ophthalmological trial site, studies can be conducted in a more efficient and reliable manner. Georg Thieme Verlag KG Stuttgart · New York.

  4. Software development for safety-critical medical applications

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1992-01-01

    There are many computer-based medical applications in which safety and not reliability is the overriding concern. Reduced, altered, or no functionality of such systems is acceptable as long as no harm is done. A precise, formal definition of what software safety means is essential, however, before any attempt can be made to achieve it. Without this definition, it is not possible to determine whether a specific software entity is safe. A set of definitions pertaining to software safety will be presented and a case study involving an experimental medical device will be described. Some new techniques aimed at improving software safety will also be discussed.

  5. Reliability measurement during software development. [for a multisensor tracking system]

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  6. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as those based on Weibull order statistics.
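
    A minimal example of the kind of Bayesian reliability update such a course covers is the conjugate Gamma-Poisson model for a failure rate; the prior parameters and observed data below are illustrative assumptions only.

    ```python
    # Hedged sketch: Gamma prior on a Poisson failure rate, updated with observed failures.
    from scipy import stats

    # Prior belief about the failure rate (failures per 1000 hours): Gamma(a0, b0).
    a0, b0 = 2.0, 4.0            # prior mean a0/b0 = 0.5 failures per 1000 h

    # Observed evidence: 3 failures over 10 thousand-hour exposure units.
    failures, exposure = 3, 10.0

    # Conjugate update: posterior is Gamma(a0 + failures, b0 + exposure).
    a1, b1 = a0 + failures, b0 + exposure
    posterior = stats.gamma(a=a1, scale=1.0 / b1)

    print(f"posterior mean rate = {posterior.mean():.3f} per 1000 h")
    print(f"95% credible interval = {posterior.interval(0.95)}")
    ```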

  7. Space Shuttle Software Development and Certification

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Henderson, Johnnie A

    2000-01-01

    Man-rated software, "software which is in control of systems and environments upon which human life is critically dependent," must be highly reliable. The Space Shuttle Primary Avionics Software System is an excellent example of such a software system. Lessons learned from more than 20 years of effort have identified basic elements that must be present to achieve this high degree of reliability. The elements include rigorous application of appropriate software development processes, use of trusted tools to support those processes, quantitative process management, and defect elimination and prevention. This presentation highlights methods used within the Space Shuttle project and raises questions that must be addressed to provide similar success in a cost-effective manner on future long-term projects where key application development tools are COTS rather than internally developed custom application development tools.

  8. BurnCase 3D software validation study: Burn size measurement accuracy and inter-rater reliability.

    PubMed

    Parvizi, Daryousch; Giretzlehner, Michael; Wurzer, Paul; Klein, Limor Dinur; Shoham, Yaron; Bohanon, Fredrick J; Haller, Herbert L; Tuca, Alexandru; Branski, Ludwik K; Lumenta, David B; Herndon, David N; Kamolz, Lars-P

    2016-03-01

    The aim of this study was to compare the accuracy of burn size estimation using the computer-assisted software BurnCase 3D (RISC Software GmbH, Hagenberg, Austria) with that using a 2D scan, considered to be the actual burn size. Thirty artificial burn areas were pre-planned and prepared on three mannequins (one child, one female, and one male). Five trained physicians (raters) were asked to assess the size of all wound areas using BurnCase 3D software. The results were then compared with the real wound areas, as determined by 2D planimetry imaging. To examine inter-rater reliability, we performed an intraclass correlation analysis with a 95% confidence interval. The mean wound area estimations of the five raters using BurnCase 3D were in total 20.7±0.9% for the child, 27.2±1.5% for the female and 16.5±0.1% for the male mannequin. Our analysis showed relative overestimations of 0.4%, 2.8% and 1.5% for the child, female and male mannequins respectively, compared to the 2D scan. The intraclass correlation between the single raters for mean percentage of the artificial burn areas was 98.6%. A high intraclass correlation between the single raters and the 2D scan was also evident. BurnCase 3D is a valid and reliable tool for the determination of total body surface area burned in standard models. Further clinical studies including different pediatric and overweight adult mannequins are warranted. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
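
    For readers unfamiliar with the reliability statistic used here, the sketch below computes a two-way random-effects intraclass correlation, ICC(2,1), for a small ratings matrix; the rater scores are made up and do not reproduce the study data.

    ```python
    # Hedged sketch: ICC(2,1) from a two-way ANOVA decomposition of a ratings matrix.
    import numpy as np

    def icc_2_1(ratings: np.ndarray) -> float:
        """ratings: (n_subjects, k_raters) matrix of scores."""
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)
        msr = k * ((row_means - grand) ** 2).sum() / (n - 1)          # subjects
        msc = n * ((col_means - grand) ** 2).sum() / (k - 1)          # raters
        sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Five raters' burn-size estimates (% TBSA) for four test areas -- invented numbers.
    scores = np.array([
        [20.5, 20.9, 20.6, 21.0, 20.7],
        [27.0, 27.5, 27.1, 26.8, 27.3],
        [16.4, 16.6, 16.5, 16.3, 16.6],
        [10.1, 10.4, 10.0, 10.2, 10.3],
    ])
    print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
    ```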

  9. Software service history report

    DOT National Transportation Integrated Search

    2002-01-01

    The safe and reliable operation of software within civil aviation systems and equipment has historically been assured through the application of rigorous design assurance applied during the software development process. Increasingly, manufacturers ar...

  10. Reliability, Safety and Error Recovery for Advanced Control Software

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2003-01-01

    For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.

  11. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
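
    The flavor of such a Markov reliability prediction can be sketched as follows: states count healthy processors, faults arrive at a constant rate, and reconfiguration removes a faulty unit before a second, coincident fault can defeat the vote. The rates and the simplified state space are illustrative assumptions, not the SIFT model.

    ```python
    # Hedged sketch: continuous-time Markov reliability model for a voting processor set.
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4        # per-processor fault arrival rate (per hour) -- illustrative
    mu = 360.0        # reconfiguration (fault-removal) rate (per hour) -- illustrative

    # States: 0 = four good processors, 1 = one latent fault (still masked by the vote),
    #         2 = reconfigured down to three good processors, 3 = system failure.
    Q = np.array([
        [-4 * lam,          4 * lam,       0.0,      0.0],
        [0.0,      -(3 * lam + mu),         mu,  3 * lam],   # second fault before removal -> failure
        [0.0,               0.0,      -3 * lam,  3 * lam],   # simplification: any further fault fails the triad
        [0.0,               0.0,           0.0,      0.0],
    ])

    p0 = np.array([1.0, 0.0, 0.0, 0.0])                      # start fully healthy
    for t in (1.0, 10.0):                                     # mission times in hours
        p_t = p0 @ expm(Q * t)
        print(f"P(system failure by {t} hours) = {p_t[-1]:.3e}")
    ```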

  12. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided, one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  13. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
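
    A toy performability calculation in the spirit described above combines state probabilities from a reliability model with a per-state performance level from queueing analysis; all numbers below are illustrative assumptions, not AIPS results.

    ```python
    # Hedged sketch: performability as reliability-weighted expected performance.
    import numpy as np

    # Probability of ending the mission in each configuration (e.g., 4, 3, 2 healthy
    # processing sites, or failed), taken from a Markov reliability model.
    state_probs = np.array([0.990, 0.008, 0.0015, 0.0005])

    # Throughput (tasks/s) delivered in each configuration, from queueing analysis.
    throughput = np.array([1000.0, 740.0, 480.0, 0.0])

    performability = float(state_probs @ throughput)   # expected delivered performance
    print(f"Expected throughput over the mission: {performability:.1f} tasks/s")
    ```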

  14. Overview of the SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division Activities and Technical Projects

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the area of RMSL have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to Industry & Government - case studies of typical problems encountering uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable for which problems; (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data, and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to Industry & Government - too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true: there are ways to do credible probabilistic analysis with little data; (3) Probabilistic Reliability - a probabilistic reliability literature search has been completed, along with what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to Industry & Government - correct reliability computations at both the component and system level are needed so one can design an item based on its expected usage and life span; (4) Real World Applications of Probabilistic Methods (PM) - a draft of volume 1, comprising aerospace applications, has been released. Volume 2, a compilation of real world applications of probabilistic methods with essential information demonstrating application type and time/cost savings by the use of probabilistic methods for generic applications, is in progress. Relevance to Industry & Government - too often, we say, "The proof is in the pudding." With help from many contributors, we hope to produce such a document. The problem is that not many people are coming forward due to the proprietary nature of the material, so we ask that only minimum information be documented, including the problem description, the method used, whether it resulted in any savings, and how much; (5) Software Reliability - the software reliability concept, program, implementation, guidelines, and standards are being documented. Relevance to Industry & Government - software reliability is a complex issue that must be understood and addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability; (6) Maintainability Standards - maintainability/serviceability industry standards/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to Industry & Government - any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance and life.

  15. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering, and software assurance are addressed herein.

  16. Partitioning Strategy Using Static Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Seo, Yongjin; Soo Kim, Hyeon

    2016-08-01

    Flight software runs on a satellite's on-board computer and must satisfy requirements such as real-time behavior and reliability. The Integrated Modular Avionics (IMA) architecture is used to meet these requirements. IMA introduces the concept of partitions, which affects how flight software is configured: software that was previously loaded on a single system may now be divided into many partitions when it is loaded. Existing studies address this issue with experience-based partitioning methods, but such methods cannot be reused. This paper therefore proposes a partitioning method, based on static analysis techniques, that is reusable and consistent.

  17. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    PubMed

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more widely disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust and reliable technique for sexing the adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
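
    The decision rule described above (classify only when the posterior probability exceeds 0.95, otherwise report the case as indeterminate) can be sketched with a generic linear discriminant classifier; the synthetic measurements below stand in for the os coxae variables, and this is not the DSP2 reference data or its R script.

    ```python
    # Hedged sketch: LDA posterior probabilities with an indeterminate zone below 0.95.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    n = 400
    sex = rng.integers(0, 2, n)                                         # 0 = female, 1 = male
    X = rng.normal(loc=100 + 6 * sex[:, None], scale=4, size=(n, 4))    # 4 synthetic metric variables

    lda = LinearDiscriminantAnalysis().fit(X, sex)

    def estimate_sex(measurements, threshold=0.95):
        post = lda.predict_proba(measurements.reshape(1, -1))[0]
        if post.max() < threshold:
            return "indeterminate"
        return "male" if post.argmax() == 1 else "female"

    print(estimate_sex(np.array([108.0, 107.0, 109.0, 106.0])))
    print(estimate_sex(np.array([103.0, 103.0, 103.0, 103.0])))
    ```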

  18. NASA software specification and evaluation system design, part 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A survey and analysis of the existing methods, tools, and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for the software specification language and the data base verifier are presented.

  19. Effective Software Engineering Leadership for Development Programs

    ERIC Educational Resources Information Center

    Cagle West, Marsha

    2010-01-01

    Software is a critical component of systems ranging from simple consumer appliances to complex health, nuclear, and flight control systems. The development of quality, reliable, and effective software solutions requires the incorporation of effective software engineering processes and leadership. Processes, approaches, and methodologies for…

  20. Reliability of a Single Light Source Purkinjemeter in Pseudophakic Eyes.

    PubMed

    Janunts, Edgar; Chashchina, Ekaterina; Seitz, Berthold; Schaeffel, Frank; Langenbucher, Achim

    2015-08-01

    To study the reliability of Purkinje image analysis for assessment of intraocular lens tilt and decentration in pseudophakic eyes. The study comprised 64 eyes of 39 patients. All eyes underwent phacoemulsification with intraocular lens implanted in the capsular bag. Lens decentration and tilt were measured multiple times by an infrared Purkinjemeter. A total of 396 measurements were performed 1 week and 1 month postoperatively. Lens tilt (Tx, Ty) and decentration (Dx, Dy) in horizontal and vertical directions, respectively, were calculated by dedicated software based on regression analysis for each measurement using only four images, and afterward, the data were averaged (mean values, MV) for repeated sequence of measurements. New software was designed by us for recalculating lens misalignment parameters offline, using a complete set of Purkinje images obtained through the repeated measurements (9 to 15 Purkinje images) (recalculated values, MV'). MV and MV' were compared using SPSS statistical software package. MV and MV' were found to be highly correlated for the Tx and Ty parameters (R2 > 0.9; p < 0.001), moderately correlated for the Dx parameter (R2 > 0.7; p < 0.001), and weakly correlated for the Dy parameter (R2 = 0.23; p < 0.05). Reliability was high (Cronbach α > 0.9) for all measured parameters. Standard deviation values were 0.86 ± 0.69 degrees, 0.72 ± 0.65 degrees, 0.04 ± 0.05 mm, and 0.23 ± 0.34 mm for Tx, Ty, Dx, and Dy, respectively. The Purkinjemeter demonstrated high reliability and reproducibility for lens misalignment parameters. To further improve reliability, we recommend capturing at least six Purkinje images instead of three.

  1. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
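
    A small example of the estimation step this design relies on: a transition probability of the error-state Markov chain is estimated from observed counts during simulated process-control testing, and an upper confidence bound is attached to it. The counts and state names are illustrative assumptions.

    ```python
    # Hedged sketch: point estimate and Clopper-Pearson upper bound for one transition probability.
    from scipy import stats

    # Observed transitions out of an assumed "one version erroneous" state during testing.
    n_out = 5000          # times the chain left this state
    n_bad = 3             # times it moved to the assumed "error propagates to output" state

    p_hat = n_bad / n_out
    p_upper = stats.beta.ppf(0.95, n_bad + 1, n_out - n_bad)   # one-sided 95% upper bound
    print(f"point estimate {p_hat:.2e}, 95% upper bound {p_upper:.2e}")
    ```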

  2. A Bayesian modification to the Jelinski-Moranda software reliability growth model

    NASA Technical Reports Server (NTRS)

    Littlewood, B.; Sofer, A.

    1983-01-01

    The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood (ML) method of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian Jelinski-Moranda (BJM) model is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in the assumptions underlying both models, intended to represent the debugging process more accurately, is discussed.
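
    For reference, the sketch below writes out the standard Jelinski-Moranda likelihood (failure rate proportional to the number of remaining faults) and maximizes it by a crude grid search. It illustrates the ML approach whose instability motivates the paper, not the paper's Bayesian reparameterization, and the inter-failure times are invented.

    ```python
    # Hedged sketch: Jelinski-Moranda log-likelihood and a grid-search MLE.
    import numpy as np

    t = np.array([7., 11., 8., 10., 15., 22., 20., 25., 40., 70.])   # inter-failure times (invented)

    def jm_log_likelihood(N: int, phi: float, t: np.ndarray) -> float:
        """After i-1 corrections the failure rate is phi*(N - i + 1); times are exponential."""
        i = np.arange(1, len(t) + 1)
        remaining = N - i + 1
        if np.any(remaining <= 0) or phi <= 0:
            return -np.inf
        return float(np.sum(np.log(phi * remaining) - phi * remaining * t))

    # Crude maximum-likelihood search over a bounded grid of (N, phi) values.
    best = max(
        ((N, phi, jm_log_likelihood(N, phi, t))
         for N in range(len(t), 200)
         for phi in np.linspace(1e-4, 0.05, 200)),
        key=lambda x: x[2],
    )
    print(f"MLE: N={best[0]}, phi={best[1]:.4f}, logL={best[2]:.2f}")
    ```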

  3. Automated software for analysis of ciliary beat frequency and metachronal wave orientation in primary ciliary dyskinesia.

    PubMed

    Mantovani, Giulia; Pifferi, Massimo; Vozzi, Giovanni

    2010-06-01

    Patients with primary ciliary dyskinesia (PCD) have structural and/or functional alterations of cilia that imply deficits in mucociliary clearance and various respiratory pathologies. A useful indicator for this difficult diagnosis is the ciliary beat frequency (CBF), which is significantly lower in pathological cases than in physiological ones. Computing the CBF is not rapid; the aim of this study is therefore to propose an automated method to evaluate it directly from videos of ciliated cells. The cells are taken from the inferior nasal turbinates, and videos of ciliary movements are recorded and then processed by the developed software. The software consists of a feature-extraction stage (written in C++) and a frequency-computation stage (written in Matlab). The system was tested both on nasal cavity samples and on software models, and the results were promising: in a few seconds it computes a frequency that agrees well with that measured by visual methods. The reliability of the computation increases with the quality of the acquisition system and especially with the sampling frequency. It is concluded that the developed software could be a useful means for PCD diagnosis.
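
    The frequency computation at the heart of such a tool can be sketched as a power-spectrum peak search on an intensity signal extracted from the video; the synthetic signal and sampling rate below are assumptions, not the authors' C++/Matlab pipeline.

    ```python
    # Hedged sketch: estimating ciliary beat frequency from an oscillating intensity signal.
    import numpy as np

    fps = 120.0                                   # video sampling rate (frames/s) -- assumed
    t = np.arange(0, 2.0, 1.0 / fps)              # 2 s of recording
    true_cbf = 9.5                                # Hz, illustrative
    signal = np.sin(2 * np.pi * true_cbf * t) + 0.3 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    band = (freqs > 3) & (freqs < 30)             # physiologically plausible CBF band
    cbf_estimate = freqs[band][np.argmax(spectrum[band])]
    print(f"estimated CBF = {cbf_estimate:.1f} Hz")
    ```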

  4. 3D photography is a reliable method of measuring infantile haemangioma volume over time.

    PubMed

    Robertson, Sarah A; Kimble, Roy M; Storey, Kristen J; Gee Kee, Emma L; Stockton, Kellie A

    2016-09-01

    Infantile haemangiomas are common lesions of infancy. With the development of novel treatments utilised to accelerate their regression, there is a need for a method of assessing these lesions over time. Volume is an ideal assessment measure because of its quantifiable nature. This study investigated whether 3D photography is a valid tool for measuring the volume of infantile haemangiomas over time. Thirteen children with infantile haemangiomas presenting to the Vascular Anomalies Clinic, Royal Children's Hospital/Lady Cilento Children's Hospital, and treated with propranolol were included in the study. Lesion volume was assessed using 3D photography at presentation and at one-month and three-month follow-up. Intrarater reliability was determined by retracing all images several months after the initial mapping. Interrater reliability of the 3D camera software was determined by two investigators, blinded to each other's results, independently assessing infantile haemangioma volume. Lesion volume decreased significantly between presentation and three-month follow-up (p<0.001). Volume intra- and interrater reliability were excellent, with ICC 0.991 (95% CI 0.982, 0.995) and 0.978 (95% CI 0.955, 0.989), respectively. This study demonstrates that images taken with the 3D LifeViz™ camera, with lesion volume calculated using Dermapix® software, provide a reliable method for assessing infantile haemangioma volume over time. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Simulating flow around scaled model of a hypersonic vehicle in wind tunnel

    NASA Astrophysics Data System (ADS)

    Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.

    2016-11-01

    A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used for calculating the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.

  6. Software Carpentry and the Hydrological Sciences

    NASA Astrophysics Data System (ADS)

    Ahmadia, A. J.; Kees, C. E.; Farthing, M. W.

    2013-12-01

    Scientists are spending an increasing amount of time building and using hydrology software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. As hydrology models increase in capability and enter use by a growing number of scientists and their communities, it is important that scientific software development practices scale up to meet the challenges posed by increasing software complexity, lengthening software lifecycles, a growing number of stakeholders and contributors, and a broadened developer base that extends from application domains to high performance computing centers. Many of these challenges in complexity, lifecycles, and developer base have been successfully met by the open source community, and there are many lessons to be learned from their experiences and practices. Additionally, there is much wisdom to be found in the results of research studies conducted on software engineering itself. Software Carpentry aims to bridge the gap between the current state of software development and these known best practices for scientific software development, with a focus on hands-on exercises and practical advice based on the following principles: 1. Write programs for people, not computers. 2. Automate repetitive tasks. 3. Use the computer to record history. 4. Make incremental changes. 5. Use version control. 6. Don't repeat yourself (or others). 7. Plan for mistakes. 8. Optimize software only after it works. 9. Document design and purpose, not mechanics. 10. Collaborate. We discuss how these best practices, arising from solid foundations in research and experience, have been shown to improve scientists' productivity and the reliability of their software.

  7. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability become better understood in the modeling process, this information begins to have important implications on the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. 
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
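
    A compact illustration of this principal-components step, using synthetic metric data rather than the study's measurements, shows how strongly collinear raw metrics collapse onto one or two orthogonal domain scores.

    ```python
    # Hedged sketch: mapping correlated software metrics onto orthogonal attribute domains.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_modules = 200
    size = rng.lognormal(mean=5.0, sigma=0.6, size=n_modules)       # latent "size" factor
    metrics = np.column_stack([
        size * rng.normal(1.0, 0.05, n_modules),                    # lines of code (LOC)
        0.8 * size * rng.normal(1.0, 0.05, n_modules),              # statement count (Stmts)
        np.sqrt(size) * rng.normal(1.0, 0.20, n_modules),           # a complexity-like metric
    ])

    Z = StandardScaler().fit_transform(metrics)                     # put metrics on a common scale
    pca = PCA()
    domains = pca.fit_transform(Z)                                  # orthogonal domain scores

    print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
    # Typically one or two components capture nearly all variation, so regression models
    # can use the uncorrelated domain scores instead of the collinear raw metrics.
    ```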

  8. Assessing Survivability Using Software Fault Injection

    DTIC Science & Technology

    2001-04-01

    Defense Technical Information Center Compilation Part Notice ADP010875. TITLE: Assessing Survivability Using Software Fault Injection... Jeffrey Voas, Reliable Software Technologies, 21351 Ridgetop Circle, #400, Dulles, VA 20166, jmvoas@rstcorp.com. Abstract... approved sources have the

  9. Framework for Small-Scale Experiments in Software Engineering: Guidance and Control Software Project: Software Engineering Case Study

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1998-01-01

    Software is becoming increasingly significant in today's critical avionics systems. To achieve safe, reliable software, government regulatory agencies such as the Federal Aviation Administration (FAA) and the Department of Defense mandate the use of certain software development methods. However, little scientific evidence exists to show a correlation between software development methods and product quality. Given this lack of evidence, a series of experiments has been conducted to understand why and how software fails. The Guidance and Control Software (GCS) project is the latest in this series. The GCS project is a case study of the Requirements and Technical Concepts for Aviation RTCA/DO-178B guidelines, Software Considerations in Airborne Systems and Equipment Certification. All civil transport airframe and equipment vendors are expected to comply with these guidelines in building systems to be certified by the FAA for use in commercial aircraft. For the case study, two implementations of a guidance and control application were developed to comply with the DO-178B guidelines for Level A (critical) software. The development included the requirements, design, coding, verification, configuration management, and quality assurance processes. This paper discusses the details of the GCS project and presents the results of the case study.

  10. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    PubMed

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles, with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing), and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  11. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented Restful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.

  12. Software IV and V Research Priorities and Applied Program Accomplishments Within NASA

    NASA Technical Reports Server (NTRS)

    Blazy, Louis J.

    2000-01-01

    The mission of this research is to be world-class creators and facilitators of innovative, intelligent, high-performance, reliable information technologies that enable NASA missions to (1) increase software safety and quality through error avoidance and early detection and resolution of errors, by utilizing and applying empirically based software engineering best practices; (2) ensure customer software risks are identified and that requirements are met and/or exceeded; (3) research, develop, apply, verify, and publish software technologies for competitive advantage and the advancement of science; and (4) facilitate the transfer of science and engineering data, methods, and practices to NASA, educational institutions, state agencies, and commercial organizations. The goals are to become a national Center of Excellence (COE) in software and system independent verification and validation, and to become an international leading force in the field of software engineering for improving the safety, quality, reliability, and cost performance of software systems. This project addresses the following problems: ensuring the safety of NASA missions, ensuring requirements are met, minimizing programmatic and technological risks of software development and operations, improving software quality, reducing costs and time to delivery, and improving the science of software engineering.

  13. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    ERIC Educational Resources Information Center

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  14. On Quality and Measures in Software Engineering

    ERIC Educational Resources Information Center

    Bucur, Ion I.

    2006-01-01

    Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…

  15. Use of Soft Computing Technologies for a Qualitative and Reliable Engine Control System for Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)

    2001-01-01

    The problem to be addressed in this paper is to explore how Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principal goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks), some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC), which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for designing, developing, implementing, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall approach to engine controller development is further improved and vehicle safety is further ensured. The final product that this paper proposes is an approach to development of an alternative low-cost engine controller capable of performing in unique vision spacecraft vehicles requiring low-cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.

  16. A general software reliability process simulation technique

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process, are described. Given assumed parameters of a particular project, users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful for comparison with actual timeline data, for validating the project input parameters, and for providing data to researchers in reliability prediction modeling.
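
    In the same spirit, a heavily simplified simulation of anomaly injection, detection, and repair can generate the kind of status timeline the text describes; the weekly rates and schedule below are illustrative assumptions, not the program's actual parameters.

    ```python
    # Hedged sketch: a toy software reliability process simulation producing a status timeline.
    import random

    random.seed(42)
    weeks = 40
    injection_rate = 3.0       # expected anomalies injected per week while coding (weeks 1-20)
    detection_prob = 0.15      # weekly chance each latent anomaly is found during test (weeks 10-40)

    latent, found = 0, 0
    timeline = []
    for week in range(1, weeks + 1):
        if week <= 20:                                  # development: inject anomalies
            latent += sum(1 for _ in range(10) if random.random() < injection_rate / 10)
        if week >= 10:                                  # testing: detect and repair
            detected = sum(1 for _ in range(latent) if random.random() < detection_prob)
            latent -= detected
            found += detected
        timeline.append((week, latent, found))

    for week, remaining, fixed in timeline[::10]:
        print(f"week {week:2d}: {remaining:3d} latent anomalies, {fixed:3d} repaired")
    ```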

  17. Automatic documentation system extension to multi-manufacturers' computers and to measure, improve, and predict software reliability

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1975-01-01

    The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system, which performs flow analysis and error detection, has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.

  18. A specialized plug-in software module for computer-aided quantitative measurement of medical images.

    PubMed

    Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H

    2003-12-01

    This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. Once transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.

  19. The impact of software quality characteristics on healthcare outcome: a literature review.

    PubMed

    Aghazadeh, Sakineh; Pirnejad, Habibollah; Moradkhani, Alireza; Aliev, Alvosat

    2014-01-01

    The aim of this study was to discover the effect of software quality characteristics on healthcare quality and efficiency indicators. Through a systematic literature review, we selected and analyzed 37 original research papers to investigate the impact of software indicators (drawn from the ISO 9126 standard quality characteristics and sub-characteristics) on several important healthcare outcome indicators, and finally ranked these software indicators. The results showed that the software characteristics usability, reliability and efficiency were the most frequently favored in the studies, indicating their importance. On the other hand, user satisfaction, quality of patient care, clinical workflow efficiency, providers' communication and information exchange, patient satisfaction and care costs were among the healthcare outcome indicators frequently evaluated in relation to the mentioned software characteristics. Logistic regression was the most common assessment methodology, and Confirmatory Factor Analysis and Structural Equation Modeling were performed to test the structural model's fit. The software characteristics were considered to impact the healthcare outcome indicators through other intermediate factors (variables).

  20. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging.

    PubMed

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M

    2008-01-01

    To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of softwares (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.

  1. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging

    PubMed Central

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM

    2009-01-01

    Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of softwares (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages. PMID:17700582

  2. The cost of software fault tolerance

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1982-01-01

    The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.
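
    The abstract does not reproduce Migneault's model, so the sketch below is only a generic illustration of the cost-redundancy-reliability trade-off it alludes to: majority voting over n independently developed versions, each assumed to succeed with probability r, with cost growing roughly linearly in n plus an adjudication overhead. The function names, the independence assumption and the 15% overhead are illustrative assumptions.

    ```python
    from math import comb

    def majority_vote_reliability(r, n):
        """Probability that a strict majority of n independent versions succeed,
        assuming each version succeeds with probability r."""
        need = n // 2 + 1
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(need, n + 1))

    def redundancy_cost(single_version_cost, n, overhead=0.15):
        """Crude cost model: n versions plus a fixed fraction for voting/adjudication."""
        return single_version_cost * n * (1 + overhead)

    for n in (1, 3, 5):
        print(n, round(majority_vote_reliability(0.99, n), 6), redundancy_cost(1.0, n))
    ```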

  3. A Compatible Hardware/Software Reliability Prediction Model.

    DTIC Science & Technology

    1981-07-22

    machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting...software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several...capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed

  4. FloWave.US: validated, open-source, and flexible software for ultrasound blood flow analysis.

    PubMed

    Coolbaugh, Crystal L; Bush, Emily C; Caskey, Charles F; Damon, Bruce M; Towse, Theodore F

    2016-10-01

    Automated software improves the accuracy and reliability of blood velocity, vessel diameter, blood flow, and shear rate ultrasound measurements, but existing software offers limited flexibility to customize and validate analyses. We developed FloWave.US, open-source software to automate ultrasound blood flow analysis, and demonstrated the validity of its blood velocity (aggregate relative error, 4.32%) and vessel diameter (0.31%) measures with a skeletal muscle ultrasound flow phantom. Compared with a commercial, manual analysis software program, FloWave.US produced equivalent in vivo cardiac cycle time-averaged mean (TAMean) velocities at rest and following a 10-s muscle contraction (mean bias <1 pixel for both conditions). Automated analysis of ultrasound blood flow data was 9.8 times faster than the manual method. Finally, a case study of a lower extremity muscle contraction experiment highlighted the ability of FloWave.US to measure small fluctuations in TAMean velocity, vessel diameter, and mean blood flow at specific time points in the cardiac cycle. In summary, the collective features of our newly designed software (accuracy, reliability, reduced processing time, cost-effectiveness, and flexibility) offer advantages over existing proprietary options. Further, public distribution of FloWave.US allows researchers to easily access and customize code to adapt ultrasound blood flow analysis to a variety of vascular physiology applications. Copyright © 2016 the American Physiological Society.

  5. Increasing the reliability of ecological models using modern software engineering techniques

    Treesearch

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  6. Application of Artificial Intelligence technology to the analysis and synthesis of reliable software systems

    NASA Technical Reports Server (NTRS)

    Wild, Christian; Eckhardt, Dave

    1987-01-01

    The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtably involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.

  7. [Computer assisted application of mandarin speech test materials].

    PubMed

    Zhang, Hua; Wang, Shuo; Chen, Jing; Deng, Jun-Min; Yang, Xiao-Lin; Guo, Lian-Sheng; Zhao, Xiao-Yan; Shao, Guang-Yu; Han, De-Min

    2008-06-01

    To design a reliable and convenient computer-based intelligent speech test system and to evaluate this system. First, the intelligent system was designed in the Delphi programming language. Second, the seven monosyllabic word lists recorded on CD were separated with Cool Edit Pro v2.1 software and loaded into the system as test materials. Finally, the intelligent system was used to evaluate the equivalence of difficulty between the seven lists. Fifty-five college students with normal hearing participated in the study. The seven monosyllabic word lists were of equivalent difficulty for the subjects (F = 1.582, P > 0.05), and the system proved to be reliable and convenient. The intelligent system is feasible for clinical practice.

  8. Stereoelectroencephalography based on the Leksell stereotactic frame and Neurotech operation planning software.

    PubMed

    Zhang, Guangming; Chen, Guoqiang; Meng, Dawei; Liu, Yanwu; Chen, Jianwei; Shu, Lanmei; Liu, Wenbo

    2017-06-01

    This study aimed to introduce a new stereoelectroencephalography (SEEG) system based on Leksell stereotactic frame (L-SEEG) as well as Neurotech operation planning software, and to investigate its safety, applicability, and reliability. L-SEEG, without the help of navigation, includes SEEG operation planning software (Neurotech), Leksell stereotactic frame, and corresponding surgical instruments. Neurotech operation planning software can be used to display three-dimensional images of the cortex and cortical vessels and to plan the intracranial electrode implantation. In 44 refractory epilepsy patients, 364 intracranial electrodes were implanted through the L-SEEG system, and the postoperative complications such as bleeding, cerebral spinal fluid (CSF) leakage, infection, and electrode-related problems were also investigated. All electrodes were implanted accurately as preoperatively planned shown by postoperative lamina computed tomography and preoperative lamina magnetic resonance imaging. There was no severe complication after intracranial electrode implantation through the L-SEEG system. There were no electrode-related problems, no CSF leakage and no infection after surgery. All the patients recovered favorably after SEEG electrode implantation, and only 1 patient had asymptomatic frontal lateral ventricle hematoma (3 mL). The L-SEEG system with Neurotech operation planning software can be used for safe, accurate, and reliable intracranial electrode implantation for SEEG.

  9. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
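
    For orientation, the sketch below shows the kind of single-facet (persons crossed with trials) calculation that underlies a G-theory dependability estimate for trial-averaged ERP scores: variance components are estimated by ANOVA and combined into var_person / (var_person + var_residual / k). This is a textbook formula offered for illustration only; the ERA Toolbox's Matlab implementation handles unbalanced designs and multiple error sources that this toy version does not.

    ```python
    import numpy as np

    def erp_dependability(scores, n_trials_averaged=None):
        """Single-facet generalizability sketch for a persons x trials design.

        scores: (n_persons, n_trials) array of single-trial ERP scores.
        Returns the dependability of a k-trial average, with k defaulting to
        the observed number of trials.
        """
        x = np.asarray(scores, dtype=float)
        n_p, n_t = x.shape
        k = n_trials_averaged or n_t
        grand = x.mean()

        ss_person = n_t * ((x.mean(axis=1) - grand) ** 2).sum()
        ss_trial = n_p * ((x.mean(axis=0) - grand) ** 2).sum()
        ss_resid = ((x - grand) ** 2).sum() - ss_person - ss_trial

        ms_person = ss_person / (n_p - 1)
        ms_resid = ss_resid / ((n_p - 1) * (n_t - 1))

        var_resid = ms_resid
        var_person = max((ms_person - ms_resid) / n_t, 0.0)
        return var_person / (var_person + var_resid / k)
    ```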

  10. Data systems and computer science: Software Engineering Program

    NASA Technical Reports Server (NTRS)

    Zygielbaum, Arthur I.

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.

  11. [Quality assurance of the renal applications software].

    PubMed

    del Real Núñez, R; Contreras Puertas, P I; Moreno Ortega, E; Mena Bares, L M; Maza Muret, F R; Latre Romero, J M

    2007-01-01

    The need for quality assurance of all technical aspects of nuclear medicine studies is widely recognised. However, little attention has been paid to the quality assurance of the applications software. Our work reported here aims at verifying the analysis software for processing of renal nuclear medicine studies (renograms). The software tools were used to build a synthetic dynamic model of renal system. The model consists of two phases: perfusion and function. The organs of interest (kidneys, bladder and aortic artery) were simple geometric forms. The uptake of the renal structures was described by mathematic functions. Curves corresponding to normal or pathological conditions were simulated for kidneys, bladder and aortic artery by appropriate selection of parameters. There was no difference between the parameters of the mathematic curves and the quantitative data produced by the renal analysis program. Our test procedure is simple to apply, reliable, reproducible and rapid to verify the renal applications software.
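
    The abstract does not give the uptake functions used in the synthetic phantom, so the sketch below is only a plausible stand-in: a renogram-like time-activity curve built from an exponential uptake term multiplied by a delayed washout term, with parameters that can be shifted to mimic normal or retentive (pathological) kidneys. The function name and all parameter values are assumptions.

    ```python
    import numpy as np

    def synthetic_renogram(t_min, t_peak=3.5, k_uptake=1.2, k_washout=0.25, scale=1000.0):
        """Synthetic renogram time-activity curve (counts vs. time in minutes):
        exponential uptake followed by washout starting near t_peak."""
        t = np.asarray(t_min, dtype=float)
        uptake = 1.0 - np.exp(-k_uptake * t)
        washout = np.exp(-k_washout * np.clip(t - t_peak, 0.0, None))
        return scale * uptake * washout

    t = np.linspace(0.0, 20.0, 241)                            # 20-minute study, 5 s frames
    normal_kidney = synthetic_renogram(t)
    obstructed_kidney = synthetic_renogram(t, k_washout=0.02)  # retention pattern
    ```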

  12. An empirical evaluation of software quality assurance practices and challenges in a developing country: a comparison of Nigeria and Turkey.

    PubMed

    Sowunmi, Olaperi Yeside; Misra, Sanjay; Fernandez-Sanz, Luis; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    The importance of quality assurance in the software development process cannot be overemphasized because its adoption results in high reliability and easy maintenance of the software system and other software products. Software quality assurance includes different activities such as quality control, quality management, quality standards, quality planning, process standardization and improvement amongst others. The aim of this work is to further investigate the software quality assurance practices of practitioners in Nigeria. While our previous work covered areas on quality planning, adherence to standardized processes and the inherent challenges, this work has been extended to include quality control, software process improvement and international quality standard organization membership. It also makes a comparison based on a similar study carried out in Turkey. The goal is to generate more robust findings that can properly support decision making by the software community. The qualitative research approach, specifically the use of questionnaire research instruments, was applied to acquire data from software practitioners. In addition to the previous results, it was observed that quality assurance practices are quite neglected and this can be the cause of low patronage. Moreover, software practitioners are aware neither of international standards organizations nor of the required process improvement techniques; as such their claimed standards are not aligned to those of accredited bodies, and are only limited to their local experience and knowledge, which makes them questionable. The comparison with Turkey also yielded similar findings, making the results typical of developing countries. The research instrument used was tested for internal consistency using Cronbach's alpha, and it was proved reliable. For the software industry in developing countries to grow strong and be a viable source of external revenue, software assurance practices have to be taken seriously because their effect is evident in the final product. Moreover, quality frameworks and tools which require minimum time and cost are highly needed in these countries.

  13. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly by wire transport aircraft. Fault tolerant designs generated for the error reported and global executive are examined. A description of the alternate routines, implementation requirements, and software validation are included.

  14. Cephalometric and three-dimensional assessment of the posterior airway space and imaging software reliability analysis before and after orthognathic surgery.

    PubMed

    Burkhard, John Patrik Matthias; Dietrich, Ariella Denise; Jacobsen, Christine; Roos, Malgorzota; Lübbers, Heinz-Theo; Obwegeser, Joachim Anton

    2014-10-01

    This study aimed to compare the reliability of three different imaging software programs for measuring the PAS and concurrently to investigate the morphological changes in oropharyngeal structures in mandibular prognathic patients before and after orthognathic surgery using 2D and 3D analysis techniques. The study consisted of 11 randomly chosen patients (8 females and 3 males) who underwent maxillomandibular treatment for correction of Class III anteroposterior mandibular prognathism at the University Hospital in Zurich. A set of standardized LCR and CBCT-scans were obtained from each subject preoperatively (T0), 3 months after surgery (T1) and 3 months to 2 years postoperatively (T2). Morphological changes in the posterior airway space (PAS) were evaluated longitudinally by two different observers with three different imaging software programs (OsiriX® 64-bit, Switzerland; Mimics®, Belgium; BrainLab®, Germany) and manually by analyzing cephalometric X-rays. A significant increase in the upper airway dimensions from before to after surgery occurred in all measured cases. All other cephalometric distances showed no statistically significant alterations. Measuring the volume of the PAS showed no significant changes in any of the cases. All three software programs showed similar outputs in both cephalometric analysis and the 3D measuring technique. A 3D reconstruction of the posterior airway appears to allow a far more reliable and precise statement of postoperative changes than conventional radiography, and its reliability is additionally higher than that of the corresponding manual method. In cases of Class III mandibular prognathism treatment with bilateral split osteotomy of the mandible and simultaneous maxillary advancement, the negative effects of PAS volume decrease may be reduced and might prevent a developing OSAS. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. The Domain-Specific Software Architecture Program

    DTIC Science & Technology

    1992-06-01

    Kang, K.C.; Cohen, S.C.; Jess, J.A.; Novak, W.E.; Peterson, A.S. Feature-Oriented Domain Analysis (FODA) Feasibility Study. (CMU/SEI-90-TR-21, ADA235785...perspective of a controls engineer solving a problem using an iterative process of simulation and analysis. The CMU/SEI-92-SR-9...for schedulability analysis and Markov processes for the determination of reliability. Software architectures are derived from these formal models. ORA

  16. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
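
    DAGMan itself is driven by submit-description files rather than called from code, so the Python sketch below is only a toy analogue of the capability the report credits to it: running a collection of interdependent jobs in dependency order with a bounded retry per job. The job table, dependency map and runner callback are hypothetical.

    ```python
    from graphlib import TopologicalSorter

    def run_dag(jobs, deps, run, max_retries=3):
        """Run interdependent jobs in dependency order with simple retry.

        jobs : name -> job specification
        deps : name -> set of prerequisite job names
        run  : callable(job_spec) -> bool, True on success
        """
        for name in TopologicalSorter(deps).static_order():
            for _ in range(max_retries):
                if run(jobs[name]):
                    break
            else:
                raise RuntimeError(f"job {name!r} failed after {max_retries} attempts")

    deps = {"reconstruct": {"stage_in"}, "analyze": {"reconstruct"}, "stage_out": {"analyze"}}
    jobs = {name: {"cmd": name} for name in ("stage_in", "reconstruct", "analyze", "stage_out")}
    run_dag(jobs, deps, run=lambda spec: True)   # stand-in runner that always succeeds
    ```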

  17. 3D photography is as accurate as digital planimetry tracing in determining burn wound area.

    PubMed

    Stockton, K A; McMillan, C M; Storey, K J; David, M C; Kimble, R M

    2015-02-01

    In the paediatric population careful attention needs to be paid to the techniques utilised for wound assessment to minimise discomfort and stress to the child. To investigate whether 3D photography is a valid measure of burn wound area in children compared to the current clinical gold standard method of digital planimetry using Visitrak™. Twenty-five children presenting to the Stuart Pegg Paediatric Burn Centre for burn dressing change following acute burn injury were included in the study. Burn wound area measurement was undertaken using both digital planimetry (Visitrak™ system) and 3D camera analysis. Inter-rater reliability of the 3D camera software was determined by three investigators independently assessing the burn wound area. A comparison of wound area was assessed using intraclass correlation coefficients (ICC), which demonstrated excellent agreement (0.994; CI 0.986, 0.997). Inter-rater reliability measured using ICC 0.989 (95% CI 0.979, 0.995) was also excellent. Time taken to map the wound was significantly quicker using the camera at bedside compared to Visitrak™: 14.68 (7.00)s versus 36.84 (23.51)s (p<0.001). In contrast, analysing wound area was significantly quicker using the Visitrak™ tablet compared to Dermapix® software for the 3D images: 31.36 (19.67)s versus 179.48 (56.86)s (p<0.001). This study demonstrates that images taken with the 3D LifeViz™ camera and assessed with Dermapix® software constitute a reliable method for wound area assessment in the acute paediatric burn setting. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.

  18. Optimized Biasing of Pump Laser Diodes in a Highly Reliable Metrology Source for Long-Duration Space Missions

    NASA Technical Reports Server (NTRS)

    Poberezhskiy, Ilya; Chang, Daniel; Erlig, Hernan

    2011-01-01

    Non-Planar Ring Oscillator (NPRO) lasers are highly attractive for metrology applications. NPRO reliability for prolonged space missions is limited by the reliability of the 808 nm pump diodes. A combined laser-farm aging parameter allows comparison of different bias approaches. Monte-Carlo software was developed to calculate the reliability of the laser pump architecture and to perform parameter sensitivity studies. To meet stringent Space Interferometry Mission (SIM) Lite lifetime reliability and output power requirements, we developed a single-mode Laser Pump Module architecture that: (1) provides 2 W of power at 808 nm with >99.7% reliability for 5.5 years, and (2) consists of 37 de-rated diode lasers operating at -5 C, with outputs combined in a very low-loss 37x1 all-fiber coupler.
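
    The abstract names Monte-Carlo software for the pump architecture but gives no failure-time model, so the sketch below only illustrates the general approach: draw independent diode lifetimes from an assumed Weibull distribution and estimate the probability that enough of the 37 de-rated diodes survive the 5.5-year mission to keep delivering the required power. The Weibull shape and scale and the k-of-n threshold are illustrative assumptions, not SIM Lite figures.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pump_reliability(n_diodes=37, k_required=25, mission_years=5.5,
                         weibull_shape=1.5, weibull_scale_years=40.0,
                         n_trials=100_000):
        """Monte Carlo estimate of P(at least k_required of n_diodes survive the
        mission), assuming i.i.d. Weibull diode lifetimes."""
        lifetimes = weibull_scale_years * rng.weibull(weibull_shape,
                                                      size=(n_trials, n_diodes))
        survivors = (lifetimes > mission_years).sum(axis=1)
        return (survivors >= k_required).mean()

    print(pump_reliability())
    ```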

  19. Big Software for SmallSats: Adapting cFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary Alex; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS.

  20. Development of an Environment for Software Reliability Model Selection

    DTIC Science & Technology

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... multiversion programming, 7. Hardware can be repaired by spare modules, which is not the case for software, 2-6 N. Preventive maintenance is very important

  1. A Course in Real-Time Embedded Software

    ERIC Educational Resources Information Center

    Archibald, J. K.; Fife, W. S.

    2007-01-01

    Embedded systems are increasingly pervasive, and the creation of reliable controlling software offers unique challenges. Embedded software must interact directly with hardware, it must respond to events in a time-critical fashion, and it typically employs concurrency to meet response time requirements. This paper describes an innovative course…

  2. Corroded Anchor Structure Stability/Reliability (CAS_Stab-R) Software for Hydraulic Structures

    DTIC Science & Technology

    2017-12-01

    This report describes software that provides a probabilistic estimate of time-to-failure for a corroding anchor strand system. These anchor...stability to the structure. A series of unique pull-test experiments conducted by Ebeling et al. (2016) at the U.S. Army Engineer Research and...Reliability (CAS_Stab-R) produces probabilistic Remaining Anchor Life time estimates for anchor cables based upon the direct corrosion rate for the

  3. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
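
    The published method selects 10 femoral and tibial landmarks; as a much-simplified stand-in for the kind of geometry the software computes, the sketch below derives a frontal-plane hip-knee-ankle angle from just three 2-D landmark coordinates. The function name and the three-landmark simplification are assumptions, not the study's protocol.

    ```python
    import numpy as np

    def hka_angle_deg(hip, knee, ankle):
        """Angle (degrees) at the knee between the femoral axis (hip->knee)
        and the tibial axis (knee->ankle), from 2-D landmark coordinates."""
        femur = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
        tibia = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
        cosang = femur @ tibia / (np.linalg.norm(femur) * np.linalg.norm(tibia))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    print(round(hka_angle_deg(hip=(0.0, 40.0), knee=(1.5, 0.0), ankle=(0.0, -38.0)), 1))
    ```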

  4. The Future of Statistical Software. Proceedings of a Forum--Panel on Guidelines for Statistical Software (Washington, D.C., February 22, 1991).

    ERIC Educational Resources Information Center

    National Academy of Sciences - National Research Council, Washington, DC.

    The Panel on Guidelines for Statistical Software was organized in 1990 to document, assess, and prioritize problem areas regarding quality and reliability of statistical software; present prototype guidelines in high priority areas; and make recommendations for further research and discussion. This document provides the following papers presented…

  5. Research of real-time communication software

    NASA Astrophysics Data System (ADS)

    Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong

    2003-11-01

    Real-time communication has been playing an increasingly important role in our work, daily life and ocean monitoring. With the rapid progress of computer and communication techniques, as well as the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper presents research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted, and the software can transmit and receive video, audio and engineering data over the satellite channel. Several software modules are developed that realize point-to-point satellite intercommunication in the ocean monitoring system. The real-time communication software has three advantages. First, it increases the operational reliability of the point-to-point satellite intercommunication system. Second, several optional parameters can be configured, which greatly increases the flexibility of system operation. Third, some hardware is replaced by the real-time communication software, which not only decreases the cost of the system and promotes the miniaturization of the communication system, but also increases the agility of the system.

  6. Extracting data from figures with software was faster, with higher interrater reliability than manual extraction.

    PubMed

    Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia

    2016-06-01

    To compare speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open-source software program. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. Accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software to extract data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.
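
    The abstract reports percent agreement between raters but not the exact agreement criterion, so the sketch below assumes a simple relative-tolerance rule for deciding that two extracted values agree; both the function and the 1% tolerance are illustrative assumptions, not the study's rule.

    ```python
    import numpy as np

    def percent_agreement(values_a, values_b, rel_tolerance=0.01):
        """Share of data points (in %) on which two raters agree to within a
        relative tolerance of the larger magnitude."""
        a = np.asarray(values_a, dtype=float)
        b = np.asarray(values_b, dtype=float)
        agree = np.abs(a - b) <= rel_tolerance * np.maximum(np.abs(a), np.abs(b))
        return 100.0 * agree.mean()
    ```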

  7. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirements to build large, reliable, and maintainable software systems increases with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such technique is to build a software system or environment which directly supports the software engineering process, i.e., the SAGA project, comprising the research necessary to design and build a software development which automates the software engineering process. Progress under SAGA is described.

  8. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
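
    As a hedged illustration of the sensor failure detection idea described above (not the Aerojet algorithm), the sketch below flags any redundant sensor whose reading deviates from the median of its channel group by more than a robust z-score threshold; the threshold and the MAD-based scaling are conventional choices assumed here for illustration.

    ```python
    import numpy as np

    def detect_failed_sensors(readings, threshold=3.0):
        """Return indices of redundant sensors whose readings deviate from the
        group median by more than `threshold` robust standard deviations."""
        x = np.asarray(readings, dtype=float)
        median = np.median(x)
        mad = np.median(np.abs(x - median)) or 1e-9     # guard against zero spread
        robust_z = np.abs(x - median) / (1.4826 * mad)
        return np.flatnonzero(robust_z > threshold)

    print(detect_failed_sensors([101.2, 100.8, 101.0, 87.4]))   # flags index 3
    ```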

  9. Three-dimensional facial anthropometry of unilateral cleft lip infants with a structured light scanning system.

    PubMed

    Li, Guanghui; Wei, Jianhua; Wang, Xi; Wu, Guofeng; Ma, Dandan; Wang, Bo; Liu, Yanpu; Feng, Xinghua

    2013-08-01

    Cleft lip in the presence or absence of a cleft palate is a major public health problem. However, few studies have been published concerning the soft-tissue morphology of cleft lip infants. Currently, obtaining reliable three-dimensional (3D) surface models of infants remains a challenge. The aim of this study was to investigate a new way of capturing 3D images of cleft lip infants using a structured light scanning system. In addition, the accuracy and precision of the acquired facial 3D data were validated and compared with direct measurements. Ten unilateral cleft lip patients were enrolled in the study. Briefly, 3D facial images of the patients were acquired using a 3D scanner device before and after the surgery. Fourteen items were measured by direct anthropometry and 3D image software. The accuracy and precision of the 3D system were assessed by comparative analysis. The anthropometric data obtained using the 3D method were in agreement with the direct anthropometry measurements. All data calculated by the software were 'highly reliable' or 'reliable', as defined in the literature. The localisation of four landmarks was not consistent in repeated experiments of inter-observer reliability in preoperative images (P<0.05), while the intra-observer reliability in both pre- and postoperative images was good (P>0.05). The structured light scanning system is proven to be a non-invasive, accurate and precise method in cleft lip anthropometry. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. Measuring the Pain Area: An Intra- and Inter-Rater Reliability Study Using Image Analysis Software.

    PubMed

    Dos Reis, Felipe Jose Jandre; de Barros E Silva, Veronica; de Lucena, Raphaela Nunes; Mendes Cardoso, Bruno Alexandre; Nogueira, Leandro Calazans

    2016-01-01

    Pain drawings have frequently been used for clinical information and research. The aim of this study was to investigate intra- and inter-rater reliability of area measurements performed on pain drawings. Our secondary objective was to verify the reliability when using computers with different screen sizes, both with and without mouse hardware. Pain drawings were completed by patients with chronic neck pain or neck-shoulder-arm pain. Four independent examiners participated in the study. Examiners A and B used the same computer with a 16-inch screen and wired mouse hardware. Examiner C used a notebook with a 16-inch screen and no mouse hardware, and Examiner D used a computer with an 11.6-inch screen and a wireless mouse. Image measurements were obtained using GIMP and NIH ImageJ computer programs. The length of all the images was measured using GIMP software to a set scale in ImageJ. Thus, each marked area was encircled and the total surface area (cm²) was calculated for each pain drawing measurement. A total of 117 areas were identified and 52 pain drawings were analyzed. The intrarater reliability between all examiners was high (ICC = 0.989). The inter-rater reliability was also high. No significant differences were observed when using different screen sizes or when using or not using the mouse hardware. This suggests that the precision of these measurements is acceptable for the use of this method as a measurement tool in clinical practice and research. © 2014 World Institute of Pain.

  11. Final Technical Report on Quantifying Dependability Attributes of Software Based Safety Critical Instrumentation and Control Systems in Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smidts, Carol; Huang, Funqun; Li, Boyuan

    With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software raises trust in these systems as a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges since software systems’ characteristics differ from those of hardware systems. The lack of systematic science-based methods for quantifying the dependability attributes in software-based instrumentation as well as control systems in safety critical applications has proved itself to be a significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered as a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between various dependability attributes as a whole and of how such dependencies are formed. To address the need for quantification and give a more objective basis to the review process -- therefore reducing regulatory uncertainty -- measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus, obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below. Each stage is further examined in its corresponding chapter. Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method and the causal maps obtained. Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute.
    The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g. security failure. The extraction was based on the analysis of expert elicitation results obtained in Stage 1. Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter. Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and what measures are further required. Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews. Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. Ability to derive such a quantification model shows that the causal models and measurements derived from the previous stages (Stage 1 to Stage 5) can form the technical basis for developing dependability quantification models. Scope restrictions have led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e. the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost effective selection of the duration of the requirements phase.

  12. Distribution System Reliability Analysis for Smart Grid Applications

    NASA Astrophysics Data System (ADS)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address its reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we will examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of the automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures will be the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of the Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
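
    The indices named above have standard definitions, which the short sketch below computes from a list of outage events; the per-event fields and the simple energy-not-served estimate are assumptions for illustration, not the DISREL data model.

    ```python
    def reliability_indices(outages, total_customers):
        """SAIFI, SAIDI and a simple EUE from a list of outage events.

        Each outage dict carries: customers interrupted, duration_h (hours),
        and load_kw (average load interrupted, used only for EUE).
        """
        saifi = sum(o["customers"] for o in outages) / total_customers
        saidi = sum(o["customers"] * o["duration_h"] for o in outages) / total_customers
        eue_kwh = sum(o["load_kw"] * o["duration_h"] for o in outages)
        return saifi, saidi, eue_kwh

    outages = [{"customers": 120, "duration_h": 2.0, "load_kw": 300.0},
               {"customers": 45,  "duration_h": 0.5, "load_kw": 90.0}]
    print(reliability_indices(outages, total_customers=1500))
    ```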

  13. Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing

    NASA Astrophysics Data System (ADS)

    Kim, Dongkyun; Gil, Joon-Min

    2015-03-01

    The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has led to a great deal of attention being focused on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization, which has helped facilitate remote manufacturing in association with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of services by obtaining network requests from end-applications on demand through the separated SDN controller or control plane. However, current SDN approaches are generally focused on the controls and automation of the networks, which indicates that there is a lack of management plane development designed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantage of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault-tolerance of SDN in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for the SDN failover processes based on four principal SDN breakdown scenarios derived from the failures of the controller, SDN nodes, and connected links. The abovementioned SDN troubles significantly reduce the network reachability to remote devices (e.g., 3D printers, super high-definition cameras, etc.) and the reliability of relevant control processes. Our performance consideration and analysis results show that the proposed scheme can shrink operations and management overheads of SDN, which leads to the enhancement of responsiveness and reliability of SDN for remote 3D printing and control processes.

  14. [Study for lung sound acquisition module based on ARM and Linux].

    PubMed

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with ARM and Linux as its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can extract human lung sounds reliably and effectively.

  15. A Predictive Safety Management System Software Package Based on the Continuous Hazard Tracking and Failure Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Quintana, Rolando

    2003-01-01

    The goal of this research was to integrate a previously validated and reliable safety model, called the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). This means that the theory or principles of the CHTFPM were incorporated in a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of enormous quantities of information in this type of study. The CHTFPM theory encompasses the philosophy of looking at the concept of safety engineering from a new perspective: from a proactive, rather than a reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. That is why the CHTFPM is a predictive safety methodology: it foresees or anticipates accidents, system failures and unacceptable risks; therefore, corrective action can be taken in order to prevent all these unwanted issues. Consequently, safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.

  16. PD5: a general purpose library for primer design software.

    PubMed

    Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda

    2013-01-01

    Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third party applications and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantee access to source-code and allow redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
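
    PD5 itself is a C++ library, so the snippet below is not taken from its source; it is only a Python illustration of the kind of elementary primer metric such a library bundles, here the Wallace-rule melting temperature and GC content of a short oligo. Function names are hypothetical.

    ```python
    def wallace_tm(primer):
        """Wallace-rule melting temperature for short primers (roughly <14 nt):
        Tm = 2*(A+T) + 4*(G+C), in degrees Celsius."""
        p = primer.upper()
        return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

    def gc_content(primer):
        """GC content of a primer as a percentage."""
        p = primer.upper()
        return 100.0 * (p.count("G") + p.count("C")) / len(p)

    print(wallace_tm("ATGCGTACGT"), round(gc_content("ATGCGTACGT"), 1))
    ```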

  17. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude requires: looking at what you do NOT want software to do along with what you want it to do; and assuming things will go wrong. New procedures and changes to entire software development process are necessary: special software safety analysis techniques are needed; and design techniques, especially eliminating complexity, can be very helpful.

  18. The Software Management Environment (SME)

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.; Decker, William; Buell, John

    1988-01-01

    The Software Management Environment (SME) is a research effort designed to utilize the past experiences and results of the Software Engineering Laboratory (SEL) and to incorporate this knowledge into a tool for managing projects. SME provides the software development manager with the ability to observe, compare, predict, analyze, and control key software development parameters such as effort, reliability, and resource utilization. The major components of the SME, the architecture of the system, and examples of the functionality of the tool are discussed.

  19. Effects of Medical Device Regulations on the Development of Stand-Alone Medical Software: A Pilot Study.

    PubMed

    Blagec, Kathrin; Jungwirth, David; Haluza, Daniela; Samwald, Matthias

    2018-01-01

    Medical device regulations which aim to ensure safety standards do not only apply to hardware devices but also to standalone medical software, e.g. mobile apps. To explore the effects of these regulations on the development and distribution of medical standalone software. We invited a convenience sample of 130 domain experts to participate in an online survey about the impact of current regulations on the development and distribution of medical standalone software. 21 respondents completed the questionnaire. Participants reported slight positive effects on usability, reliability, and data security of their products, whereas the ability to modify already deployed software and customization by end users were negatively impacted. The additional time and costs needed to go through the regulatory process were perceived as the greatest obstacles in developing and distributing medical software. Further research is needed to compare positive effects on software quality with negative impacts on market access and innovation. Strategies for avoiding over-regulation while still ensuring safety standards need to be devised.

  20. Software Technology for Adaptable, Reliable Systems (STARS)

    DTIC Science & Technology

    1994-03-25

    Timeline(3), SECOMO(3), SEER(3), GSFC Software Engineering Lab Model(1), SLIM(4), SEER-SEM(1), SPQR(2), PRICE-S(2), internally-developed models(3), APMSS(1...3 • Timeline - 3 • SASET (Software Architecture Sizing Estimating Tool) - 2 • MicroMan II - 2 • LCM (Logistics Cost Model) - 2 • SPQR - 2 • PRICE-S - 2

  1. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  2. 75 FR 4375 - Transmission Loading Relief Reliability Standard and Curtailment Priorities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-27

    ... Site: http://www.ferc.gov . Documents created electronically using word processing software should be... ensure operation within acceptable reliability criteria. NERC Glossary of Terms Used in Reliability Standards at 19, available at http://www.nerc.com/files/Glossary_12Feb08.pdf (NERC Glossary). An...

  3. CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 2

    DTIC Science & Technology

    2008-02-01

    data systems are selected as tools to provide the data. For example, the Air Force decided to initiate a Reliability Pathfinder to study and define...small projects, and someone just sitting will certainly be noticed more readily. Providing tools for communication such as white boards and open space...community may have to pay for both. In the case of DRILS, we saw that it could potentially interface with the Reliability and Maintainability Information

  4. A study of discrete control signal fault conditions in the shuttle DPS

    NASA Technical Reports Server (NTRS)

    Reddi, S. S.; Retter, C. T.

    1976-01-01

    An analysis of the effects of discrete failures on the data processing subsystem is presented. A functional description of each discrete together with a list of software modules that use this discrete are included. A qualitative description of the consequences that may ensue due to discrete failures is given followed by a probabilistic reliability analysis of the data processing subsystem. Based on the investigation conducted, recommendations were made to improve the reliability of the subsystem.

  5. Improvement of Computer Software Quality through Software Automated Tools.

    DTIC Science & Technology

    1986-08-31

    requirement for increased emphasis on software quality assurance has led to the creation of various methods of verification and validation. Experience...result was a vast array of methods, systems, languages and automated tools to assist in the process. Given that the primary role of quality assurance is...Unfortunately, there is no single method, tool or technique that can ensure accurate, reliable and cost-effective software. Therefore, government and industry

  6. Assuring Software Reliability

    DTIC Science & Technology

    2014-08-01

    technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner. 1.3 Security Example...that took three high-voltage lines out of service and a software failure (a race condition) that disabled the computing service that notified the... service had failed. Instead of analyzing the details of the alarm server failure, the reviewers asked why the following software assurance claim had

  7. The research and practice of spacecraft software engineering

    NASA Astrophysics Data System (ADS)

    Chen, Chengxin; Wang, Jinghua; Xu, Xiaoguang

    2017-06-01

    In order to ensure the safety and reliability of spacecraft software products, it is necessary to apply engineering management. Firstly, the paper introduces the problems of unsystematic planning, unclear classification management and discontinuous improvement mechanisms in domestic and foreign spacecraft software engineering management. Then, it proposes a solution for software engineering management based on a system-integration viewpoint from the perspective of the spacecraft system. Finally, an application to a spacecraft is given as an example. The research provides a reference for carrying out spacecraft software engineering management and improving software product quality.

  8. The analysis of the statistical and historical information gathered during the development of the Shuttle Orbiter Primary Flight Software

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Marchbanks, M. P., Jr.; Quick, M. J.

    1982-01-01

    The results of an effort to thoroughly and objectively analyze the statistical and historical information gathered during the development of the Shuttle Orbiter Primary Flight Software are given. The particular areas of interest include the cost of the software, the reliability of the software, the requirements for the software, and how the requirements changed during development of the system. Data related to the current version of the software system produced some interesting results. Suggestions are made for saving additional data that will allow further investigation.

  9. The reliable multicast protocol application programming interface

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Whetten, Brian

    1995-01-01

    The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.

  10. SenseMyHeart: A cloud service and API for wearable heart monitors.

    PubMed

    Pinto Silva, P M; Silva Cunha, J P

    2015-01-01

    In the era of ubiquitous computing, the growing adoption of wearable systems and body sensor networks is paving the way for new research and software for cardiovascular intensity, energy expenditure, and stress and fatigue detection through cardiovascular monitoring. Several systems have received clinical certification and provide large amounts of reliable heart-related data on a continuous basis. PhysioNet provides equally reliable open-source software tools for ECG processing and analysis that can be combined with these devices. However, this software remains difficult to use in a mobile environment and for researchers unfamiliar with Linux-based systems. In the present paper we present an approach that aims at tackling these limitations by developing a cloud service that provides an API for a PhysioNet-based pipeline for ECG processing and Heart Rate Variability measurement. We describe the proposed solution, along with its advantages and tradeoffs. We also present some client tools (Windows and Android) and several projects where the developed cloud service has been used successfully as a standard for Heart Rate and Heart Rate Variability studies in different scenarios.
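
    A minimal sketch of the kind of time-domain Heart Rate Variability summary such a PhysioNet-based pipeline produces, assuming the RR intervals have already been extracted from the ECG; the function name and sample data are illustrative only and are not part of the SenseMyHeart API.

      # Illustrative only: time-domain HRV metrics from RR intervals (in ms).
      import math

      def hrv_time_domain(rr_ms):
          """Return mean heart rate, SDNN and RMSSD for a list of RR intervals."""
          mean_rr = sum(rr_ms) / len(rr_ms)
          mean_hr = 60000.0 / mean_rr                       # beats per minute
          sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (len(rr_ms) - 1))
          diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
          rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
          return {"mean_hr": mean_hr, "sdnn": sdnn, "rmssd": rmssd}

      print(hrv_time_domain([812, 790, 805, 830, 798, 815, 801]))   # hypothetical beats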

  11. Open source posturography.

    PubMed

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

    The proposed validation goal of 0.9 in intra-class correlation coefficient was reached with the results of this study. With the obtained results we consider the developed software (RombergLab) to be a validated balance assessment software. The reliability of this software depends on the technical specifications of the force platform used. The objective was to develop and validate posturography software and share its source code under open source terms. Prospective non-randomized validation study: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open source software with a low-cost force platform. The intra-class correlation index of the sway area obtained from the center of pressure variations in both devices for the six conditions was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman graphic concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
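
    A hedged sketch of one common definition of sway area, the 95% confidence ellipse fitted to the centre-of-pressure trace; the abstract does not specify RombergLab's exact algorithm, and the synthetic data below are illustrative only.

      # Illustrative only: sway area as the 95% confidence ellipse of the COP trace.
      import numpy as np

      def sway_area_95(cop_x, cop_y):
          cov = np.cov(np.vstack([cop_x, cop_y]))   # 2x2 covariance of the COP samples
          eigvals = np.linalg.eigvalsh(cov)
          chi2_95_2dof = 5.991                      # chi-square 0.95 quantile, 2 dof
          return np.pi * chi2_95_2dof * np.sqrt(eigvals[0] * eigvals[1])

      rng = np.random.default_rng(0)
      x, y = rng.normal(0, 2.0, 500), rng.normal(0, 1.2, 500)   # synthetic COP, in mm
      print(f"sway area ~ {sway_area_95(x, y):.1f} mm^2")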

  12. The Stability and Validity of Automated Vocal Analysis in Preverbal Preschoolers With Autism Spectrum Disorder

    PubMed Central

    Woynaroski, Tiffany; Oller, D. Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul

    2017-01-01

    Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. PMID:27459107

  13. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
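
    A toy sketch of the two-parameter Weibull strength model that underlies CARES-type probabilistic life prediction, assuming a single, uniformly stressed element; the real software integrates transient finite-element stress fields, and all numbers below are hypothetical.

      # Illustrative only: Weibull failure probability for a uniformly stressed element.
      import math

      def weibull_failure_probability(stress, scale_sigma0, modulus_m):
          """P_f = 1 - exp(-(sigma / sigma_0)**m)."""
          return 1.0 - math.exp(-((stress / scale_sigma0) ** modulus_m))

      for stress_mpa in (200.0, 300.0, 400.0):      # hypothetical applied stresses, MPa
          p_f = weibull_failure_probability(stress_mpa, scale_sigma0=350.0, modulus_m=10.0)
          print(f"{stress_mpa:.0f} MPa -> P_f = {p_f:.4f}")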

  14. Building a Library Web Server on a Budget.

    ERIC Educational Resources Information Center

    Orr, Giles

    1998-01-01

    Presents a method for libraries with limited budgets to create reliable Web servers with existing hardware and free software available via the Internet. Discusses staff, hardware and software requirements, and security; outlines the assembly process. (PEN)

  15. Enhanced CARES Software Enables Improved Ceramic Life Prediction

    NASA Technical Reports Server (NTRS)

    Janosik, Lesley A.

    1997-01-01

    The NASA Lewis Research Center has developed award-winning software that enables American industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, graphite) structures in a wide variety of 21st century applications. The CARES (Ceramics Analysis and Reliability Evaluation of Structures) series of software is successfully used by numerous engineers in industrial, academic, and government organizations as an essential element of the structural design and material selection processes. The latest version of this software, CARES/Life, provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. CARES/Life was recently enhanced by adding new modules designed to improve functionality and user-friendliness. In addition, a beta version of the newly-developed CARES/Creep program (for determining the creep life of monolithic ceramic components) has just been released to selected organizations.

  16. Improving the Effectiveness of Program Managers

    DTIC Science & Technology

    2006-05-03

    Improving the Effectiveness of Program Managers Systems and Software Technology Conference Salt Lake City, Utah May 3, 2006 Presented by GAO’s...Companies’ best practices Motorola Caterpillar Toyota FedEx NCR Teradata Boeing Hughes Space and Communications Disciplined software and management...and total ownership costs Collection of metrics data to improve software reliability Technology readiness levels and design maturity Statistical

  17. S2O - A software tool for integrating research data from general purpose statistic software into electronic data capture systems.

    PubMed

    Bruland, Philipp; Dugas, Martin

    2017-01-07

    Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS. Usually, data is transferred into statistic software, such as SAS, R or IBM SPSS Statistics, for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: it is generally not possible to ensure sufficient rights and role management, and it is not traced who changed which data, when and why. Therefore, such systems are not able to comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables a reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial meta- and patient data. Advantages of EDC systems are support for multi-user and multicenter clinical trials as well as auditable data. Migration from spreadsheet-based data collection to EDC systems is labor-intensive and time-consuming at present. Hence, the objectives of this research work are to develop a mapping model, implement a converter between IBM SPSS and the CDISC ODM standard, and evaluate this approach regarding syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements; study-related ODM elements are not available in SPSS. The S2O converting tool was implemented as a command-line tool using the SPSS internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into SPSS format. Clinical data values were also successfully transformed into the ODM structure. Transformation between the spreadsheet format of IBM SPSS and the ODM standard for definition and exchange of trial data is feasible. S2O facilitates migration from Excel- or SPSS-based data collections towards reliable EDC systems. Thereby, advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and particularly compliance with regulatory requirements, become achievable.
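
    A heavily simplified sketch of the kind of SPSS-variable to CDISC ODM mapping described above; the ItemGroupDef/ItemDef element names follow the ODM standard, but this is not the S2O implementation, the variable dictionary is hypothetical, and the surrounding Study and MetaDataVersion structure required by a valid ODM file is omitted.

      # Illustrative only: emit ODM-style ItemGroupDef/ItemDef elements for SPSS-like variables.
      import xml.etree.ElementTree as ET

      spss_variables = [                       # hypothetical SPSS dictionary entries
          {"name": "AGE", "label": "Age at enrolment", "type": "integer"},
          {"name": "SYSBP", "label": "Systolic blood pressure", "type": "float"},
      ]

      item_group = ET.Element("ItemGroupDef", OID="IG.DEMOG", Name="Demographics", Repeating="No")
      item_defs = []
      for var in spss_variables:
          ET.SubElement(item_group, "ItemRef", ItemOID=f"IT.{var['name']}", Mandatory="No")
          item_defs.append(ET.Element("ItemDef", OID=f"IT.{var['name']}",
                                      Name=var["label"], DataType=var["type"]))

      for element in [item_group] + item_defs:
          print(ET.tostring(element, encoding="unicode"))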

  18. Software Metrics

    DTIC Science & Technology

    1988-12-01

    The software development scene is often characterized by schedule and cost estimates that are grossly inaccurate, SEI... c. SPQR Model-Jones; d. COPMO-Thebaut... The time T (in seconds) is simply derived from E by dividing by the Stroud number, S: T = E/S. T. Capers Jones has developed a software cost estimation model called the Software Productivity, Quality, and Reliability (SPQR) model. The basic approach is similar to that of Boehm's... The value
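
    A hedged sketch of the Halstead-style estimate referenced in the snippet above: effort E is converted to a time T in seconds by dividing by the Stroud number S (commonly taken as 18); the operator and operand counts below are hypothetical.

      # Illustrative only: Halstead effort and the T = E / S coding-time estimate.
      import math

      def halstead_time(n1, n2, N1, N2, stroud=18):
          """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
          vocabulary = n1 + n2
          length = N1 + N2
          volume = length * math.log2(vocabulary)   # Halstead volume V
          difficulty = (n1 / 2.0) * (N2 / n2)       # Halstead difficulty D
          effort = difficulty * volume              # E = D * V
          return effort / stroud                    # T = E / S

      print(f"estimated coding time: {halstead_time(20, 35, 120, 95):.0f} s")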

  19. Identification of Shiga-Toxigenic Escherichia coli outbreak isolates by a novel data analysis tool after matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Christner, Martin; Dressler, Dirk; Andrian, Mark; Reule, Claudia; Petrini, Orlando

    2017-01-01

    The fast and reliable characterization of bacterial and fungal pathogens plays an important role in infectious disease control and tracking of outbreak agents. DNA based methods are the gold standard for epidemiological investigations, but they are still comparatively expensive and time-consuming. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a fast, reliable and cost-effective technique now routinely used to identify clinically relevant human pathogens. It has been used for subspecies differentiation and typing, but its use for epidemiological tasks, e. g. for outbreak investigations, is often hampered by the complexity of data analysis. We have analysed publicly available MALDI-TOF mass spectra from a large outbreak of Shiga-Toxigenic Escherichia coli in northern Germany using a general purpose software tool for the analysis of complex biological data. The software was challenged with depauperate spectra and reduced learning group sizes to mimic poor spectrum quality and scarcity of reference spectra at the onset of an outbreak. With high quality formic acid extraction spectra, the software's built in classifier accurately identified outbreak related strains using as few as 10 reference spectra (99.8% sensitivity, 98.0% specificity). Selective variation of processing parameters showed impaired marker peak detection and reduced classification accuracy in samples with high background noise or artificially reduced peak counts. However, the software consistently identified mass signals suitable for a highly reliable marker peak based classification approach (100% sensitivity, 99.5% specificity) even from low quality direct deposition spectra. The study demonstrates that general purpose data analysis tools can effectively be used for the analysis of bacterial mass spectra.

  20. Analysis of key technologies for virtual instruments metrology

    NASA Astrophysics Data System (ADS)

    Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang

    2008-12-01

    Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to the software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing and statistics with the support of the powerful computing capability of the PC. Another concern is evaluation of software features such as the correctness, reliability, stability, security and real-time behavior of VIs. Technologies from the software engineering, software testing and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing and modeling approaches can be used to evaluate the reliability of modules, components, applications and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. In order to enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing and system benchmarking, a framework for the automatic tool is proposed in this paper. An investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing and security assessment demonstrates the feasibility of the proposed automatic framework.
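
    A minimal sketch of the Monte Carlo style of uncertainty evaluation mentioned above: synthetic inputs with known noise are pushed through a measurement algorithm and the spread of the results is examined. The RMS-amplitude estimator and noise level here are hypothetical stand-ins, not part of any cited tool.

      # Illustrative only: Monte Carlo evaluation of software-induced measurement uncertainty.
      import numpy as np

      def measure_rms(samples):
          return np.sqrt(np.mean(np.square(samples)))

      rng = np.random.default_rng(42)
      true_rms = 1.0 / np.sqrt(2.0)                  # RMS of a unit-amplitude sine
      t = np.linspace(0.0, 1.0, 1000, endpoint=False)
      results = []
      for _ in range(5000):                          # Monte Carlo trials
          signal = np.sin(2 * np.pi * 50 * t) + rng.normal(0.0, 0.01, t.size)
          results.append(measure_rms(signal))
      results = np.array(results)
      print(f"bias = {results.mean() - true_rms:.2e}, standard uncertainty = {results.std(ddof=1):.2e}")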

  1. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation windows setups. For consistent evaluation we developed LFQbench, an R-package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  2. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.

  3. Reliability and availability analysis of a 10 kW@20 K helium refrigerator

    NASA Astrophysics Data System (ADS)

    Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.

    2017-02-01

    A 10 kW@20 K helium refrigerator has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator's reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected and failure rate distributions are fitted with the software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed and the critical components with higher risks are pointed out. The software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that the compressors, turbine and vacuum pump are the critical components and the key units of this refrigerator. Mitigation actions with respect to design, testing, maintenance and operation are proposed to decrease the major and medium risks.
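
    A minimal sketch, not the BlockSim model, of the series-system availability roll-up such an analysis typically includes: each unit's steady-state availability is MTBF / (MTBF + MTTR), and units in series multiply. The MTBF/MTTR values below are hypothetical.

      # Illustrative only: steady-state availability of a series system.
      units = {                        # hypothetical (MTBF, MTTR) in hours
          "compressor":  (8000.0, 24.0),
          "turbine":     (12000.0, 48.0),
          "vacuum_pump": (6000.0, 12.0),
      }

      def availability(mtbf, mttr):
          return mtbf / (mtbf + mttr)

      system_availability = 1.0
      for name, (mtbf, mttr) in units.items():
          a = availability(mtbf, mttr)
          system_availability *= a
          print(f"{name:12s} A = {a:.4f}")
      print(f"series system A = {system_availability:.4f}")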

  4. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mbaya, Timmy; Mengshoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft and UAVs, depend more and more on highly complex safety-critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We focus on the approach to developing reliable and robust health models for the combined software and sensor systems.
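
    A hedged sketch of the Bayesian-network idea described above, reduced to a single health node updated by one sensor-reading observation via Bayes' rule; the probabilities are illustrative and are not taken from the paper.

      # Illustrative only: posterior health estimate from one observation.
      prior_healthy = 0.99                  # P(sensor healthy)
      p_inrange_given_healthy = 0.97        # P(reading in range | healthy)
      p_inrange_given_faulty = 0.30         # P(reading in range | faulty)

      def posterior_healthy(reading_in_range):
          if reading_in_range:
              num = p_inrange_given_healthy * prior_healthy
              den = num + p_inrange_given_faulty * (1 - prior_healthy)
          else:
              num = (1 - p_inrange_given_healthy) * prior_healthy
              den = num + (1 - p_inrange_given_faulty) * (1 - prior_healthy)
          return num / den

      print(f"P(healthy | in-range reading)     = {posterior_healthy(True):.4f}")
      print(f"P(healthy | out-of-range reading) = {posterior_healthy(False):.4f}")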

  5. Application of Kingview and PLC in friction durability test system

    NASA Astrophysics Data System (ADS)

    Gao, Yinhan; Cui, Jing; Yang, Kaiyu; Ke, Hui; Song, Bing

    2013-01-01

    Using PLC and KingView software, a friction durability test system is designed. The overall program, hardware configuration, software structure and monitoring interface are described in detail. The PLC ensures the stability of data acquisition, and the KingView software makes the HMI easy to operate. Practical application shows that the proposed system is low-cost and highly reliable.

  6. Software Cuts Homebuilding Costs, Increases Energy Efficiency

    NASA Technical Reports Server (NTRS)

    2015-01-01

    To sort out the best combinations of technologies for a crewed mission to Mars, NASA Headquarters awarded grants to MIT's Department of Aeronautics and Astronautics to develop an algorithm-based software tool that highlights the most reliable and cost-effective options. Utilizing the software, Professor Edward Crawley founded Cambridge, Massachusetts-based Ekotrope, which helps homebuilders choose cost- and energy-efficient floor plans and materials.

  7. FabricS: A user-friendly, complete and robust software for particle shape-fabric analysis

    NASA Astrophysics Data System (ADS)

    Moreno Chávez, G.; Castillo Rivera, F.; Sarocchi, D.; Borselli, L.; Rodríguez-Sedano, L. A.

    2018-06-01

    Shape-fabric is a textural parameter related to the spatial arrangement of elongated particles in geological samples. Its usefulness spans a range from sedimentary petrology to igneous and metamorphic petrology. Independently of the process being studied, when a material flows, the elongated particles are oriented with the major axis in the direction of flow. In sedimentary petrology this information has been used for studies of paleo-flow direction of turbidites, the origin of quartz sediments, and locating ignimbrite vents, among others. In addition to flow direction and its polarity, the method enables flow rheology to be inferred. The use of shape-fabric has been limited due to the difficulties of automatically measuring particles and analyzing them with reliable circular statistics programs. This has dampened interest in the method for a long time. Shape-fabric measurement has increased in popularity since the 1980s thanks to the development of new image analysis techniques and circular statistics software. However, the programs currently available are unreliable, outdated, incompatible with newer operating systems, or require programming skills. The goal of our work is to develop a user-friendly MATLAB program with a graphical user interface that can process images, provides editing functions and thresholds (elongation and size) for selecting a particle population, and analyzes that population with reliable circular statistics algorithms. Moreover, the method also has to produce rose diagrams, orientation vectors, and a complete series of statistical parameters. All these requirements are met by our new software. In this paper, we briefly explain the methodology from collection of oriented samples in the field to the minimum number of particles needed to obtain reliable fabric data. We obtained the data using specific statistical tests and taking into account the degree of iso-orientation of the samples and the required degree of reliability. The program has been verified by means of several simulations performed using appropriately designed features and by analyzing real samples.
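
    An illustrative sketch of the circular statistics behind a shape-fabric analysis: for axial data (particle long-axis orientations in 0-180 degrees) the angles are doubled before computing the mean direction and the mean resultant length R, a simple measure of iso-orientation. The orientation data below are synthetic, not output from FabricS.

      # Illustrative only: mean orientation and alignment strength for axial data.
      import math

      def axial_mean_and_strength(angles_deg):
          doubled = [math.radians(2 * a) for a in angles_deg]
          c = sum(math.cos(t) for t in doubled) / len(doubled)
          s = sum(math.sin(t) for t in doubled) / len(doubled)
          mean_dir = (math.degrees(math.atan2(s, c)) / 2.0) % 180.0
          r = math.hypot(c, s)              # 1 = perfectly aligned, 0 = uniform
          return mean_dir, r

      print(axial_mean_and_strength([10, 15, 8, 12, 170, 5, 20, 14]))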

  8. Improving a data-acquisition software system with abstract data type components

    NASA Technical Reports Server (NTRS)

    Howard, S. D.

    1990-01-01

    Abstract data types and object-oriented design are active research areas in computer science and software engineering. Much of the interest is aimed at new software development. Abstract data type packages developed for a discontinued software project were used to improve a real-time data-acquisition system under maintenance. The result saved effort and contributed to a significant improvement in the performance, maintainability, and reliability of the Goldstone Solar System Radar Data Acquisition System.

  9. Synchronization and fault-masking in redundant real-time systems

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, K. G.; Butler, R. W.

    1983-01-01

    A real-time computer may fail because of massive component failures or because it does not respond quickly enough to satisfy real-time requirements. An increase in redundancy, a conventional means of improving reliability, can improve the former but can in some cases degrade the latter considerably due to the overhead associated with redundancy management, namely the time delay resulting from synchronization and voting/interactive consistency techniques. The implications for reliability of synchronization and voting/interactive consistency algorithms in N-modular clusters are considered. All these studies were carried out in the context of real-time applications. As a demonstrative example, we have analyzed results from experiments conducted at the NASA Airlab on the Software Implemented Fault Tolerance (SIFT) computer. This analysis has indeed indicated that in most real-time applications it is better to employ hardware synchronization instead of software synchronization and not to allow reconfiguration.

  10. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  11. Investigation of an advanced fault tolerant integrated avionics system

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.; Cottrell, D.; Flanders, J.; Javornik, A.; Rusovick, M.

    1986-01-01

    Presented is an advanced, fault-tolerant multiprocessor avionics architecture as could be employed in an advanced rotorcraft such as LHX. The processor structure is designed to interface with existing digital avionics systems and concepts, including the Army Digital Avionics System (ADAS) cockpit/display system, navaid and communications suites, integrated sensing suite, and the Advanced Digital Optical Control System (ADOCS). The report defines mission, maintenance and safety-of-flight reliability goals as might be expected for an operational LHX aircraft. Based on the use of a modular, compact (16-bit) microprocessor card family, results of a preliminary study examining simplex, dual and standby-sparing architectures are presented. Given the stated constraints, it is shown that the dual architecture is best suited to meet reliability goals with minimum hardware and software overhead. The report presents hardware and software design considerations for realizing the architecture, including redundancy management requirements and techniques as well as verification and validation needs and methods.

  12. Storage system software solutions for high-end user needs

    NASA Technical Reports Server (NTRS)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  13. Functional description of the ISIS system

    NASA Technical Reports Server (NTRS)

    Berman, W. J.

    1979-01-01

    Development of software for avionic and aerospace applications (flight software) is influenced by a unique combination of factors which includes: (1) the length of the life cycle of each project; (2) the necessity for cooperation between the aerospace industry and NASA; (3) the need for flight software that is highly reliable; (4) the increasing complexity and size of flight software; and (5) the high quality of the programmers and the tightening of project budgets. The Interactive Software Invocation System (ISIS) described here is designed to overcome the problems created by this combination of factors.

  14. A software upgrade method for micro-electronics medical implants.

    PubMed

    Cao, Yang; Hao, Hongwei; Xue, Lin; Li, Luming; Ma, Bozhi

    2006-01-01

    A software upgrade method for micro-electronics medical implants is designed to enhance the devices' function or renew the software if bugs are found, the software needs updating, or some memory units are disabled. The implants need not be replaced surgically if the faults can be corrected through reprogramming, which reduces patients' pain and effectively improves safety. This paper introduces the software upgrade method using in-application programming (IAP) and emphasizes how to ensure the reliability and stability of the system, especially of the implanted part, while upgrading.

  15. From LPF to eLISA: new approach in payload software

    NASA Astrophysics Data System (ADS)

    Gesa, Ll.; Martin, V.; Conchillo, A.; Ortega, J. A.; Mateos, I.; Torrents, A.; Lopez-Zaragoza, J. P.; Rivas, F.; Lloro, I.; Nofrarias, M.; Sopuerta, CF.

    2017-05-01

    eLISA will be the first observatory in space to explore the Gravitational Universe. It will gather revolutionary information about the dark universe. This requires robust and reliable embedded control software and hardware working together. With the lessons learnt from the LISA Pathfinder payload software as a baseline, we introduce in this short article the key concepts and new approaches that our group is working on in terms of software: multiprocessor support, self-modifying-code strategies, 100% hardware and software monitoring, embedded scripting, and time and space partitioning, among others.

  16. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

    To address the trustworthiness problem of industry software, an idea of constructing an industry software trustworthiness criterion oriented to business is proposed. Based on the triangle model of "trustworthy grade definition-trustworthy evidence model-trustworthy evaluation", business trustworthiness is embodied in the different aspects of the trustworthy triangle model for a specific industry software, the power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.

  17. Development and Evaluation of a Measure of Library Automation.

    ERIC Educational Resources Information Center

    Pungitore, Verna L.

    1986-01-01

    Construct validity and reliability estimates indicate that a study designed to measure the utilization of automation in public and academic libraries was successful in tentatively identifying and measuring three subdimensions of level of automation: quality of hardware, method of software development, and number of automation specialists. Questionnaire…

  18. Quantitative Measures for Software Independent Verification and Validation

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    1996-01-01

    As software is maintained or reused, it undergoes an evolution which tends to increase the overall complexity of the code. To understand the effects of this, we brought in statistics experts and leading researchers in software complexity, reliability, and their interrelationships. These experts' project has resulted in our ability to statistically correlate specific code complexity attributes, in orthogonal domains, to errors found over time in the HAL/S flight software which flies in the Space Shuttle. Although only a prototype-tools experiment, the result of this research appears to be extendable to all other NASA software, given appropriate data similar to that logged for the Shuttle onboard software. Our research has demonstrated that a more complete domain coverage can be mathematically demonstrated with the approach we have applied, thereby ensuring full insight into the cause-and-effect relationship between the complexity of a software system and the fault density of that system. By applying the operational profile we can characterize the dynamic effects of software path complexity under this same approach. We now have the ability to measure specific attributes which have been statistically demonstrated to correlate to increased error probability, and to know which actions to take for each complexity domain. Shuttle software verifiers can now monitor the changes in the software complexity, assess the added or decreased risk of software faults in modified code, and determine necessary corrections. The reports, tool documentation, user's guides, and new approach that have resulted from this research effort represent advances in the state of the art of software quality and reliability assurance. Details describing how to apply this technique to other NASA code are contained in this document.

  19. New Results in Software Model Checking and Analysis

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.

    2010-01-01

    This introductory article surveys new techniques, supported by automated tools, for the analysis of software to ensure reliability and safety. Special focus is on model checking techniques. The article also introduces the five papers that are enclosed in this special journal volume.

  20. Using Penelope to assess the correctness of NASA Ada software: A demonstration of formal methods as a counterpart to testing

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey

    1993-01-01

    Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.

  1. ScriptingRT: A Software Library for Collecting Response Latencies in Online Studies of Cognition

    PubMed Central

    Schubert, Thomas W.; Murteira, Carla; Collins, Elizabeth C.; Lopes, Diniz

    2013-01-01

    ScriptingRT is a new open source tool to collect response latencies in online studies of human cognition. ScriptingRT studies run as Flash applets in enabled browsers. ScriptingRT provides the building blocks of response latency studies, which are then combined with generic Apache Flex programming. Six studies evaluate the performance of ScriptingRT empirically. Studies 1–3 use specialized hardware to measure variance of response time measurement and stimulus presentation timing. Studies 4–6 implement a Stroop paradigm and run it both online and in the laboratory, comparing ScriptingRT to other response latency software. Altogether, the studies show that Flash programs developed in ScriptingRT show a small lag and an increased variance in response latencies. However, this did not significantly influence measured effects: The Stroop effect was reliably replicated in all studies, and the found effects did not depend on the software used. We conclude that ScriptingRT can be used to test response latency effects online. PMID:23805326

  2. Logic Model Checking of Unintended Acceleration Claims in Toyota Vehicles

    NASA Technical Reports Server (NTRS)

    Gamble, Ed

    2012-01-01

    Part of the US Department of Transportation investigation of Toyota sudden unintended acceleration (SUA) involved analysis of the throttle control software. The JPL Laboratory for Reliable Software applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built and some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.

  3. Models and metrics for software management and engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1988-01-01

    This paper attempts to characterize and present a state-of-the-art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.

  4. Evaluating the intra- and interobserver reliability of three-dimensional ultrasound and power Doppler angiography (3D-PDA) for assessment of placental volume and vascularity in the second trimester of pregnancy.

    PubMed

    Jones, Nia W; Raine-Fenning, Nick J; Mousa, Hatem A; Bradley, Eileen; Bugg, George J

    2011-03-01

    Three-dimensional (3-D) power Doppler angiography (3-D-PDA) allows visualisation of Doppler signals within the placenta, and their quantification is possible through the generation of vascular indices by the 4-D View software programme. This study aimed to investigate intra- and interobserver reproducibility of 3-D-PDA analysis of stored datasets at varying gestations, with the ultimate goal of developing a tool for predicting placental dysfunction. Women with an uncomplicated, viable singleton pregnancy were scanned in the 12-, 16- or 20-week gestational age groups. 3-D-PDA datasets covering the whole placenta were analysed using the VOCAL software processing tool. Each volume was analysed by three observers twice in the A plane. Intra- and interobserver reliability was assessed by intraclass correlation coefficients (ICCs) and Bland Altman plots. In each gestational age group, 20 low-risk women were scanned, resulting in 60 datasets in total. The ICC demonstrated a high level of measurement reliability at each gestation, with intraobserver values >0.90 and interobserver values >0.6 for the vascular indices. Bland Altman plots also showed high levels of agreement. Systematic bias was seen at 20 weeks in the vascular indices obtained by different observers. This study demonstrates that 3-D-PDA data can be measured reliably by different observers from stored datasets up to 18 weeks gestation. Measurements become less reliable as gestation advances, with bias between observers evident at 20 weeks. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
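
    A brief sketch of the Bland-Altman limits of agreement mentioned above (mean difference plus or minus 1.96 standard deviations of the differences); the paired observer measurements here are synthetic, not data from the study.

      # Illustrative only: Bland-Altman bias and limits of agreement.
      import numpy as np

      def bland_altman_limits(a, b):
          diff = np.asarray(a, float) - np.asarray(b, float)
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          return bias, bias - half_width, bias + half_width

      obs1 = [10.2, 11.5, 9.8, 12.1, 10.9, 11.2]   # hypothetical index, observer 1
      obs2 = [10.0, 11.9, 9.5, 12.4, 10.6, 11.0]   # same volumes, observer 2
      bias, lower, upper = bland_altman_limits(obs1, obs2)
      print(f"bias {bias:.2f}, limits of agreement [{lower:.2f}, {upper:.2f}]")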

  5. Assessment of lumbosacral kyphosis in spondylolisthesis: a computer-assisted reliability study of six measurement techniques

    PubMed Central

    Glavas, Panagiotis; Mac-Thiong, Jean-Marc; Parent, Stefan; de Guise, Jacques A.

    2008-01-01

    Although lumbosacral kyphosis (LSK) is recognized as an important aspect in the management of spondylolisthesis, there is no consensus on its most reliable and optimal measure. Using custom computer software, four raters evaluated 60 standing lateral radiographs of the lumbosacral spine during two sessions at a 1-week interval. The sample consisted of 20 normal, 20 low-grade and 20 high-grade spondylolisthetic subjects. Six parameters were included for analysis: Boxall’s slip angle, Dubousset’s lumbosacral angle (LSA), the Spinal Deformity Study Group’s (SDSG) LSA, the dysplastic SDSG LSA, sagittal rotation (SR), and the kyphotic Cobb angle (k-Cobb). Intra- and inter-rater reliability for all parameters was assessed using intra-class correlation coefficients (ICCs). Correlations between parameters and slip percentage were evaluated with Pearson coefficients. The intra-rater ICCs for all the parameters ranged between 0.81 and 0.97 and the inter-rater ICCs were between 0.74 and 0.98. All parameters except sagittal rotation showed a medium to large correlation with slip percentage. Dubousset’s LSA and the k-Cobb showed the largest correlations (r = −0.78 and r = −0.50, respectively). SR was associated with the weakest correlation (r = −0.10). All other parameters had medium correlations with percent slip (r = 0.31–0.43). All measurement techniques provided excellent inter- and intra-rater reliability. Dubousset’s LSA showed the strongest correlation with slip grade. This parameter can be used in the clinical setting with PACS software capabilities to assess LSK. A computer-assisted technique is recommended in order to increase the reliability of the measurement of LSK in spondylolisthesis. PMID:19015898
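
    A hedged sketch of a two-way random-effects intraclass correlation, ICC(2,1) in the Shrout-Fleiss sense, computed from an n-subjects by k-raters matrix; the ratings below are synthetic, not measurements from the study.

      # Illustrative only: ICC(2,1) from a subjects x raters matrix.
      import numpy as np

      def icc_2_1(x):
          x = np.asarray(x, float)
          n, k = x.shape
          grand = x.mean()
          row_means = x.mean(axis=1)
          col_means = x.mean(axis=0)
          msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between subjects
          msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between raters
          resid = x - row_means[:, None] - col_means[None, :] + grand
          mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      ratings = [[20, 21, 19], [32, 30, 33], [25, 26, 24], [40, 42, 41], [15, 14, 16]]
      print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")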

  6. Reliability and Maintainability (RAM) Training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

    2000-01-01

    The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

  7. A Study of Alternative Computer Architectures for System Reliability and Software Simplification.

    DTIC Science & Technology

    1981-04-22

    compression. Several known applications of neighborhood processing, such as noise removal, and boundary smoothing, are shown to be special cases of...Processing [21] A small effort was undertaken to implement image array processing at a very low cost. To this end, a standard Qwip Facsimile

  8. Development of KSC program for investigating and generating field failure rates. Volume 1: Summary and overview

    NASA Technical Reports Server (NTRS)

    Bean, E. E.; Bloomquist, C. E.

    1972-01-01

    A summary of the KSC program for investigating the reliability aspects of the ground support activities is presented. An analysis of unsatisfactory condition reports (URCs) and the generation of reliability assessments of components based on the URCs are discussed, along with the design considerations for attaining reliable real-time hardware/software configurations.

  9. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
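
    A hedged sketch of the Universal Generating Function idea used in the paper: a component's u-function is a set of (performance, probability) terms, and a series composition combines terms with the minimum performance and the product of probabilities. The service definitions and demand level are hypothetical.

      # Illustrative only: UGF series composition and demand-satisfaction probability.
      from collections import defaultdict
      from itertools import product

      def compose_series(u1, u2):
          out = defaultdict(float)
          for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
              out[min(g1, g2)] += p1 * p2    # series: limited by the slower component
          return dict(out)

      service_a = {100: 0.95, 0: 0.05}           # performance level -> probability
      service_b = {80: 0.90, 40: 0.08, 0: 0.02}
      system = compose_series(service_a, service_b)
      demand = 50
      reliability = sum(p for g, p in system.items() if g >= demand)
      print(system, f"P(performance >= {demand}) = {reliability:.4f}")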

  10. Issues in NASA Program and Project Management: Focus on Project Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)

    1997-01-01

    Topics addressed include: Planning and scheduling training for working project teams at NASA, overview of project planning and scheduling workshops, project planning at NASA, new approaches to systems engineering, software reliability assessment, and software reuse in wind tunnel control systems.

  11. Robotic Assembly of Truss Structures for Space Systems and Future Research Plans

    NASA Technical Reports Server (NTRS)

    Doggett, William

    2002-01-01

    Many initiatives under study by both the space science and earth science communities require large space systems, i.e., with apertures greater than 15 m or dimensions greater than 20 m. This paper reviews the effort in NASA Langley Research Center's Automated Structural Assembly Laboratory, which laid the foundations for robotic construction of these systems. In the Automated Structural Assembly Laboratory, reliable autonomous assembly and disassembly of an 8-meter planar structure composed of 102 truss elements covered by 12 panels was demonstrated. The paper reviews the hardware and software design philosophy which led to reliable operation during weeks of near-continuous testing. Special attention is given to highlighting the features enhancing assembly reliability.

  12. A high order approach to flight software development and testing

    NASA Technical Reports Server (NTRS)

    Steinbacher, J.

    1981-01-01

    The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.

  13. Healthcare software assurance.

    PubMed

    Cooper, Jason G; Pauley, Keith A

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted.

  14. Healthcare Software Assurance

    PubMed Central

    Cooper, Jason G.; Pauley, Keith A.

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted. PMID:17238324

  15. Using video-annotation software to identify interactions in group therapies for schizophrenia: assessing reliability and associations with outcomes.

    PubMed

    Orfanos, Stavros; Akther, Syeda Ferhana; Abdul-Basit, Muhammad; McCabe, Rosemarie; Priebe, Stefan

    2017-02-10

    Research has shown that interactions in group therapies for people with schizophrenia are associated with a reduction in negative symptoms. However, it is unclear which specific interactions in groups are linked with these improvements. The aims of this exploratory study were to i) develop and test the reliability of using video-annotation software to measure interactions in group therapies in schizophrenia and ii) explore the relationship between interactions in group therapies for schizophrenia and clinically relevant changes in negative symptoms. Video-annotation software was used to annotate interactions from participants selected across nine video-recorded out-patient therapy groups (N = 81). Using the Individual Group Member Interpersonal Process Scale, interactions were coded from participants who demonstrated either a clinically significant improvement (N = 9) or no change (N = 8) in negative symptoms at the end of therapy. Interactions were measured from the first and last sessions of attendance (>25 h of therapy). Inter-rater reliability between two independent raters was measured. Binary logistic regression analysis was used to explore the association between the frequency of interactive behaviors and changes in negative symptoms, assessed using the Positive and Negative Syndrome Scale. Of the 1275 statements that were annotated using ELAN, 1191 (93%) had sufficient audio and visual quality to be coded using the Individual Group Member Interpersonal Process Scale. Rater agreement was high across all interaction categories (>95% average agreement). A higher frequency of self-initiated statements measured in the first session was associated with improvements in negative symptoms. The frequency of questions and giving advice measured in the first session of attendance was associated with improvements in negative symptoms, although this was only a trend. Video-annotation software can be used to reliably identify interactive behaviors in groups for schizophrenia. The results suggest that proactive communicative gestures, as assessed by the video analysis, predict outcomes. Future research should use this novel method in larger and clinically different samples to explore which aspects of therapy facilitate such proactive communication early on in therapy.

  16. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
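
    A hedged sketch of the fault-tree-to-Markov idea described above: a duplex system (two units, one needed, no repair) modelled as a three-state continuous-time Markov chain whose generator is exponentiated to get the state probabilities. The failure rate and mission time are illustrative, and this is not HARP itself.

      # Illustrative only: reliability of a duplex system from a small Markov chain.
      import numpy as np
      from scipy.linalg import expm

      lam = 1e-4                       # failures per hour, per unit (hypothetical)
      # States: 0 = both units up, 1 = one unit up, 2 = system failed (absorbing).
      Q = np.array([[-2 * lam, 2 * lam, 0.0],
                    [0.0,      -lam,    lam],
                    [0.0,       0.0,    0.0]])

      t = 1000.0                       # mission time in hours
      p0 = np.array([1.0, 0.0, 0.0])   # start with both units working
      p_t = p0 @ expm(Q * t)           # state probabilities at time t
      print(f"R({t:.0f} h) = {p_t[0] + p_t[1]:.6f}")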

  17. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    PubMed

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K

    2011-04-01

    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual measurements, while interobserver APAs ranged from 91% to 96% for QMA versus 57% to 63% for digitized manual measurements. The use of QMA software substantially improved the reliability of lumbar intervertebral measurements and the classification of instability based on flexion-extension radiographs.
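    A hedged sketch of how intraclass correlation coefficients of this kind can be computed with the open-source pingouin package is shown below; the long-format data are invented and the specific ICC form reported by the study (Shrout-Fleiss) may differ from the variants printed by the package.

    import pandas as pd
    import pingouin as pg

    # Invented long-format data: each radiograph rated by each observer.
    df = pd.DataFrame({
        "radiograph": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "observer":   ["A", "B", "C"] * 3,
        "rotation":   [4.1, 4.3, 4.0, 7.8, 8.1, 7.9, 2.5, 2.4, 2.7],
    })

    icc = pg.intraclass_corr(data=df, targets="radiograph",
                             raters="observer", ratings="rotation")
    print(icc[["Type", "ICC", "CI95%"]])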

  18. 75 FR 81157 - Version One Regional Reliability Standard for Transmission Operations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    ... processing software should be filed in native applications or print-to-PDF format and not in a scanned format..., Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). \\3\\ NERC designates the version number of a Reliability...

  19. Software Design Improvements. Part 1; Software Benefits and Limitations

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom

    1997-01-01

    Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines, and the number and type of applications are growing at an exponential rate. The size of individual programs has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The question is: how can this hardware and software be made more reliable? Also, how can software quality be improved? What methodology is needed for large and small software products to improve the design, and how can software be verified?

  20. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
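    The point about procedure-dependent results can be illustrated with a small simulation. The sketch below, which is illustrative code and not the authors', fits a random-intercept GLMM to simulated correlated binary data using one of statsmodels' procedures (a variational Bayes fit); other packages or fitting methods may return noticeably different estimates.

    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(0)
    n_subj, n_rep = 100, 5
    subj = np.repeat(np.arange(n_subj), n_rep)
    x = rng.normal(size=n_subj * n_rep)
    b = rng.normal(scale=1.0, size=n_subj)                # subject-level random intercepts
    p = 1 / (1 + np.exp(-(-0.5 + 1.0 * x + b[subj])))
    y = rng.binomial(1, p)
    df = pd.DataFrame({"y": y, "x": x, "subject": subj})

    # One possible fitting procedure; estimates can differ across packages/procedures.
    model = BinomialBayesMixedGLM.from_formula("y ~ x", {"subject": "0 + C(subject)"}, df)
    result = model.fit_vb()
    print(result.summary())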

  1. Reliability and scientific use of a surgical planning software for anterior cervical discectomy and fusion (ACDF).

    PubMed

    Barth, Martin; Weiß, Christel; Brenke, Christopher; Schmieder, Kirsten

    2017-04-01

    Software-based planning of a spinal implant carries the promise of precision and superior results. The purpose of the study was to analyze the measurement reliability, prognostic value, and scientific use of a surgical planning software in patients receiving anterior cervical discectomy and fusion (ACDF). Lateral neutral, flexion, and extension radiographs of patients receiving tailored cages as suggested by the planning software were available for analysis. Differences in vertebral wedging angles and segmental height of all cervical segments were determined at different timepoints using intraclass correlation coefficients (ICC). Cervical lordosis (C2/C7), segmental heights, global, and segmental range of motion (ROM) were determined at different timepoints. Clinical and radiological variables were correlated 12 months after surgery. 282 radiographs of 35 patients with a mean age of 53.1 ± 12.0 years were analyzed. Measurement of segmental height was highly accurate with an ICC close to 1, but angle measurements showed low ICC values. Likewise, the ICCs of the prognosticated values were low. Postoperatively, there was a significant decrease in segmental height (p < 0.0001) and loss of C2/C7 ROM (p = 0.036). ROM of unfused segments also significantly decreased (p = 0.016). High NDI was associated with low subsidence rates. The surgical planning software showed high accuracy in the measurement of height differences and lower accuracy with angle measurements. Both the prognosticated height and angle values were arbitrary. Global ROM and the ROM of the fused and intact segments are restricted after ACDF.

  2. Securing Ground Data System Applications for Space Operations

    NASA Technical Reports Server (NTRS)

    Pajevski, Michael J.; Tso, Kam S.; Johnson, Bryan

    2014-01-01

    The increasing prevalence and sophistication of cyber attacks has prompted the Multimission Ground Systems and Services (MGSS) Program Office at Jet Propulsion Laboratory (JPL) to initiate the Common Access Manager (CAM) effort to protect software applications used in Ground Data Systems (GDSs) at JPL and other NASA Centers. The CAM software provides centralized services and software components used by GDS subsystems to meet access control requirements and ensure data integrity, confidentiality, and availability. In this paper we describe the CAM software; examples of its integration with spacecraft commanding software applications and an information management service; and measurements of its performance and reliability.

  3. Testing of Hand-Held Mine Detection Systems

    DTIC Science & Technology

    2015-01-08

    ITOP 04-2-5208 for guidance on software testing . Testing software is necessary to ensure that safety is designed into the software algorithm, and that...sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact on the test design . Some examples of...overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test

  4. Software Estimation: Developing an Accurate, Reliable Method

    DTIC Science & Technology

    2011-08-01

    Lake, CA ,93555- 6110 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S ACRONYM(S...Activity, the systems engineering team is responsible for system and software requirements. 2 . Process Dashboard is a software planning and tracking tool... CA 93555- 6110 760-939-6989 Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI

  5. Reliability and validity of a novel Kinect-based software program for measuring posture, balance and side-bending.

    PubMed

    Grooten, Wilhelmus Johannes Andreas; Sandberg, Lisa; Ressman, John; Diamantoglou, Nicolas; Johansson, Elin; Rasmussen-Barr, Eva

    2018-01-08

    Clinical examinations are subjective and often show a low validity and reliability. Objective and highly reliable quantitative assessments are available in laboratory settings using 3D motion analysis, but these systems are too expensive to use for simple clinical examinations. Qinematic™ is an interactive movement analysis system based on the Kinect camera and is an easy-to-use clinical measurement system for assessing posture, balance and side-bending. The aim of the study was to test the test-retest reliability and construct validity of Qinematic™ in a healthy population, and to calculate the minimal clinical differences for the variables of interest. A further aim was to identify the discriminative validity of Qinematic™ in people with low-back pain (LBP). We performed a test-retest reliability study (n = 37) with around 1 week between the occasions, a construct validity study (n = 30) in which Qinematic™ was tested against a 3D motion capture system, and a discriminative validity study, in which a group of people with LBP (n = 20) was compared to healthy controls (n = 17). We tested a large range of psychometric properties of 18 variables in three sections: posture (head and pelvic position, weight distribution), balance (sway area and velocity in single- and double-leg stance), and side-bending. The majority of the variables in the posture and balance sections showed poor/fair reliability (ICC < 0.4) and poor/fair validity (Spearman < 0.4), with significant differences between occasions and between Qinematic™ and the 3D motion capture system. In the clinical study, Qinematic™ did not differ between people with LBP and healthy controls for these variables. For one variable, side-bending to the left, there was excellent reliability (ICC = 0.898), excellent validity (r = 0.943), and Qinematic™ could differentiate between LBP and healthy individuals (p = 0.012). This paper shows that a novel software program (Qinematic™) based on the Kinect camera for measuring balance, posture and side-bending has poor psychometric properties, indicating that the variables on balance and posture should not be used for monitoring individual changes over time or in research. Future research on the dynamic tasks of Qinematic™ is warranted.

  6. Views of Health Information Management Staff on the Medical Coding Software in Mashhad, Iran.

    PubMed

    Kimiafar, Khalil; Hemmati, Fatemeh; Banaye Yazdipour, Alireza; Sarbaz, Masoumeh

    2018-01-01

    Systematic evaluation of Health Information Technology (HIT) and users' views leads to the modification and development of these technologies in accordance with their needs. The purpose of this study was to investigate the views of Health Information Management (HIM) staff on the quality of medical coding software. A descriptive cross-sectional study was conducted between May and July 2016 in 26 hospitals (academic and non-academic) in Mashhad, north-eastern Iran. The study population consisted of the chairs of HIM departments and medical coders (58 staff). Data were collected through a valid and reliable questionnaire. The data were analyzed using SPSS version 16.0. In the staff's view, among the advantages of coding software, reducing coding time had the highest average score (mean = 3.82) while cost reduction had the lowest (mean = 3.20). Meanwhile, concern about losing job opportunities was the least important disadvantage (15.5%) to the use of coding software. In general, the results of this study showed that coding software has deficiencies in some cases. Designers and developers of health information coding software should pay more attention to technical aspects, in-work reminders, help in selecting proper codes through access to coding rules, maintenance services, links to other relevant databases and the possibility of providing brief and detailed reports in different formats.

  7. Reliability program requirements for aeronautical and space system contractors

    NASA Technical Reports Server (NTRS)

    1987-01-01

    General reliability program requirements for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems including critical ground support equipment are prescribed. The reliability program requirements require (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.

  8. Electronic Health Record for Intensive Care based on Usual Windows Based Software.

    PubMed

    Reper, Arnaud; Reper, Pascal

    2015-08-01

    In Intensive Care Units, the amount of data to be processed for patient care, the turnover of the patients, and the necessity for reliability and for review processes indicate the use of Patient Data Management Systems (PDMS) and electronic health records (EHR). To respond to the needs of an Intensive Care Unit and not to be locked in with proprietary software, we developed an EHR based on usual software and components. The software was designed as a client-server architecture running on the Windows operating system and powered by the Access database system. The client software was developed using the Visual Basic interface library. The application offers the users the following functions: medical note capture, observations and treatments, nursing charts with administration of medications, scoring systems for classification, and possibilities to encode medical activities for billing processes. Since its deployment in September 2004, the EHR has been used in the care of more than five thousand patients with the expected software reliability and has facilitated data management and review processes. Communications with other medical software were not developed from the start, and are realized by the use of the communication engine's basic functionalities. Further upgrades of the system will include multi-platform support, use of a typed language with static analysis, and a configurable interface. The developed system based on usual software components was able to respond to the medical needs of the local ICU environment. The use of Windows for development allowed us to customize the software to the preexisting organization and contributed to the acceptability of the whole system.

  9. Challenges in Managing Trustworthy Large-scale Digital Science

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs including coupled models and ensembles, data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, and far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to manage the information reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can be productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, system software stacks and libraries, and the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on access to the updated products and on the reliability with which previous outcomes remain relevant and can be updated with new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  10. Guidance and Control Software Project Data - Volume 1: Planning Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the planning documents from the GCS project. Volume 1 contains five appendices: A. Plan for Software Aspects of Certification for the Guidance and Control Software Project; B. Software Development Standards for the Guidance and Control Software Project; C. Software Verification Plan for the Guidance and Control Software Project; D. Software Configuration Management Plan for the Guidance and Control Software Project; and E. Software Quality Assurance Activities.

  11. [The study of noninvasive ventilator impeller based on ANSYS].

    PubMed

    Hu, Zhaoyan; Lu, Pan; Xie, Haiming; Zhou, Yaxu

    2011-06-01

    An impeller plays a significant role in the non-invasive ventilator. This paper presents a model of an impeller for a noninvasive ventilator established with the SolidWorks software. The feasibility of the model was studied based on ANSYS. The stress and strain of the impeller were then analyzed under external loads. The results of the analysis provided verification for a reliable impeller design.

  12. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
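    The paper's actual pipeline is not reproduced here, but the core idea of applying astronomical source detection to a thermal-infrared frame can be sketched with the SEP (Source Extractor) Python library; the synthetic frame, threshold, and blob parameters below are assumptions made purely for illustration.

    import numpy as np
    import sep

    # Build a synthetic thermal frame: cool, noisy background plus one warm "animal".
    rng = np.random.default_rng(0)
    frame = rng.normal(20.0, 0.5, size=(240, 320)).astype(np.float32)
    yy, xx = np.mgrid[0:240, 0:320]
    frame += 6.0 * np.exp(-(((xx - 160) ** 2 + (yy - 120) ** 2) / (2 * 4.0 ** 2)))

    bkg = sep.Background(frame)                 # estimate the slowly varying background
    signal = frame - bkg.back()
    sources = sep.extract(signal, thresh=3.0, err=bkg.globalrms)

    print(f"Detected {len(sources)} warm objects")
    for s in sources:
        print(f"  x={s['x']:.1f}  y={s['y']:.1f}  flux={s['flux']:.1f}")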

  13. Tools Ensure Reliability of Critical Software

    NASA Technical Reports Server (NTRS)

    2012-01-01

    In November 2006, after attempting to make a routine maneuver, NASA's Mars Global Surveyor (MGS) reported unexpected errors. The onboard software switched to backup resources, and a 2-day lapse in communication took place between the spacecraft and Earth. When a signal was finally received, it indicated that MGS had entered safe mode, a state of restricted activity in which the computer awaits instructions from Earth. After more than 9 years of successful operation gathering data and snapping pictures of Mars to characterize the planet's land and weather, communication between MGS and Earth suddenly stopped. Months later, a report from NASA's internal review board found the spacecraft's battery failed due to an unfortunate sequence of events. Updates to the spacecraft's software, which had taken place months earlier, were written to the wrong memory address in the spacecraft's computer. In short, the mission ended because of a software defect. Over the last decade, spacecraft have become increasingly reliant on software to carry out mission operations. In fact, the next mission to Mars, the Mars Science Laboratory, will rely on more software than all earlier missions to Mars combined. According to Gerard Holzmann, manager at the Laboratory for Reliable Software (LaRS) at NASA's Jet Propulsion Laboratory (JPL), even the fault protection systems on a spacecraft are mostly software-based. For reasons like these, well-functioning software is critical for NASA. In the same year as the failure of MGS, Holzmann presented a new approach to critical software development to help reduce risk and provide consistency. He proposed The Power of 10: Rules for Developing Safety-Critical Code, which is a small set of rules that can easily be remembered, clearly relate to risk, and allow compliance to be verified. The reaction at JPL was positive, and developers in the private sector embraced Holzmann's ideas.

  14. Data collection and evaluation for experimental computer science research

    NASA Technical Reports Server (NTRS)

    Zelkowitz, Marvin V.

    1983-01-01

    The Software Engineering Laboratory has been monitoring software development at NASA Goddard Space Flight Center since 1976. The data collection activities of the Laboratory and some of the difficulties of obtaining reliable data are described. In addition, the application of this data collection process to a current prototyping experiment is reviewed.

  15. Software metrics: The quantitative impact of four factors on work rates experienced during software development. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Judge, R. W.

    1981-01-01

    A model of a software development process is described. The software development process is seen to consist of a sequence of activities, such as 'program design' and 'module development' (or coding). A manpower estimate is made by multiplying code size by the rates (man months per thousand lines of code) for each of the activities relevant to the particular case of interest and summing up the results. The effect of four objectively determinable factors (organization, software product type, computer type, and code type) on productivity values for each of nine principal software development activities was assessed. Four factors were identified which account for 39% of the observed productivity variation.
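    A worked example of this estimation scheme, with purely illustrative activity rates, might look like the following.

    # Effort is the sum, over development activities, of code size (KLOC) times an
    # activity work rate in staff-months per KLOC. The rates below are invented.
    def estimate_effort(kloc: float, rates_mm_per_kloc: dict[str, float]) -> float:
        """Return total staff-months for a product of `kloc` thousand lines of code."""
        return sum(kloc * rate for rate in rates_mm_per_kloc.values())

    rates = {                     # assumed activity rates (staff-months / KLOC)
        "program design":     0.6,
        "module development": 1.2,
        "integration test":   0.8,
    }
    print(estimate_effort(25.0, rates))   # 25 KLOC -> 65.0 staff-months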

  16. Automated verification of flight software. User's manual

    NASA Technical Reports Server (NTRS)

    Saib, S. H.

    1982-01-01

    AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and the reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.

  17. Establishing cephalometric landmarks for the translational study of Le Fort-based facial transplantation in Swine: enhanced applications using computer-assisted surgery and custom cutting guides.

    PubMed

    Santiago, Gabriel F; Susarla, Srinivas M; Al Rakan, Mohammed; Coon, Devin; Rada, Erin M; Sarhane, Karim A; Shores, Jamie T; Bonawitz, Steven C; Cooney, Damon; Sacks, Justin; Murphy, Ryan J; Fishman, Elliot K; Brandacher, Gerald; Lee, W P Andrew; Liacouras, Peter; Grant, Gerald; Armand, Mehran; Gordon, Chad R

    2014-05-01

    Le Fort-based, maxillofacial allotransplantation is a reconstructive alternative gaining clinical acceptance. However, the vast majority of single-jaw transplant recipients demonstrate less-than-ideal skeletal and dental relationships, with suboptimal aesthetic harmony. The purpose of this study was to investigate reproducible cephalometric landmarks in a large-animal model, where refinement of computer-assisted planning, intraoperative navigational guidance, translational bone osteotomies, and comparative surgical techniques could be performed. Cephalometric landmarks that could be translated into the human craniomaxillofacial skeleton, and that would remain reliable following maxillofacial osteotomies with midfacial alloflap inset, were sought on six miniature swine. Le Fort I- and Le Fort III-based alloflaps were harvested in swine with osteotomies, and all alloflaps were either autoreplanted or transplanted. Cephalometric analyses were performed on lateral cephalograms preoperatively and postoperatively. Critical cephalometric data sets were identified with the assistance of surgical planning and virtual prediction software and evaluated for reliability and translational predictability. Several pertinent landmarks and human analogues were identified, including pronasale, zygion, parietale, gonion, gnathion, lower incisor base, and alveolare. Parietale-pronasale-alveolare and parietale-pronasale-lower incisor base were found to be reliable correlates of sellion-nasion-A point angle and sellion-nasion-B point angle measurements in humans, respectively. There is a set of reliable cephalometric landmarks and measurement angles pertinent for use within a translational large-animal model. These craniomaxillofacial landmarks will enable development of novel navigational software technology, improve cutting guide designs, and facilitate exploration of new avenues for investigation and collaboration.

  18. Instrumented Static and Dynamic Balance Assessment after Stroke Using Wii Balance Boards: Reliability and Association with Clinical Tests

    PubMed Central

    Bower, Kelly J.; McGinley, Jennifer L.; Miller, Kimberly J.; Clark, Ross A.

    2014-01-01

    Background and Objectives The Wii Balance Board (WBB) is a globally accessible device that shows promise as a clinically useful balance assessment tool. Although the WBB has been found to be comparable to a laboratory-grade force platform for obtaining centre of pressure data, it has not been comprehensively studied in clinical populations. The aim of this study was to investigate the measurement properties of tests utilising the WBB in people after stroke. Methods Thirty individuals who were more than three months post-stroke and able to stand unsupported were recruited from a single outpatient rehabilitation facility. Participants performed standardised assessments incorporating the WBB and customised software (static stance with eyes open and closed, static weight-bearing asymmetry, dynamic mediolateral weight shifting and dynamic sit-to-stand) in addition to commonly employed clinical tests (10 Metre Walk Test, Timed Up and Go, Step Test and Functional Reach) on two testing occasions one week apart. Test-retest reliability and construct validity of the WBB tests were investigated. Results All WBB-based outcomes were found to be highly reliable between testing occasions (ICC  = 0.82 to 0.98). Correlations were poor to moderate between WBB variables and clinical tests, with the strongest associations observed between task-related activities, such as WBB mediolateral weight shifting and the Step Test. Conclusions The WBB, used with customised software, is a reliable and potentially useful tool for the assessment of balance and weight-bearing asymmetry following stroke. Future research is recommended to further investigate validity and responsiveness. PMID:25541939

  19. Instrumented static and dynamic balance assessment after stroke using Wii Balance Boards: reliability and association with clinical tests.

    PubMed

    Bower, Kelly J; McGinley, Jennifer L; Miller, Kimberly J; Clark, Ross A

    2014-01-01

    The Wii Balance Board (WBB) is a globally accessible device that shows promise as a clinically useful balance assessment tool. Although the WBB has been found to be comparable to a laboratory-grade force platform for obtaining centre of pressure data, it has not been comprehensively studied in clinical populations. The aim of this study was to investigate the measurement properties of tests utilising the WBB in people after stroke. Thirty individuals who were more than three months post-stroke and able to stand unsupported were recruited from a single outpatient rehabilitation facility. Participants performed standardised assessments incorporating the WBB and customised software (static stance with eyes open and closed, static weight-bearing asymmetry, dynamic mediolateral weight shifting and dynamic sit-to-stand) in addition to commonly employed clinical tests (10 Metre Walk Test, Timed Up and Go, Step Test and Functional Reach) on two testing occasions one week apart. Test-retest reliability and construct validity of the WBB tests were investigated. All WBB-based outcomes were found to be highly reliable between testing occasions (ICC  = 0.82 to 0.98). Correlations were poor to moderate between WBB variables and clinical tests, with the strongest associations observed between task-related activities, such as WBB mediolateral weight shifting and the Step Test. The WBB, used with customised software, is a reliable and potentially useful tool for the assessment of balance and weight-bearing asymmetry following stroke. Future research is recommended to further investigate validity and responsiveness.

  20. Optimum Component Design in N-Stage Series Systems to Maximize the Reliability Under Budget Constraint

    DTIC Science & Technology

    2003-03-01

    27 2.8.5 Marginal Analysis Method...Figure 11 Improved Configuration of Figure 10; Increases Basic System Reliability..... 26 Figure 12 Example of marginal analysis ...View of Main Book of Software ............................................................... 51 Figure 20 The View of Data Worksheet

  1. Applicability of SREM to the Verification of Management Information System Software Requirements. Volume I.

    DTIC Science & Technology

    1981-04-30

    However, SREM was not designed to harmonize these kinds of problems. Rather, it is a tool to investigate the logic of the processing specified in the... design . Supoorting programs were also conducted to perform basic research into such areas as software reliability, static and dynamic validation techniques...development. 0 Maintain requirements development independent of the target machine and the eventual software design . 0. Allow for easy response to

  2. An Open Avionics and Software Architecture to Support Future NASA Exploration Missions

    NASA Technical Reports Server (NTRS)

    Schlesinger, Adam

    2017-01-01

    The presentation describes an avionics and software architecture that has been developed through NASA's Advanced Exploration Systems (AES) division. The architecture is open-source, highly reliable with fault tolerance, and utilizes standard capabilities and interfaces, which are scalable and customizable to support future exploration missions. Specific focus areas of discussion will include command and data handling, software, human interfaces, communication and wireless systems, and systems engineering and integration.

  3. Component Verification and Certification in NASA Missions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Penix, John; Norvig, Peter (Technical Monitor)

    2001-01-01

    Software development for NASA missions is a particularly challenging task. Missions are extremely ambitious scientifically, have very strict time frames, and must be accomplished with a maximum degree of reliability. Verification technologies must therefore be pushed far beyond their current capabilities. Moreover, reuse and adaptation of software architectures and components must be incorporated in software development within and across missions. This paper discusses NASA applications that we are currently investigating from these perspectives.

  4. Using image J to document healing in ulcers of the foot in diabetes.

    PubMed

    Jeffcoate, William J; Musgrove, Alison J; Lincoln, Nadina B

    2017-12-01

    The aim of the study was to assess the reliability of measuring the cross-sectional area of diabetic foot ulcers using Image J software. The inter- and intra-rater reliability of ulcer area measures was assessed using digital images of acetate tracings of ulcers of the foot affecting 31 participants in an off-loading randomised trial. Observations were made independently by five specialist podiatrists, one of whom was experienced in the use of Image J software and educated the other four in a single session. The mean (±SD) of the mean cross-sectional areas of the 31 ulcers determined independently by the five observers was 1386.7 (±22.7) mm². The correlation between all pairs of observers was >0.99 (P < 0.001). There was no significant difference overall between the five observers (ANOVA F = 1.538; P = 0.165) and no difference between any two (paired samples test t = -0.787 to 1.396; P ≥ 0.088). The correlation between the areas determined by two observers on two occasions separated by not less than 1 week was very high (0.997 and 0.999; P < 0.001 and < 0.001, respectively). The inter- and intra-rater reliability of the Image J software is very high, with no evidence of a difference either between or within observers. This technique should be considered for both research and clinical use in order to document changes in ulcer area. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  5. Reliable High Performance Peta- and Exa-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally, Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.

  6. Frame rate required for speckle tracking echocardiography: A quantitative clinical study with open-source, vendor-independent software.

    PubMed

    Negoita, Madalina; Zolgharni, Massoud; Dadkho, Elham; Pernigo, Matteo; Mielewczik, Michael; Cole, Graham D; Dhutia, Niti M; Francis, Darrel P

    2016-09-01

    To determine the optimal frame rate at which reliable heart wall velocities can be assessed by speckle tracking. Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution where reliable results can be obtained. 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking, up to about 40 frames/s, above which there is little further increase in measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
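    The frame-rate effect described above can be illustrated with a small simulation: a synthetic velocity pulse is averaged over progressively longer inter-frame intervals and the measured peak drops. The waveform and numbers below are invented, not patient data.

    import numpy as np

    t = np.linspace(0.0, 1.0, 2000)                       # one cardiac cycle, seconds
    velocity = 8.0 * np.exp(-((t - 0.15) / 0.03) ** 2)    # cm/s, synthetic systolic peak

    for fps in (100, 60, 40, 20, 10):
        n_frames = int(fps * 1.0)
        # average the true velocity within each inter-frame interval
        edges = np.linspace(0, len(t), n_frames + 1, dtype=int)
        sampled = [velocity[a:b].mean() for a, b in zip(edges[:-1], edges[1:])]
        print(f"{fps:3d} frames/s -> measured peak {max(sampled):.2f} cm/s "
              f"(true peak {velocity.max():.2f})")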

  7. The stability and validity of automated vocal analysis in preverbal preschoolers with autism spectrum disorder.

    PubMed

    Woynaroski, Tiffany; Oller, D Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul

    2017-03-01

    Theory and research suggest that vocal development predicts "useful speech" in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently "in development" and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. Autism Res 2017, 10: 508-519. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  8. Reliability of simulated robustness testing in fast liquid chromatography, using state-of-the-art column technology, instrumentation and modelling software.

    PubMed

    Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs

    2014-02-01

    The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50 × 2.1 mm, packed with sub-2 μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE-based predictions. We have demonstrated that the reliability of predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the retention time relative errors were <1.0%, while the predicted critical resolution errors ranged between 6.9% and 17.2%. Because the simulated robustness testing requires significantly less experimental work than the DoE-based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, the column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2 μm particles. Again, thanks to the modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min), by proper adjustments of variables. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Comparison of two methods for cardiac output measurement in critically ill patients.

    PubMed

    Saraceni, E; Rossi, S; Persona, P; Dan, M; Rizzi, S; Meroni, M; Ori, C

    2011-05-01

    The aim of recent haemodynamic monitoring has been to obtain continuous and reliable measures of cardiac output (CO) and indices of preload responsiveness. Many of these methods are based on arterial pressure waveform analysis. The aim of our study was to assess the accuracy of CO measurements obtained by FloTrac/Vigileo, software version 1.07 and the new version 1.10 (Edwards Lifesciences LLC, Irvine, CA, USA), compared with CO measurements obtained by bolus thermodilution via pulmonary artery catheterization (PAC) in the intensive care setting. In 21 critically ill patients (enrolled in two University Hospitals) requiring invasive haemodynamic monitoring, PAC and FloTrac/Vigileo transducers connected to the arterial pressure line were placed. Simultaneous measurements of CO by the two methods (FloTrac/Vigileo and thermodilution) were obtained three times a day for 3 consecutive days, when possible. The level of concordance between the two methods was assessed by the procedure suggested by Bland and Altman. One hundred and forty-one pairs of measurements (provided by thermodilution and by both 1.07 and 1.10 FloTrac/Vigileo versions) were obtained in 21 patients (seven of them were trauma patients) with a mean (sd) age of 59 (16) yr. The Pearson product moment coefficient was 0.62 (P<0.001). The bias was -0.18 litre min(-1). The limits of agreement were 4.54 and -4.90 litre min(-1), respectively. Our data show a poor level of concordance between measures provided by the two methods. We found an underestimation of CO values measured with the 1.07 software version of FloTrac for supranormal values of CO. The new software (1.10) has been improved in order to correct this bias; however, its reliability is still poor. On the basis of our data, we can therefore conclude that both software versions of FloTrac/Vigileo still did not provide reliable estimates of CO in our intensive care unit setting.
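    A minimal sketch of the Bland-Altman calculation used above (bias and limits of agreement derived from paired differences) is shown below with made-up cardiac output values.

    import numpy as np

    co_thermodilution = np.array([4.8, 6.1, 5.5, 7.2, 3.9, 6.8, 5.0, 4.4])  # L/min (invented)
    co_pulse_contour  = np.array([5.1, 5.7, 5.9, 6.4, 4.2, 7.5, 4.6, 4.9])  # L/min (invented)

    diff = co_pulse_contour - co_thermodilution
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)        # half-width of the limits of agreement
    print(f"bias = {bias:+.2f} L/min, limits of agreement = "
          f"[{bias - loa:.2f}, {bias + loa:.2f}] L/min")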

  10. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1984-01-01

    The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in Ada so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. The primary activities are: (1) continued development and testing of our fault-tolerant Ada testbed; (2) consideration of desirable language changes to allow Ada to provide useful semantics for failure; and (3) analysis of the inadequacies of existing software fault tolerance strategies.

  11. Automatic Certification of Kalman Filters for Reliable Code Generation

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian

    2005-01-01

    AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
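    AUTOFILTER's generated code is not shown here; the following is a generic, hand-written sketch of the kind of estimator it targets, a one-dimensional constant-velocity Kalman filter, with assumed noise covariances and synthetic measurements.

    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                  # we observe position only
    Q = 1e-3 * np.eye(2)                        # assumed process noise covariance
    R = np.array([[0.05]])                      # assumed measurement noise covariance

    x = np.zeros(2)                             # initial state estimate
    P = np.eye(2)                               # initial estimate covariance

    def kalman_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(1)
    for k in range(50):
        true_pos = 0.5 * k * dt                 # target moving at 0.5 units/s
        z = np.array([true_pos + rng.normal(scale=0.2)])
        x, P = kalman_step(x, P, z)
    print("estimated position, velocity:", x)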

  12. A six-legged rover for planetary exploration

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Krotkov, Eric; Bares, John

    1991-01-01

    To survive the rigors and isolation of planetary exploration, an autonomous rover must be competent, reliable, and efficient. This paper presents the Ambler, a six-legged robot featuring orthogonal legs and a novel circulating gait, which has been designed for traversal of rugged, unknown environments. An autonomous software system that integrates perception, planning, and real-time control has been developed to walk the Ambler through obstacle strewn terrain. The paper describes the information and control flow of the walking system, and how the design of the mechanism and software combine to achieve competent walking, reliable behavior in the face of unexpected failures, and efficient utilization of time and power.

  13. Enhancing E-Health Information Systems with Agent Technology

    PubMed Central

    Nguyen, Minh Tuan; Fuhrer, Patrik; Pasquier-Rocha, Jacques

    2009-01-01

    Agent Technology is an emerging and promising research area in software technology, which increasingly contributes to the development of value-added information systems for large healthcare organizations. Through the MediMAS prototype, resulting from a case study conducted at a local Swiss hospital, this paper aims at presenting the advantages of reinforcing such a complex E-health man-machine information organization with software agents. The latter will work on behalf of human agents, taking care of routine tasks, and thus increasing the speed, consistency, and ultimately the reliability of the information exchanges. We further claim that the modeling of the software agent layer can be methodically derived from the actual "classical" laboratory organization and practices, as well as seamlessly integrated with the existing information system. PMID:19096509

  14. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.

  15. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.

    PubMed

    Chen, Xinfeng; Li, Haohong

    2017-01-01

    Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that was an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, the ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. The ArControl also allowed the behavioral schedule to be entirely stored in and operated on the Arduino chip. This made the ArControl a genuine, real-time system with high temporal resolution (<1 ms). We tested the ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that the ArControl was an adaptive and reliable system suitable for behavioral research.

  16. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance

    PubMed Central

    Chen, Xinfeng; Li, Haohong

    2017-01-01

    Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that was an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, the ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. The ArControl also allowed the behavioral schedule to be entirely stored in and operated on the Arduino chip. This made the ArControl a genuine, real-time system with high temporal resolution (<1 ms). We tested the ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that the ArControl was an adaptive and reliable system suitable for behavioral research. PMID:29321735

  17. Packaging Software Assets for Reuse

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Marshall, J. J.; Downs, R. R.

    2010-12-01

    The reuse of existing software assets such as code, architecture, libraries, and modules in current software and systems development projects can provide many benefits, including reduced costs, in time and effort, and increased reliability. Many reusable assets are currently available in various online catalogs and repositories, usually broken down by disciplines such as programming language (Ibiblio for Maven/Java developers, PyPI for Python developers, CPAN for Perl developers, etc.). The way these assets are packaged for distribution can play a role in their reuse - an asset that is packaged simply and logically is typically easier to understand, install, and use, thereby increasing its reusability. A well-packaged asset has advantages in being more reusable and thus more likely to provide benefits through its reuse. This presentation will discuss various aspects of software asset packaging and how they can affect the reusability of the assets. The characteristics of well-packaged software will be described. A software packaging domain model will be introduced, and some existing packaging approaches examined. An example case study of a Reuse Enablement System (RES), currently being created by near-term Earth science decadal survey missions, will provide information about the use of the domain model. Awareness of these factors will help software developers package their reusable assets so that they can provide the most benefits for software reuse.

  18. Software Architecture of Sensor Data Distribution In Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo

    2006-01-01

    Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade and hardware failures can cause: a) Data errors; b) Out of sequence packets; c) Duplicate packets; and d) Drop out periods (when node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure that meets the challenges of a distributed ad hoc network in a difficult environment and presents the results of actual field experiments testing the principles and actual code developed.
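    One mitigation implied by the list of impairments above is receiver-side de-duplication and re-ordering of sequence-numbered sensor packets; the sketch below is an illustrative implementation, not the project's flight code.

    import heapq

    class SensorStreamReassembler:
        def __init__(self):
            self.next_seq = 0
            self.seen = set()
            self.pending = []        # min-heap of (sequence number, payload)

        def receive(self, seq: int, payload: bytes) -> list[bytes]:
            """Accept one packet; return payloads now deliverable in order."""
            if seq < self.next_seq or seq in self.seen:
                return []            # duplicate or already delivered
            self.seen.add(seq)
            heapq.heappush(self.pending, (seq, payload))
            ready = []
            while self.pending and self.pending[0][0] == self.next_seq:
                _, p = heapq.heappop(self.pending)
                ready.append(p)
                self.next_seq += 1
            return ready

    r = SensorStreamReassembler()
    for seq, data in [(0, b"t=0"), (2, b"t=2"), (2, b"t=2"), (1, b"t=1")]:
        print(seq, [d.decode() for d in r.receive(seq, data)])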

  19. Multiple reaction monitoring-ion pair finder: a systematic approach to transform nontargeted mode to pseudotargeted mode for metabolomics study based on liquid chromatography-mass spectrometry.

    PubMed

    Luo, Ping; Dai, Weidong; Yin, Peiyuan; Zeng, Zhongda; Kong, Hongwei; Zhou, Lina; Wang, Xiaolin; Chen, Shili; Lu, Xin; Xu, Guowang

    2015-01-01

    Pseudotargeted metabolic profiling is a novel strategy combining the advantages of both targeted and untargeted methods. The strategy obtains metabolites and their product ions from quadrupole time-of-flight (Q-TOF) MS by information-dependent acquisition (IDA) and then picks targeted ion pairs and measures them on a triple-quadrupole MS by multiple reaction monitoring (MRM). The picking of ion pairs from thousands of candidates is the most time-consuming step of the pseudotargeted strategy. Herein, a systematic and automated approach and software (MRM-Ion Pair Finder) were developed to acquire characteristic MRM ion pairs by precursor ion alignment, MS(2) spectrum extraction and reduction, characteristic product ion selection, and ion fusion. To test the reliability of the approach, a mixture of 15 metabolite standards was first analyzed; the representative ion pairs were correctly picked out. Then, pooled serum samples were further studied, and the results were confirmed by manual selection. Finally, a comparison with commercial peak alignment software was performed, and good characteristic ion coverage of metabolites was obtained. As a proof of concept, the proposed approach was applied to a metabolomics study of liver cancer; 854 metabolite ion pairs were defined in the positive ion mode from serum. Our approach provides a reliable, high-throughput method for acquiring MRM ion pairs for pseudotargeted metabolomics with improved metabolite coverage and facilitates more reliable biomarker discovery.
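
    As a rough, hypothetical sketch of the core selection step described above (not the published MRM-Ion Pair Finder code), the snippet below picks one characteristic product ion per aligned precursor: the most intense fragment that is not too close in m/z to the precursor itself. The data layout, the 10 Da exclusion window, and the function names are assumptions made for the example.

      # Sketch of characteristic product-ion selection for MRM ion pairs.
      # Each precursor has an MS/MS spectrum given as (product_mz, intensity) pairs.

      def pick_ion_pairs(precursors, min_mz_gap=10.0):
          """precursors: dict {precursor_mz: [(product_mz, intensity), ...]}.
          Returns a list of (precursor_mz, product_mz) MRM transitions."""
          pairs = []
          for pre_mz, spectrum in precursors.items():
              # drop product ions too close to the precursor (often uninformative)
              candidates = [(mz, i) for mz, i in spectrum if pre_mz - mz > min_mz_gap]
              if not candidates:
                  continue
              best_mz, _ = max(candidates, key=lambda p: p[1])  # most intense fragment
              pairs.append((round(pre_mz, 4), round(best_mz, 4)))
          return pairs

      if __name__ == "__main__":
          demo = {180.0634: [(163.0, 1200.0), (145.0, 300.0), (175.0, 50.0)]}
          print(pick_ion_pairs(demo))   # [(180.0634, 163.0)]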

  20. Numerical aerodynamic simulation facility feasibility study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Three major issues were examined in the feasibility study. First, the ability of the proposed system architecture to support the anticipated workload was evaluated. Second, the throughput of the computational engine (the flow model processor) was studied using real application programs. Third, the availability, reliability, and maintainability of the system were modeled. The evaluations were based on the baseline systems. The results show that the implementation of the Numerical Aerodynamic Simulation Facility, in the form considered, would indeed be a feasible project with an acceptable level of risk. The technology required (both hardware and software) either already exists or, in the case of a few parts, is expected to be announced this year. Facets of the work described include the hardware configuration, software, user language, and fault tolerance.

  1. Open source electronic health record and patient data management system for intensive care.

    PubMed

    Massaut, Jacques; Reper, Pascal

    2008-01-01

    In Intensive Care Units, the amount of data to be processed for patient care, the turnover of the patients, and the need for reliability and for review processes indicate the use of Patient Data Management Systems (PDMS) and electronic health records (EHR). To respond to the needs of an Intensive Care Unit and not to be locked in with proprietary software, we developed a PDMS and EHR based on open source software and components. The software was designed as a client-server architecture running on the Linux operating system and powered by the PostgreSQL database system. The client software was developed in C using the GTK interface library. The application offers users the following functions: capture of medical notes, observations, and treatments; nursing charts with administration of medications; scoring systems for classification; and the possibility to encode medical activities for billing processes. Since its deployment in February 2004, the PDMS has been used in the care of more than three thousand patients with the expected software reliability and has facilitated data management and review processes. Communication with other medical software was not developed from the start and is realized through the use of the Mirth HL7 communication engine. Further upgrades of the system will include multi-platform support, use of a typed language with static analysis, and a configurable interface. The developed system, based on open source software components, was able to respond to the medical needs of the local ICU environment. The use of OSS for development allowed us to customize the software to the preexisting organization and contributed to the acceptability of the whole system.

  2. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    PubMed

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae

    Reliability testing was performed on the software of the Shutdown System (SDS) Computers for Wolsong Nuclear Power Plant Units 2, 3 and 4. Randomly generated test profiles were applied to the SDS Computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.
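
    The testing pattern described here, randomly generated profiles applied to the implementation and checked against an oracle's predictions, can be sketched generically as follows. The trip-logic functions, parameter ranges, and limits are invented placeholders, not the actual SDS software or test profiles.

      # Generic random-profile testing against an oracle (illustrative only).
      import random

      def oracle_trip(temperature, pressure):
          """Reference (oracle) trip logic: trip if either parameter exceeds its limit."""
          return temperature > 310.0 or pressure > 15.5

      def implementation_trip(temperature, pressure):
          """Stand-in for the software under test."""
          return temperature > 310.0 or pressure > 15.5

      def run_campaign(n_profiles=10000, seed=42):
          rng = random.Random(seed)
          failures = []
          for _ in range(n_profiles):
              t = rng.uniform(280.0, 340.0)
              p = rng.uniform(10.0, 18.0)
              if implementation_trip(t, p) != oracle_trip(t, p):
                  failures.append((t, p))     # record any discrepancy with the oracle
          return failures

      if __name__ == "__main__":
          print("discrepancies:", len(run_campaign()))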

  4. beta-Aminoalcohols as Potential Reactivators of Aged Sarin-/Soman-Inhibited Acetylcholinesterase

    DTIC Science & Technology

    2017-02-08

    This approach includes high-quality quantum mechanical/molecular mechanical calculations, providing reliable reactivation steps and energetics...I. V. Khavrutskii Department of Defense Biotechnology High Performance Computing Software Applications Institute Telemedicine and Advanced...b] Dr. A. Wallqvist Department of Defense Biotechnology High Performance Computing Software Applications Institute Telemedicine and Advanced

  5. CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 11

    DTIC Science & Technology

    2006-11-01

    7. Wallace, Delores R. Practical Software Reliability Modeling. Proc. of the 26th Annual NASA Goddard Software Engineering Workshop, Nov. 2001...STAR WARS TO STAR TREK To Request Back Issues on Topics Not Listed Above, Please Contact <stsc.customerservice@hill.af.mil>. About the Authors Kym

  6. Library Automation Alternatives in 1996 and User Satisfaction Ratings of Library Users by Operating System.

    ERIC Educational Resources Information Center

    Cibbarelli, Pamela

    1996-01-01

    Examines library automation product introductions and conversions to new operating systems. Compares user satisfaction ratings of library software packages by operating system: DOS/Windows, UNIX, Macintosh, and DEC VAX/VMS. Software is rated according to documentation, service/support, training, product reliability, product capabilities, ease of use,…

  7. Development of computer tablet software for clinical quantification of lateral knee compartment translation during the pivot shift test.

    PubMed

    Muller, Bart; Hofbauer, Marcus; Rahnemai-Azar, Amir Ata; Wolf, Megan; Araki, Daisuke; Hoshino, Yuichi; Araujo, Paulo; Debski, Richard E; Irrgang, James J; Fu, Freddie H; Musahl, Volker

    2016-01-01

    The pivot shift test is a commonly used clinical examination by orthopedic surgeons to evaluate knee function following injury. However, the test can only be graded subjectively by the examiner. Therefore, the purpose of this study was to develop software for a computer tablet to quantify anterior translation of the lateral knee compartment during the pivot shift test. Based on a simple image analysis method, software for a computer tablet was developed with the following primary design constraint: the software should be easy to use in a clinical setting and should not slow down an outpatient visit. Translation of the lateral compartment of the intact knee was 2.0 ± 0.2 mm and for the anterior cruciate ligament-deficient knee was 8.9 ± 0.9 mm (p < 0.001). Intra-tester (ICC range = 0.913 to 0.999) and inter-tester (ICC = 0.949) reliability were excellent for the repeatability assessments. Overall, the average percent error of measuring simulated translation of the lateral knee compartment with the tablet parallel to the monitor increased from 2.8% at 50 cm distance to 7.7% at 200 cm. Deviation from the parallel position of the tablet did not have a significant effect until a tablet angle of 45°. The average percent error during anterior translation of the lateral knee compartment of 6 mm was 2.2%, compared to 6.2% for 2 mm of translation. The software provides reliable, objective, and quantitative data on translation of the lateral knee compartment during the pivot shift test and meets the design constraints posed by the clinical setting.
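
    A minimal sketch of the underlying measurement idea, converting a tracked marker's pixel displacement into millimetres using a calibration object of known size, is given below. The function, coordinates, and calibration values are hypothetical; the actual tablet software and its image analysis are not reproduced here.

      # Illustrative pixel-to-millimetre conversion for marker translation,
      # assuming a calibration object of known physical size is visible in the image.

      def translation_mm(marker_px_start, marker_px_end, calib_px_len, calib_mm_len):
          """Translation in mm from marker pixel coordinates (x, y) before and after."""
          mm_per_px = calib_mm_len / calib_px_len
          dx = marker_px_end[0] - marker_px_start[0]
          dy = marker_px_end[1] - marker_px_start[1]
          return ((dx ** 2 + dy ** 2) ** 0.5) * mm_per_px

      if __name__ == "__main__":
          # marker moves 120 px; a 40 mm calibration marker spans 540 px
          print(round(translation_mm((100, 200), (220, 200), 540, 40.0), 2))  # ~8.89 mm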

  8. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to achieving the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from the application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
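
    DSSLIB itself is proprietary and its API is not shown in the abstract, but the "serial programming model over parallel execution" idea can be sketched as follows: an ordinary-looking function call that internally splits a matrix-vector product across worker processes. The function names and block decomposition are assumptions for the example.

      # Serial-looking API over a parallel computation (illustrative only).
      from concurrent.futures import ProcessPoolExecutor
      import numpy as np

      def _block_matvec(args):
          block, x = args
          return block @ x

      def matvec(A, x, workers=4):
          """Looks like an ordinary call; internally splits row blocks across processes."""
          blocks = np.array_split(A, workers, axis=0)
          with ProcessPoolExecutor(max_workers=workers) as pool:
              parts = list(pool.map(_block_matvec, [(b, x) for b in blocks]))
          return np.concatenate(parts)

      if __name__ == "__main__":
          A = np.random.rand(1000, 1000)
          x = np.random.rand(1000)
          assert np.allclose(matvec(A, x), A @ x)
          print("parallel result matches serial reference")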

  9. Engineering and Software Engineering

    NASA Astrophysics Data System (ADS)

    Jackson, Michael

    The phrase ‘software engineering' has many meanings. One central meaning is the reliable development of dependable computer-based systems, especially those for critical applications. This is not a solved problem. Failures in software development have played a large part in many fatalities and in huge economic losses. While some of these failures may be attributable to programming errors in the narrowest sense—a program's failure to satisfy a given formal specification—there is good reason to think that most of them have other roots. These roots are located in the problem of software engineering rather than in the problem of program correctness. The famous 1968 conference was motivated by the belief that software development should be based on “the types of theoretical foundations and practical disciplines that are traditional in the established branches of engineering.” Yet after forty years of currency the phrase ‘software engineering' still denotes no more than a vague and largely unfulfilled aspiration. Two major causes of this disappointment are immediately clear. First, too many areas of software development are inadequately specialised, and consequently have not developed the repertoires of normal designs that are the indispensable basis of reliable engineering success. Second, the relationship between structural design and formal analytical techniques for software has rarely been one of fruitful synergy: too often it has defined a boundary between competing dogmas, at which mutual distrust and incomprehension deprive both sides of advantages that should be within their grasp. This paper discusses these causes and their effects. Whether the common practice of software development will eventually satisfy the broad aspiration of 1968 is hard to predict; but an understanding of past failure is surely a prerequisite of future success.

  10. Guidance and Control Software Project Data - Volume 3: Verification Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the verification documents from the GCS project. Volume 3 contains four appendices: A. Software Verification Cases and Procedures for the Guidance and Control Software Project; B. Software Verification Results for the Pluto Implementation of the Guidance and Control Software; C. Review Records for the Pluto Implementation of the Guidance and Control Software; and D. Test Results Logs for the Pluto Implementation of the Guidance and Control Software.

  11. Development of a New VLBI Data Analysis Software

    NASA Technical Reports Server (NTRS)

    Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.

    2010-01-01

    We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software. We formulate the main goals of the software. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in future. On the other hand it should be reliable and possess production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.

  12. Integrated Software Health Management for Aircraft GN and C

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mengshoel, Ole

    2011-01-01

    Modern aircraft rely heavily on the dependable operation of many safety-critical software components. Despite careful design, verification and validation (V&V), on-board software can fail with disastrous consequences if it encounters problematic software/hardware interaction or must operate in an unexpected environment. We are using a Bayesian approach to monitor the software and its behavior during operation and provide up-to-date information about the health of the software and its components. The powerful reasoning mechanism provided by our model-based Bayesian approach makes reliable diagnosis of the root causes possible and minimizes the number of false alarms. Compilation of the Bayesian model into compact arithmetic circuits makes software health management (SWHM) feasible even on platforms with limited CPU power. We show initial results of SWHM on a small simulator of an embedded aircraft software system, where software and sensor faults can be injected.
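
    As a toy illustration of the model-based Bayesian reasoning described (not the SWHM models themselves), the snippet below updates the posterior probability of three health hypotheses given a single "output out of range" observation; the hypotheses, priors, and likelihoods are invented numbers.

      # Toy Bayesian health update (invented numbers, three-hypothesis example).
      # Evidence E = "output out of range" observed once.

      priors = {"healthy": 0.98, "software_fault": 0.01, "sensor_fault": 0.01}
      likelihood_out_of_range = {"healthy": 0.001, "software_fault": 0.7, "sensor_fault": 0.9}

      def posterior(priors, likelihoods):
          unnorm = {h: priors[h] * likelihoods[h] for h in priors}
          z = sum(unnorm.values())
          return {h: p / z for h, p in unnorm.items()}

      if __name__ == "__main__":
          post = posterior(priors, likelihood_out_of_range)
          for h, p in sorted(post.items(), key=lambda kv: -kv[1]):
              print(f"{h:15s} {p:.3f}")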

  13. Nurturing reliable and robust open-source scientific software

    NASA Astrophysics Data System (ADS)

    Uieda, L.; Wessel, P.

    2017-12-01

    Scientific results are increasingly the product of software. The reproducibility and validity of published results cannot be ensured without access to the source code of the software used to produce them. Therefore, the code itself is a fundamental part of the methodology and must be published along with the results. With such a reliance on software, it is troubling that most scientists do not receive formal training in software development. Tools such as version control, continuous integration, and automated testing are routinely used in industry to ensure the correctness and robustness of software. However, many scientists do not even know of their existence (although efforts like Software Carpentry are having an impact on this issue; software-carpentry.org). Publishing the source code is only the first step in creating an open-source project. For a project to grow, it must provide documentation, participation guidelines, and a welcoming environment for new contributors. Expanding the project community is often more challenging than the technical aspects of software development. Maintainers must invest time to enforce the rules of the project and to onboard new members, which can be difficult to justify in the context of the "publish or perish" mentality. This problem will continue as long as software contributions are not recognized as valid scholarship by hiring and tenure committees. Furthermore, there are still unsolved problems in providing attribution for software contributions. Many journals and metrics of academic productivity do not recognize citations to sources other than traditional publications. Thus, some authors choose to publish an article about the software and use it as a citation marker. One issue with this approach is that updating the reference to include new contributors involves writing and publishing a new article. A better approach would be to cite a permanent archive of individual versions of the source code in services such as Zenodo (zenodo.org). However, citations to these sources are not always recognized when computing citation metrics. In summary, the widespread development of reliable and robust open-source software relies on the creation of formal training programs in software development best practices and the recognition of software as a valid form of scholarship.
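
    A minimal automated test of the kind the abstract advocates might look like the following pytest-style example; the rms function and its expected values are hypothetical and stand in for any small piece of scientific code.

      # Minimal automated test in the pytest style (hypothetical example function).
      import math

      def rms(values):
          """Root-mean-square of a sequence of numbers."""
          return math.sqrt(sum(v * v for v in values) / len(values))

      def test_rms_constant_signal():
          assert rms([2.0, 2.0, 2.0]) == 2.0

      def test_rms_known_value():
          assert math.isclose(rms([3.0, 4.0]), math.sqrt(12.5))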

  14. Autonomous robot software development using simple software components

    NASA Astrophysics Data System (ADS)

    Burke, Thomas M.; Chung, Chan-Jin

    2004-10-01

    Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.

  15. Contribution of Electronic Medical Records to the Management of Rare Diseases.

    PubMed

    Bremond-Gignac, Dominique; Lewandowski, Elisabeth; Copin, Henri

    2015-01-01

    Electronic health record systems provide a great opportunity to study most diseases. The objective of this study was to determine whether electronic medical records (EMR) in ophthalmology contribute to the management of rare eye diseases, isolated or in syndromes. The study was designed to identify and collect patient data with ophthalmology-specific EMR. Ophthalmology-specific EMR software (Softalmo software, Corilus) was used to acquire ophthalmological consultation data from patients with five rare eye diseases. The rare eye diseases and data were selected and collected according to the expertise of the eye center. A total of 135,206 outpatient consultations were performed between 2011 and 2014 in our medical center specialized in rare eye diseases. The search software identified 29 cases of congenital aniridia, 6 of Axenfeld/Rieger syndrome, 11 of BEPS, 3 of nanophthalmos, and 3 of Rubinstein-Taybi syndrome. EMR provides advantages for medical care. The use of ophthalmology-specific EMR is reliable and can contribute to a comprehensive ocular visual phenotype useful for clinical research. EMR data routinely acquired with software dedicated to ophthalmology provide sufficient detail for rare diseases. These software-collected data appear useful for creating patient cohorts and recording ocular examinations, avoiding the time-consuming analysis of paper records and investigations, in a University Hospital linked to a National Reference Center for Rare Diseases.

  16. Contribution of Electronic Medical Records to the Management of Rare Diseases

    PubMed Central

    Bremond-Gignac, Dominique; Lewandowski, Elisabeth; Copin, Henri

    2015-01-01

    Purpose. Electronic health record systems provide a great opportunity to study most diseases. The objective of this study was to determine whether electronic medical records (EMR) in ophthalmology contribute to the management of rare eye diseases, isolated or in syndromes. The study was designed to identify and collect patient data with ophthalmology-specific EMR. Methods. Ophthalmology-specific EMR software (Softalmo software, Corilus) was used to acquire ophthalmological consultation data from patients with five rare eye diseases. The rare eye diseases and data were selected and collected according to the expertise of the eye center. Results. A total of 135,206 outpatient consultations were performed between 2011 and 2014 in our medical center specialized in rare eye diseases. The search software identified 29 cases of congenital aniridia, 6 of Axenfeld/Rieger syndrome, 11 of BEPS, 3 of nanophthalmos, and 3 of Rubinstein-Taybi syndrome. Discussion. EMR provides advantages for medical care. The use of ophthalmology-specific EMR is reliable and can contribute to a comprehensive ocular visual phenotype useful for clinical research. Conclusion. EMR data routinely acquired with software dedicated to ophthalmology provide sufficient detail for rare diseases. These software-collected data appear useful for creating patient cohorts and recording ocular examinations, avoiding the time-consuming analysis of paper records and investigations, in a University Hospital linked to a National Reference Center for Rare Diseases. PMID:26539543

  17. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigated. A distributed system, programmed entirely in Ada, was studied to assess the use of individual tasks without concern for the processor used. Continued development and testing of the fault-tolerant Ada testbed; development of suggested changes to Ada to cope with the failures of interest; design of approaches to fault-tolerant software in real-time systems, and the integration of these ideas into Ada; and the preparation of various papers and presentations were discussed.

  18. Modular preoperative planning software for computer-aided oral implantology and the application of a novel stereolithographic template: a pilot study.

    PubMed

    Chen, Xiaojun; Yuan, Jianbing; Wang, Chengtao; Huang, Yuanliang; Kang, Lu

    2010-09-01

    In the field of oral implantology, there is a trend toward computer-aided implant surgery, especially the application of computerized tomography (CT)-derived surgical templates. However, because of the relatively unsatisfactory match between the templates and receptor sites, conventional surgical templates may not be accurate enough for severely resorbed edentulous cases during the procedure of transferring the preoperative plan to the actual surgery. The purpose of this study is to introduce a novel bone-tooth-combined-supported surgical guide, which is designed using special modular software and fabricated via a stereolithography technique using both laser scanning and CT imaging, thus improving fit accuracy and reliability. A modular preoperative planning software package was developed for computer-aided oral implantology. With the introduction of dynamic link libraries and some well-known free, open-source software libraries such as the Visualization Toolkit (Kitware, Inc., New York, USA) and the Insight Toolkit (Kitware, Inc.), a plug-in, evolutive software architecture was established, allowing for expandability, accessibility, and maintainability in our system. To provide a link between the preoperative plan and the actual surgery, a novel bone-tooth-combined-supported surgical template was fabricated, utilizing laser scanning, image registration, and rapid prototyping. Clinical studies were conducted on four partially edentulous cases to make a comparison with conventional bone-supported templates. The fixation was more stable than with tooth-supported templates because laser scanning technology obtained detailed dentition information, which brought about a unique topography between the match surface of the templates and the adjacent teeth. The average distance deviations at the coronal and apical points of the implant were 0.66 mm (range: 0.3-1.2) and 0.86 mm (range: 0.4-1.2), and the average angle deviation was 1.84 degrees (range: 0.6-2.8 degrees). This pilot study proves that the novel combined-supported templates are superior to conventional ones. However, more clinical cases will be conducted to demonstrate their feasibility and reliability.

  19. Guidance and Control Software Project Data - Volume 4: Configuration Management and Quality Assurance Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.

  20. Glioblastoma Segmentation: Comparison of Three Different Software Packages.

    PubMed

    Fyllingen, Even Hovig; Stensjøen, Anne Line; Berntsen, Erik Magnus; Solheim, Ole; Reinertsen, Ingerid

    2016-01-01

    To facilitate a more widespread use of volumetric tumor segmentation in clinical studies, there is an urgent need for reliable, user-friendly segmentation software. The aim of this study was therefore to compare three different software packages for semi-automatic brain tumor segmentation of glioblastoma, namely BrainVoyager QX, ITK-Snap and 3D Slicer, and to make data available for future reference. Pre-operative, contrast enhanced T1-weighted 1.5 or 3 Tesla Magnetic Resonance Imaging (MRI) scans were obtained in 20 consecutive patients who underwent surgery for glioblastoma. MRI scans were segmented twice in each software package by two investigators. Intra-rater, inter-rater and between-software agreement was compared by using differences of means with 95% limits of agreement (LoA), Dice's similarity coefficients (DSC) and Hausdorff distance (HD). Time expenditure of segmentations was measured using a stopwatch. Eighteen tumors were included in the analyses. Inter-rater agreement was highest for BrainVoyager with difference of means of 0.19 mL and 95% LoA from -2.42 mL to 2.81 mL. Between-software agreement and 95% LoA were very similar for the different software packages. Intra-rater, inter-rater and between-software DSC were ≥ 0.93 in all analyses. Time expenditure was approximately 41 min per segmentation in BrainVoyager, and 18 min per segmentation in both 3D Slicer and ITK-Snap. Our main findings were that there is a high agreement within and between the software packages in terms of small intra-rater, inter-rater and between-software differences of means and high Dice's similarity coefficients. Time expenditure was highest for BrainVoyager, but all software packages were relatively time-consuming, which may limit usability in an everyday clinical setting.
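
    For reference, Dice's similarity coefficient used above can be computed from two binary masks as in the short sketch below (a generic illustration, not the scripts used in the study).

      # Dice similarity coefficient between two binary segmentation masks (numpy).
      import numpy as np

      def dice(mask_a, mask_b):
          a = mask_a.astype(bool)
          b = mask_b.astype(bool)
          intersection = np.logical_and(a, b).sum()
          denom = a.sum() + b.sum()
          return 2.0 * intersection / denom if denom else 1.0

      if __name__ == "__main__":
          a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 voxels
          b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True   # 36 voxels
          print(round(dice(a, b), 3))   # overlap 5x5 = 25 -> 2*25/72 = 0.694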

  1. Usability Evaluation of Air Warfare Assessment & Review Toolset in Exercise Black Skies 2012

    DTIC Science & Technology

    2013-12-01

    is, it allows the user to do what they want to do with it (Pressman, 2005). This concept is sometimes called fitness for purpose (Nielsen, 1993...Other characteristics of good software defined by Pressman (2005) are: reliability – the proportion of time the software is available for its intended...Diego, CA: Academic Press. Pressman, R. S. (2005). Software Engineering: A Practitioner’s Approach. New York: McGraw-Hill. Symons, S., France, M

  2. Modification Site Localization in Peptides.

    PubMed

    Chalkley, Robert J

    2016-01-01

    There are a large number of search engines designed to take mass spectrometry fragmentation spectra and match them to peptides from proteins in a database. These peptides could be unmodified, but they could also bear modifications that were added biologically or during sample preparation. As a measure of reliability for the peptide identification, software normally calculates how likely a given quality of match could have been achieved at random, most commonly through the use of target-decoy database searching (Elias and Gygi, Nat Methods 4(3): 207-214, 2007). Matching the correct peptide but with the wrong modification localization is not a random match, so results with this error will normally still be assessed as reliable identifications by the search engine. Hence, an extra step is required to determine site localization reliability, and the software approaches to measure this are the subject of this part of the chapter.
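
    A hedged sketch of the target-decoy idea referred to above, estimating FDR at a score threshold as the ratio of decoy to target matches passing it, is shown below; the scores and threshold are invented, and real pipelines add refinements (e.g., q-values) not shown here.

      # Target-decoy FDR estimate at a score threshold (illustrative).

      def fdr_at_threshold(target_scores, decoy_scores, threshold):
          """Estimate FDR as (# decoy hits) / (# target hits) at or above the threshold."""
          n_targets = sum(s >= threshold for s in target_scores)
          n_decoys = sum(s >= threshold for s in decoy_scores)
          return (n_decoys / n_targets) if n_targets else 0.0

      if __name__ == "__main__":
          targets = [42.1, 35.0, 33.2, 28.9, 25.0, 19.7]
          decoys = [30.5, 22.4, 18.0, 15.1, 12.3, 10.9]
          print(round(fdr_at_threshold(targets, decoys, 25.0), 3))  # 1 decoy / 5 targets = 0.2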

  3. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
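
    The probabilistic design relation underlying this kind of analysis can be illustrated with the simplest two-parameter Weibull probability-of-failure expression for a uniformly stressed volume; the sketch below uses invented parameters and is not CARES/Life, which implements far more general multiaxial, time-dependent formulations.

      # Two-parameter Weibull probability of failure for a uniformly stressed volume
      # (illustrative only; parameters invented).
      import math

      def weibull_pof(stress_mpa, scale_mpa, modulus_m, volume_ratio=1.0):
          """P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)^m)."""
          return 1.0 - math.exp(-volume_ratio * (stress_mpa / scale_mpa) ** modulus_m)

      if __name__ == "__main__":
          # invented parameters: characteristic strength 400 MPa, Weibull modulus 10
          for s in (200.0, 300.0, 400.0):
              print(s, "MPa ->", round(weibull_pof(s, 400.0, 10.0), 4))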

  4. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

    Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.

  5. Guidance and Control Software Project Data - Volume 2: Development Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the development documents from the GCS project. Volume 2 contains three appendices: A. Guidance and Control Software Development Specification; B. Design Description for the Pluto Implementation of the Guidance and Control Software; and C. Source Code for the Pluto Implementation of the Guidance and Control Software

  6. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud and Petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly, software is available via open source repositories, but these usually only enable code to be discovered and downloaded. As a user, it is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process should include information about licensing and the hardware environments the code can be run on, define appropriate validation (testing) procedures, and list the critical dependencies. 2) The Review component targets verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate journals (e.g., the Geoscientific Model Development journal) to help users know which codes to trust. 3) Referencing will be accomplished by linking the Software Framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, the benchmark cases described in the review, and other relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that automatically capture information relating to a particular run of the software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally and internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.

  7. A guide to onboard checkout. Volume 1: Guidance, navigation and control

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The results are presented of a study of onboard checkout techniques, as they relate to space station subsystems, as a guide to those who may need to implement onboard checkout in similar subsystems. Guidance, navigation, and control subsystems, and their reliability and failure analyses are presented. Software and testing procedures are also given.

  8. Effective organizational solutions for implementation of DBMS software packages

    NASA Technical Reports Server (NTRS)

    Jones, D.

    1984-01-01

    The space telescope management information system development effort serves as a guideline for discussing effective organizational solutions used in implementing DBMS software. Focus is on the importance of strategic planning. The value of constructing an information system architecture to conform to the organization's managerial needs, the need for a senior decision maker, dealing with shifting user requirements, and the establishment of a reliable working relationship with the DBMS vendor are examined. Requirements for a schedule to demonstrate progress against a defined timeline and the importance of continued monitoring for production software control, production data control, and software enhancements are also discussed.

  9. Towards Certification of a Space System Application of Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Markosian, Lawrence Z.

    2008-01-01

    Advanced fault detection, isolation and recovery (FDIR) software is being investigated at NASA as a means to improve the reliability and availability of its space systems. Certification is a critical step in the acceptance of such software. Its attainment hinges on performing the necessary verification and validation to show that the software will fulfill its requirements in the intended setting. Presented herein is our ongoing work to plan for the certification of a pilot application of advanced FDIR software in a NASA setting. We describe the application, and the key challenges and opportunities it offers for certification.

  10. The Legacy of Space Shuttle Flight Software

    NASA Technical Reports Server (NTRS)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  11. Building quality into medical product software design.

    PubMed

    Mallory, S R

    1993-01-01

    The software engineering and quality assurance disciplines are a requisite to the design of safe and effective software-based medical devices. It is in the areas of software methodology and process that the most beneficial application of these disciplines to software development can be made. Software is a product of complex operations and methodologies and is not amenable to the traditional electromechanical quality assurance processes. Software quality must be built in by the developers, with the software verification and validation engineers acting as the independent instruments for ensuring compliance with performance objectives and with development and maintenance standards. The implementation of a software quality assurance program is a complex process involving management support, organizational changes, and new skill sets, but the benefits are profound. Its rewards provide safe, reliable, cost-effective, maintainable, and manageable software, which may significantly speed the regulatory review process and therefore potentially shorten the overall time to market. The use of a trial project can greatly facilitate the learning process associated with the first-time application of a software quality assurance program.

  12. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  13. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  14. Open-source point-of-care electronic medical records for use in resource-limited settings: systematic review and questionnaire surveys

    PubMed Central

    Bru, Juan; Berger, Christopher A

    2012-01-01

    Background Point-of-care electronic medical records (EMRs) are a key tool to manage chronic illness. Several EMRs have been developed for use in treating HIV and tuberculosis, but their applicability to primary care, technical requirements and clinical functionalities are largely unknown. Objectives This study aimed to address the needs of clinicians from resource-limited settings without reliable internet access who are considering adopting an open-source EMR. Study eligibility criteria Open-source point-of-care EMRs suitable for use in areas without reliable internet access. Study appraisal and synthesis methods The authors conducted a comprehensive search of all open-source EMRs suitable for sites without reliable internet access. The authors surveyed clinician users and technical implementers from a single site and technical developers of each software product. The authors evaluated availability, cost and technical requirements. Results The hardware and software for all six systems is easily available, but they vary considerably in proprietary components, installation requirements and customisability. Limitations This study relied solely on self-report from informants who developed and who actively use the included products. Conclusions and implications of key findings Clinical functionalities vary greatly among the systems, and none of the systems yet meet minimum requirements for effective implementation in a primary care resource-limited setting. The safe prescribing of medications is a particular concern with current tools. The dearth of fully functional EMR systems indicates a need for a greater emphasis by global funding agencies to move beyond disease-specific EMR systems and develop a universal open-source health informatics platform. PMID:22763661

  15. 75 FR 14386 - Interpretation of Transmission Planning Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... created electronically using word processing software should be filed in native applications or print-to.... FERC, 564 F.3d 1342 (DC Cir. 2009). \\6\\ Mandatory Reliability Standards for the Bulk-Power System... print-to-PDF format and not in a scanned format. Commenters filing electronically do not need to make a...

  16. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  17. Composing, Analyzing and Validating Software Models

    NASA Astrophysics Data System (ADS)

    Sheldon, Frederick T.

    1998-10-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  18. Composing, Analyzing and Validating Software Models

    NASA Technical Reports Server (NTRS)

    Sheldon, Frederick T.

    1998-01-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
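
    As a toy illustration of the kind of quantitative question an SPN performability model answers (not the modeling approach proposed in the abstract), the sketch below estimates the mission reliability of a duplex system with exponentially distributed component failures by Monte Carlo simulation; the failure rate and mission time are invented.

      # Toy Monte Carlo reliability estimate for a duplex (1-out-of-2) system with
      # exponential component failures and no repair (illustrative only).
      import random

      def mission_reliability(rate_per_hr, mission_hr, n_runs=100_000, seed=1):
          rng = random.Random(seed)
          survived = 0
          for _ in range(n_runs):
              t1 = rng.expovariate(rate_per_hr)   # failure time, component 1
              t2 = rng.expovariate(rate_per_hr)   # failure time, component 2
              if max(t1, t2) > mission_hr:        # system fails only when both are down
                  survived += 1
          return survived / n_runs

      if __name__ == "__main__":
          # invented numbers: failure rate 1e-3 /hr, 100 hr mission
          print(round(mission_reliability(1e-3, 100.0), 4))  # analytic: 1-(1-e^-0.1)^2 ≈ 0.9909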

  19. The development procedures and tools applied for the attitude control software of the Italian satellite SAX

    NASA Astrophysics Data System (ADS)

    Hameetman, G. J.; Dekker, G. J.

    1993-11-01

    The Italian satellite (with a large Dutch contribution) SAX is a scientific satellite whose mission is to study roentgen (X-ray) sources. One main requirement for the Attitude and Orbit Control Subsystem (AOCS) is to achieve and maintain a stable pointing accuracy with a limit cycle of less than 90 arcsec during pointings of up to 28 hours. The main SAX instrument, the Narrow Field Instrument, is highly sensitive to (indirect) radiation coming from the Sun. This sensitivity leads to another main requirement: that the safe attitude domain may never be left under any circumstances. The paper describes the application software in relation to the overall SAX AOCS subsystem, the CASE tools that have been used during the development, some advantages and disadvantages of the use of these tools, the measures taken to meet the more or less conflicting requirements of reliability and flexibility, and the lessons learned during development. The quality of the development approach was proven by the (separately executed) hardware/software integration tests, during which a negligible number of software errors was detected in the application software.

  20. Preoperative Planning of Orthopedic Procedures using Digitalized Software Systems.

    PubMed

    Steinberg, Ely L; Segev, Eitan; Drexler, Michael; Ben-Tov, Tomer; Nimrod, Snir

    2016-06-01

    The progression from standard celluloid films to digitalized technology led to the development of new software programs to fulfill the needs of preoperative planning. We describe here preoperative digitalized programs and the variety of conditions for which those programs can be used to facilitate preparation for surgery. A PubMed search using the keywords "digitalized software programs," "preoperative planning" and "total joint arthroplasty" was performed for all studies regarding preoperative planning of orthopedic procedures that were published from 1989 to 2014 in English. Digitalized software programs are able to import and export all picture archiving and communication system (PACS) files (i.e., X-rays, computerized tomograms, magnetic resonance images) from either the local workstation or from any remote PACS. Two-dimensional (2D) and 3D CT scans were found to be reliable tools with a high preoperative predicting accuracy for implants. The short learning curve, user-friendly features, accurate prediction of implant size, decreased implant stocks and low-cost maintenance make digitalized software programs an attractive tool in preoperative planning of total joint replacement, fracture fixation, limb deformity repair and pediatric skeletal disorders.

  1. Development of an automated asbestos counting software based on fluorescence microscopy.

    PubMed

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
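
    The correction mode mentioned above suggests a simple calibration idea; the sketch below fits a least-squares line between automated and manual reference counts on calibration samples and applies it to new automated counts. The data values and the linear form of the correction are assumptions for illustration, not details of the published software.

      # Sketch: calibrate automated fiber counts against manual reference counts
      # with a least-squares line, then correct new automated counts (illustrative data).
      import numpy as np

      auto_counts = np.array([3.0, 8.0, 15.0, 24.0, 40.0])    # automated counts (calibration samples)
      manual_counts = np.array([5.0, 9.0, 16.0, 25.0, 41.0])  # manual reference counts

      slope, intercept = np.polyfit(auto_counts, manual_counts, 1)

      def corrected(count):
          """Apply the fitted linear correction to a new automated count."""
          return slope * count + intercept

      print(f"corrected estimate for automated count 6: {corrected(6.0):.1f} fibers")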

  2. Automated real-time software development

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Walker, Carrie K.; Turkovich, John J.

    1993-01-01

    A Computer-Aided Software Engineering (CASE) system has been developed at the Charles Stark Draper Laboratory (CSDL) under the direction of the NASA Langley Research Center. The CSDL CASE tool provides an automated method of generating source code and hard copy documentation from functional application engineering specifications. The goal is to significantly reduce the cost of developing and maintaining real-time scientific and engineering software while increasing system reliability. This paper describes CSDL CASE and discusses demonstrations that used the tool to automatically generate real-time application code.

  3. Symposium on the Interface: Computing Science and Statistics (20th). Theme: Computationally Intensive Methods in Statistics Held in Reston, Virginia on April 20-23, 1988

    DTIC Science & Technology

    1988-08-20

    William A. Link, Patuxent Wildlife Research Center. "Increasing Reliability of Multiversion Fault-Tolerant Software Design by Modularization," Junryo Miyashita, Department of Computer Science, California State University at San Bernardino. ... They shall be referred to as "multiversion fault-tolerant software design". One problem of developing multiple versions of a program is the high cost

  4. Computer-assisted design of flux-cored wires

    NASA Astrophysics Data System (ADS)

    Dubtsov, Yu N.; Zorin, I. V.; Sokolov, G. N.; Antonov, A. A.; Artem'ev, A. A.; Lysak, V. I.

    2017-02-01

    The algorithm and description of the AlMe-WireLaB software for the computer-assisted design of flux-cored wires are introduced. The software functionality is illustrated with the selection of components for a flux-cored wire that produces deposited metal of the Fe-Cr-C-Mo-Ni-Ti-B system. It is demonstrated that the developed software enables a technologically reliable flux-cored wire to be designed for surfacing, yielding deposited metal of the specified composition.

  5. Cyber security best practices for the nuclear industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badr, I.

    2012-07-01

    When deploying software-based systems, such as digital instrumentation and controls for the nuclear industry, it is vital to include cyber security assessment as part of the architecture and development process. When integrating and delivering software-intensive systems for the nuclear industry, engineering teams should make use of a secure, requirements-driven software development life cycle, ensuring security compliance and optimum return on investment. Reliability protections, data loss prevention, and privacy enforcement provide a strong case for installing strict cyber security policies. (authors)

  6. Software Technology for Adaptable, Reliable Systems (STARS). Software Architecture Seminar Report: Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-01-29

    other processes, but that he arrived at his results in a different manner. Batory didn't start with idioms; he performed a domain analysis and...abstracted idioms. Through domain analysis and domain modeling, new idioms can be found and the form of architecture can be the same. It was also questioned...Programming 5. Consensus Definition of Architecture 6. Inductive Analysis of Current Exemplars 7. VHDL (Bailor) 8. Ontological Structuring

  7. ROCKETSHIP: a flexible and modular software tool for the planning, processing and analysis of dynamic MRI studies.

    PubMed

    Barnes, Samuel R; Ng, Thomas S C; Santa-Maria, Naomi; Montagne, Axel; Zlokovic, Berislav V; Jacobs, Russell E

    2015-06-16

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a promising technique to characterize pathology and evaluate treatment response. However, analysis of DCE-MRI data is complex and benefits from concurrent analysis of multiple kinetic models and parameters. Few software tools are currently available that specifically focus on DCE-MRI analysis with multiple kinetic models. Here, we developed ROCKETSHIP, an open-source, flexible and modular software for DCE-MRI analysis. ROCKETSHIP incorporates analyses with multiple kinetic models, including data-driven nested model analysis. ROCKETSHIP was implemented using the MATLAB programming language. Robustness of the software in providing reliable fits using multiple kinetic models is demonstrated using simulated data. Simulations also demonstrate the utility of the data-driven nested model analysis. Applicability of ROCKETSHIP for both preclinical and clinical studies is shown using DCE-MRI studies of the human brain and a murine tumor model. A DCE-MRI software suite was implemented and tested using simulations. Its applicability to both preclinical and clinical datasets is shown. ROCKETSHIP was designed to be easily accessible for the beginner, but flexible enough for changes or additions to be made by the advanced user as well. The availability of a flexible analysis tool will aid future studies using DCE-MRI. A public release of ROCKETSHIP is available at https://github.com/petmri/ROCKETSHIP .

  8. 2005 8th Annual Systems Engineering Conference. Volume 4, Thursday

    DTIC Science & Technology

    2005-10-27

    requirements, allocation, and utilization statistics Operations Decisions Acquisition Decisions Resource Management — Integrated Requirements/Allocation ...Quality Improvement Consultants, Inc. "Automated Software Testing Increases Test Quality and Coverage Resulting in Improved Software Reliability.", Mr...Steven Ligon, SAIC The Return of Discipline, Ms. Jacqueline Townsend, Air Force Materiel Command Track 4 - Net Centric Operations: Testing Net-Centric

  9. 75 FR 71625 - System Restoration Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... processing software should be filed in native applications or print-to-PDF format, and not in a scanned... (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). 6. On March 16, 2007, the... electronically using word processing software should be filed in native applications or print-to-PDF format, and...

  10. Software Technology for Adaptable Reliable Systems (STARS) Workshop Held at the Naval Research Laboratory, Washington, DC on April 9-12 1985

    DTIC Science & Technology

    1985-01-01

    paths? ... PREFACE ... REUSE, 83, Dr. Bruce A. Burton and Mr. Michael D. Broido; REUSABLE COMPONENT DEFINITION (A TUTORIAL), 209, Michael R. Miller, Hans L. Hiabereder, and L.O. Keeler; REUSABLE SOFTWARE IN SIMULATION APPLICATIONS

  11. Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.

    PubMed

    Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay

    2016-11-01

    Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.

  12. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in these tools identifies a root cause in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.

  13. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

    This article describes a new Web-based failure database software for orthopaedic implants. The software is based on the B/S (browser/server) mode; ASP dynamic web technology is used as its main development language to achieve data interactivity, and Microsoft Access is used to create the database. These mature technologies make the software easy to extend or upgrade. In this article, the design and development idea of the software, its working process and functions, and related technical features are presented. With this software, many different types of fault events of orthopaedic implants can be stored, the failure data can be statistically analyzed, and, at a macroscopic level, the data can be used to evaluate the reliability of orthopaedic implants and operations and ultimately guide doctors in improving the level of clinical treatment.

  14. Software Development Processes Applied to Computational Icing Simulation

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Potapezuk, Mark G.; Mellor, Pamela A.

    1999-01-01

    The development of computational icing simulation methods is making the transition from research to commonplace use in design and certification efforts. As such, standards of code management, design validation, and documentation must be adjusted to accommodate the increased expectations of the user community with respect to accuracy, reliability, capability, and usability. This paper discusses these concepts with regard to current and future icing simulation code development efforts as implemented by the Icing Branch of the NASA Lewis Research Center in collaboration with the NASA Lewis Engineering Design and Analysis Division. With the application of the techniques outlined in this paper, the LEWICE ice accretion code has become a more stable and reliable software product.

  15. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  16. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  17. Correlation and agreement of a digital and conventional method to measure arch parameters.

    PubMed

    Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika

    2018-01-01

    The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue light three-dimensional (3D) scanner. Digital and manual measurements of defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on digital models and with a digital calliper (Tuten, Germany) on plaster study models. Both measurements were repeated twice to assess intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring the arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of the arch width (90-96%) and of the arch length and space analysis (95-99%) was also found using the Bland-Altman method. These results demonstrate that 3D blue light scanning and measurement software are able to precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.
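
    As a rough illustration of the agreement statistics used above, the sketch below computes Pearson's r and Bland-Altman bias and limits of agreement for paired digital and manual measurements; the numbers are made up and do not come from the study.

      # Sketch: Pearson correlation and Bland-Altman limits of agreement between
      # digital and manual arch-width measurements (illustrative values, in mm).
      import numpy as np
      from scipy import stats

      digital = np.array([35.1, 36.4, 33.8, 37.0, 34.5, 36.1])
      manual  = np.array([35.3, 36.2, 34.0, 37.2, 34.4, 36.0])

      r, p = stats.pearsonr(digital, manual)

      diff = digital - manual
      bias = diff.mean()
      half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement around the bias

      print(f"r = {r:.3f} (p = {p:.3f}); bias = {bias:.2f} mm, "
            f"LoA = [{bias - half_width:.2f}, {bias + half_width:.2f}] mm")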

  18. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  19. The Preliminary Results of GMSTech: A Software Development for Microseismic Characterization

    NASA Astrophysics Data System (ADS)

    Rohaman, Maman; Suhendi, Cahli; Verdhora Ry, Rexha; Sugiartono Prabowo, Billy; Widiyantoro, Sri; Nugraha, Andri Dian; Yudistira, Tedi; Mujihardi, Bambang

    2017-04-01

    The processing of microseismic data requires reliable software for imaging subsurface conditions related to microseismicity. In general, the currently available software is specific to certain processing modules and is developed by different developers. Software with integrated processing modules offers better value because users can work more easily and quickly. We developed GMSTech (Ganesha Microseismic Technology), a stand-alone, C#-based software package consisting of several modules for the processing of microseismic data. Its function is to solve non-linear inverse problems and image the subsurface. The C# code is supported by the ILNumerics library to reduce computation time and give good visualization. In this preliminary result, we present three developed modules: (1) hypocenter determination, (2) moment magnitude calculation, and (3) 3D seismic tomography. In the first module, we provide four methods for locating microseismic events that can be chosen by the user independently: the simulated annealing method, the guided grid-search method, Geiger's method, and joint hypocenter determination (JHD). The second module can be used to calculate moment magnitude using the Brune method and to estimate the released energy of an event. Finally, we also provide a module for 3-D seismic tomography for imaging velocity structures based on delay-time tomography. We demonstrated the software using both synthetic data and real data from a geothermal field in Indonesia. The results for all modules are reliable and remarkable, reviewed statistically by RMS error. We will keep examining the software with additional data sets and developing further processing modules.

  20. Preliminary experience with SpineEOS, a new software for 3D planning in AIS surgery.

    PubMed

    Ferrero, Emmanuelle; Mazda, Keyvan; Simon, Anne-Laure; Ilharreborde, Brice

    2018-04-24

    Preoperative planning of scoliosis surgery is essential in the effective treatment of spine pathology. Precontoured rods have therefore been developed recently to avoid iatrogenic sagittal misalignment and rod breakage. Some specific issues exist in adolescent idiopathic scoliosis (AIS), such as a less distal lower instrumented level, great variability in the location of the inflection point (the transition from lumbar lordosis to thoracic kyphosis), and sagittal correction limited by the bone-implant interface. Since 2007, a stereoradiographic imaging system has been used that allows for 3D reconstructions. A software was therefore developed to perform preoperative 3D surgical planning and to provide the rods' shape and length. The goal of this preliminary study was to assess the feasibility, reliability, and clinical relevance of this new software. This was a retrospective study of 47 AIS patients operated with the same surgical technique: posteromedial translation through a posterior approach with lumbar screws and thoracic sublaminar bands. Pre- and postoperatively, 3D reconstructions were performed on stereoradiographic images (EOS system, Paris, France) and compared. The software was then used to plan the surgical correction and determine the rods' shape and length. The simulated spine and rods were compared to the real postoperative 3D reconstructions. 3D reconstructions and planning were performed by an independent observer. 3D simulations were performed on the 47 patients. No difference was found between the simulated model and the postoperative 3D reconstructions in terms of sagittal parameters. Postoperatively, 21% of lumbar lordosis values were not within reference values. Postoperative SVA was 20 mm anterior in two-thirds of the cases. Postoperative rods were significantly longer than the precontoured rods planned with the software (by a mean of 10 mm). Inflection points differed between the rods used and the planned rods (by 2.3 levels on average). In this preliminary study, the software, based on a low-dose 3D stereoradiography system, seems reliable for preoperative planning of AIS surgery and precontoured rods. It is an interesting tool to improve surgeons' practice, since 3D planning is expected to reduce complications such as iatrogenic malalignment, to aid understanding of complications, and to support choosing the location of the transitional vertebra. However, further work is needed to improve thoracic kyphosis planning. These slides can be retrieved under Electronic Supplementary Material.

  1. Reducing errors from the electronic transcription of data collected on paper forms: a research data case study.

    PubMed

    Wahi, Monika M; Parks, David V; Skeate, Robert C; Goldin, Steven B

    2008-01-01

    We conducted a reliability study comparing single data entry (SE) into a Microsoft Excel spreadsheet to entry using the existing forms (EF) feature of the Teleforms software system, in which optical character recognition is used to capture data off of paper forms designed in non-Teleforms software programs. We compared the transcription of data from multiple paper forms from over 100 research participants representing almost 20,000 data entry fields. Error rates for SE were significantly lower than those for EF, so we chose SE for data entry in our study. Data transcription strategies from paper to electronic format should be chosen based on evidence from formal evaluations, and their design should be contemplated during the paper forms development stage.

  2. Reducing Errors from the Electronic Transcription of Data Collected on Paper Forms: A Research Data Case Study

    PubMed Central

    Wahi, Monika M.; Parks, David V.; Skeate, Robert C.; Goldin, Steven B.

    2008-01-01

    We conducted a reliability study comparing single data entry (SE) into a Microsoft Excel spreadsheet to entry using the existing forms (EF) feature of the Teleforms software system, in which optical character recognition is used to capture data off of paper forms designed in non-Teleforms software programs. We compared the transcription of data from multiple paper forms from over 100 research participants representing almost 20,000 data entry fields. Error rates for SE were significantly lower than those for EF, so we chose SE for data entry in our study. Data transcription strategies from paper to electronic format should be chosen based on evidence from formal evaluations, and their design should be contemplated during the paper forms development stage. PMID:18308994

  3. Software-assisted small bowel motility analysis using free-breathing MRI: feasibility study.

    PubMed

    Bickelhaupt, Sebastian; Froehlich, Johannes M; Cattin, Roger; Raible, Stephan; Bouquet, Hanspeter; Bill, Urs; Patak, Michael A

    2014-01-01

    To validate a software prototype allowing for small bowel motility analysis in free breathing by comparing it to manual measurements. In all, 25 patients (15 male, 10 female; mean age 39 years) were included in this Institutional Review Board-approved, retrospective study. Magnetic resonance imaging (MRI) was performed on a 1.5T system after standardized preparation, acquiring motility sequences in free breathing over 69-84 seconds. Small bowel motility was analyzed manually and with the software. Functional parameters, measurement time, and reproducibility were compared using the coefficient of variance and the paired Student's t-test. Correlation was analyzed using Pearson's correlation coefficient and linear regression. The 25 segments were analyzed twice, both by hand and using the software with automatic breathing correction. All assessed parameters correlated significantly between the methods (P < 0.01), but the scattering of repeated measurements was significantly (P < 0.01) lower with the software (3.90%, standard deviation [SD] ± 5.69) than with manual examination (9.77%, SD ± 11.08). The time needed was significantly less (P < 0.001) with the software (4.52 minutes, SD ± 1.58) than with manual measurement (17.48 minutes, SD ± 1.75 minutes). The software thus provides reliable and faster small bowel motility measurements in free-breathing MRI compared with manual analysis. The new technique allows for analysis of prolonged sequences acquired in free breathing, improving the informative value of the examinations by amplifying the evaluable data. Copyright © 2013 Wiley Periodicals, Inc.
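
    The comparison of methods above rests on two simple statistics; as a hedged illustration, the sketch below applies a paired t-test between methods and a coefficient of variation for repeated measurements. The values are invented and the layout (two repeats per method) is an assumption, not the study's actual data handling.

      # Sketch: paired t-test between manual and software measurements, plus the
      # coefficient of variation (CV, %) of two repeated measurements per method.
      import numpy as np
      from scipy import stats

      manual_rep1   = np.array([4.1, 3.8, 5.0, 4.6, 4.3])
      manual_rep2   = np.array([4.6, 3.3, 5.6, 4.1, 4.9])
      software_rep1 = np.array([4.2, 3.9, 5.1, 4.5, 4.4])
      software_rep2 = np.array([4.3, 3.8, 5.2, 4.6, 4.3])

      def repeat_cv(rep1, rep2):
          """Mean per-subject CV of two repeated measurements, in percent."""
          pairs = np.stack([rep1, rep2])
          return float(np.mean(pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)) * 100)

      t, p = stats.ttest_rel(manual_rep1, software_rep1)   # paired comparison of methods
      print(f"paired t = {t:.2f}, p = {p:.3f}")
      print(f"CV manual = {repeat_cv(manual_rep1, manual_rep2):.1f}%, "
            f"CV software = {repeat_cv(software_rep1, software_rep2):.1f}%")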

  4. Semi-automatic computerized approach to radiological quantification in rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Steiner, Wolfgang; Schoeffmann, Sylvia; Prommegger, Andrea; Boegl, Karl; Klinger, Thomas; Peloschek, Philipp; Kainberger, Franz

    2004-04-01

    Rheumatoid Arthritis (RA) is a common systemic disease predominantly involving the joints. Precise diagnosis and follow-up therapy require objective quantification. For this purpose, radiological analyses using standardized scoring systems are considered the most appropriate method. The aim of our study is to develop semi-automatic image analysis software especially applicable to the scoring of joints in rheumatic disorders. The X-Ray RheumaCoach software offers various scoring systems (Larsen score and Ratingen-Rau score) which can be applied by the scorer. In addition to the qualitative assessment of joints performed by the radiologist, a semi-automatic image analysis for joint detection and for measurements of bone diameters and swollen tissue supports the image assessment process. More than 3000 radiographs from the hands and feet of more than 200 RA patients were collected, analyzed, and statistically evaluated. Radiographs were quantified using the conventional paper-based Larsen score and the X-Ray RheumaCoach software. The use of the software shortened the scoring time by about 25 percent and reduced the rate of erroneous scorings in all our studies. Compared to paper-based scoring methods, the X-Ray RheumaCoach software offers several advantages: (i) structured data analysis and input that minimizes variance by standardization, (ii) faster and more precise calculation of sum scores and indices, (iii) permanent data storage and fast access to the software's database, (iv) the possibility of cross-calculation to other scores, (v) semi-automatic assessment of images, and (vi) reliable documentation of results in the form of graphical printouts.

  5. Reliability and validity assessment of gastrointestinal dystemperaments questionnaire: a novel scale in Persian traditional medicine

    PubMed Central

    Hoseinzadeh, Hamidreza; Taghipour, Ali; Yousefi, Mahdi

    2018-01-01

    Background: Development of a questionnaire based on the resources of Persian traditional medicine seems necessary. One of the problems faced by practitioners of traditional medicine is the differing opinions regarding the diagnosis of general temperament or the temperament of an organ. One of the reasons is the lack of valid tools, which has led to difficulties in training students of traditional medicine and in the treatment of patients. The differences in detection methods have given rise to several treatment methods. Objective: The present study aimed to develop a questionnaire and standard software for the diagnosis of gastrointestinal dystemperaments. Methods: The present research is a tool-development study which included 8 stages: developing the items, determining the statements based on the items, assessing the face validity, assessing the content validity, assessing the reliability, rating the items, developing software (named GDS v.1.1) for calculating the total score of the questionnaire, and evaluating the concurrent validity using statistical tests including Cronbach's alpha coefficient and Cohen's kappa coefficient. Results: Based on the results, 112 notes including 62 symptoms were extracted from the resources, and 58 items were obtained from in-person interview sessions with a panel of experts. A statement was selected for each item and, after merging a number of statements, a total of 49 statements were finally obtained. By calculating the statement impact score and determining the content validity, 6 and 10 further items, respectively, were removed from the list of statements. The standardized Cronbach's alpha for this questionnaire was 0.795 and its concurrent validity was 0.8. Conclusion: A quantitative tool was developed for the diagnosis and examination of gastrointestinal dystemperaments. The developed questionnaire is adequately reliable and valid for this purpose. In addition, the software can be used for clinical diagnosis. PMID:29629060
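
    For readers unfamiliar with the reliability statistic reported above, the sketch below shows the standard textbook calculation of Cronbach's alpha for a small item-by-respondent matrix; the data are illustrative and unrelated to the GDS questionnaire.

      # Sketch: Cronbach's alpha for a k-item questionnaire
      # (rows = respondents, columns = items; the matrix is illustrative).
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars / total_var)

      responses = np.array([
          [3, 4, 3, 5],
          [2, 2, 3, 3],
          [4, 5, 4, 5],
          [1, 2, 2, 2],
          [3, 3, 4, 4],
      ])
      print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")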

  6. [Job stressors in software developers--a comparison with other occupations].

    PubMed

    Kadokura, M

    1997-09-01

    The aim of this study is to investigate differences in job stressors among software developers, sales staff and clerical staff (n = 2,079) in two companies (A Co. and B Co.) using a self-administered questionnaire that included a job stressor scale and the 30-item General Health Questionnaire (GHQ). We developed the job stressor scale based on interviews with out-patients engaged in software development and on previous studies of job stressors. Factor analysis with a seven-factor solution showed that seven subscales were abstracted from the job stressor scale, namely, quantitative load of work, dissatisfaction with work, demanding work, uneasiness about work, human relations, ambiguity of work and shortage of private time. Each subscale was significantly (r = .313-.442, p < 0.0001) correlated with the GHQ score and proved to be a reliable instrument, as indicated by a Cronbach's alpha of greater than 0.73. Stepwise multiple regression analysis revealed that the quantitative load of work and shortage of private time subscale scores were significantly higher for software developers in A Co. Software developers in A Co. also tended to score higher (P < .10) than the others on the demanding work and ambiguity of work subscales. All subscale scores were significantly low for the clerical staff in B Co. There was no significant difference between the sales staff and software developers in B Co. Results of the interviews with out-patients showed that demanding work, hard deadlines, ambiguity of work and precarious work would cause trouble for software developers. The implications of these findings with respect to occupational issues related to software developers are discussed.

  7. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

  8. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…
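
    As background for the omega coefficient named in the title, the sketch below gives the ordinary (non-robust) calculation of McDonald's omega from a one-factor model's standardized loadings; the loadings are invented, and this is not the robust estimator the paper itself develops.

      # Sketch: McDonald's omega (total) from standardized one-factor loadings;
      # unique variances follow from the model as 1 - loading^2 (illustrative values).
      import numpy as np

      loadings = np.array([0.70, 0.65, 0.80, 0.55])
      uniques = 1.0 - loadings**2

      omega = loadings.sum()**2 / (loadings.sum()**2 + uniques.sum())
      print(f"omega = {omega:.3f}")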

  9. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…
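
    One common way to put a confidence interval around a Pearson-type reliability coefficient, such as a test-retest correlation, is the Fisher z transformation; the sketch below illustrates that calculation with made-up values, without claiming it is the exact procedure implemented in ScoreRel CI.

      # Sketch: 95% confidence interval for a Pearson-type reliability coefficient
      # via the Fisher z transformation (r and n are illustrative).
      import numpy as np
      from scipy import stats

      r, n, conf = 0.85, 60, 0.95
      z = np.arctanh(r)                        # Fisher z transform of r
      se = 1.0 / np.sqrt(n - 3)                # approximate standard error of z
      crit = stats.norm.ppf(1 - (1 - conf) / 2)
      lo, hi = np.tanh(z - crit * se), np.tanh(z + crit * se)
      print(f"r = {r:.2f}, {int(conf*100)}% CI = [{lo:.3f}, {hi:.3f}]")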

  10. 77 FR 39858 - Revisions to Electric Reliability Organization Definition of Bulk Electric System and Rules of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ...'' as used in the NERC Glossary. \\25\\ Id. at 15. \\26\\ Id. at 16. 16. NERC also explains that, while the...: Through http://www.ferc.gov . Documents created electronically using word processing software should be...'s Glossary of Terms Used in Reliability Standards (NERC Glossary) developed by the North American...

  11. Microgrid Design Analysis Using Technology Management Optimization and the Performance Reliability Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamp, Jason E.; Eddy, John P.; Jensen, Richard P.

    Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: Mike Hightower, who has been the key driving force for Energy Surety Microgrids; Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations; Merrill Smith, U.S. Department of Energy SPIDERS Program Manager; Ross Roley and Rich Trundy from U.S. Pacific Command; Bill Waugaman and Bill Beary from U.S. Northern Command; Tarek Abdallah, Melanie Johnson, and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory; and colleagues from Sandia National Laboratories (SNL) for their reviews, suggestions, and participation in the work.

  12. A Reliable Service-Oriented Architecture for NASA's Mars Exploration Rover Mission

    NASA Technical Reports Server (NTRS)

    Mak, Ronald; Walton, Joan; Keely, Leslie; Hehner, Dennis; Chan, Louise

    2005-01-01

    The Collaborative Information Portal (CIP) was enterprise software developed jointly by the NASA Ames Research Center and the Jet Propulsion Laboratory (JPL) for NASA's highly successful Mars Exploration Rover (MER) mission. Both MER and CIP have performed far beyond their original expectations. Mission managers and engineers ran CIP inside the mission control room at JPL, and the scientists ran CIP in their laboratories, homes, and offices. All the users connected securely over the Internet. Since the mission ran on Mars time, CIP displayed the current time in various Mars and Earth time zones, and it presented staffing and event schedules with Martian time scales. Users could send and receive broadcast messages, and they could view and download data and image files generated by the rovers' instruments. CIP had a three-tiered, service-oriented architecture (SOA) based on industry standards, including J2EE and web services, and it integrated commercial off-the-shelf software. A user's interactions with the graphical interface of the CIP client application generated web services requests to the CIP middleware. The middleware accessed the back-end data repositories if necessary and returned results for these requests. The client application could make multiple service requests for a single user action and then present a composition of the results. This happened transparently, and many users did not even realize that they were connecting to a server. CIP performed well and was extremely reliable; it attained better than 99% uptime during the course of the mission. In this paper, we present overviews of the MER mission and of CIP. We show how CIP helped to fulfill some of the mission needs and how people used it. We discuss the criteria for choosing its architecture, and we describe how the developers made the software so reliable. CIP's reliability did not come about by chance, but was the result of several key design decisions. We conclude with some of the important lessons we learned from developing, deploying, and supporting the software.

  13. STGT program: Ada coding and architecture lessons learned

    NASA Technical Reports Server (NTRS)

    Usavage, Paul; Nagurney, Don

    1992-01-01

    STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.

  14. Advanced flight control system study

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Wall, J. E., Jr.; Rang, E. R.; Lee, H. P.; Schulte, R. W.; Ng, W. K.

    1982-01-01

    A fly by wire flight control system architecture designed for high reliability includes spare sensor and computer elements to permit safe dispatch with failed elements, thereby reducing unscheduled maintenance. A methodology capable of demonstrating that the architecture does achieve the predicted performance characteristics consists of a hierarchy of activities ranging from analytical calculations of system reliability and formal methods of software verification to iron bird testing followed by flight evaluation. Interfacing this architecture to the Lockheed S-3A aircraft for flight test is discussed. This testbed vehicle can be expanded to support flight experiments in advanced aerodynamics, electromechanical actuators, secondary power systems, flight management, new displays, and air traffic control concepts.

  15. Automated ultrasound edge-tracking software comparable to established semi-automated reference software for carotid intima-media thickness analysis.

    PubMed

    Shenouda, Ninette; Proudfoot, Nicole A; Currie, Katharine D; Timmons, Brian W; MacDonald, Maureen J

    2018-05-01

    Many commercial ultrasound systems are now including automated analysis packages for the determination of carotid intima-media thickness (cIMT); however, details regarding their algorithms and methodology are not published. Few studies have compared their accuracy and reliability with previously established automated software, and those that have were in asymptomatic adults. Therefore, this study compared cIMT measures from a fully automated ultrasound edge-tracking software (EchoPAC PC, Version 110.0.2; GE Medical Systems, Horten, Norway) to an established semi-automated reference software (Artery Measurement System (AMS) II, Version 1.141; Gothenburg, Sweden) in 30 healthy preschool children (ages 3-5 years) and 27 adults with coronary artery disease (CAD; ages 48-81 years). For both groups, Bland-Altman plots revealed good agreement with a negligible mean cIMT difference of -0·03 mm. Software differences were statistically, but not clinically, significant for preschool images (P = 0·001) and were not significant for CAD images (P = 0·09). Intra- and interoperator repeatability was high and comparable between software for preschool images (ICC, 0·90-0·96; CV, 1·3-2·5%), but slightly higher with the automated ultrasound than the semi-automated reference software for CAD images (ICC, 0·98-0·99; CV, 1·4-2·0% versus ICC, 0·84-0·89; CV, 5·6-6·8%). These findings suggest that the automated ultrasound software produces valid cIMT values in healthy preschool children and adults with CAD. Automated ultrasound software may be useful for ensuring consistency among multisite research initiatives or large cohort studies involving repeated cIMT measures, particularly in adults with documented CAD. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  16. NoSQL Data Store Technologies

    DTIC Science & Technology

    2014-09-01

    NoSQL Data Store Technologies. John Klein, Software Engineering Institute; Patrick Donohoe, Software Engineering Institute; Neil Ernst... distribute data. 4. Data Replication – determines how a NoSQL database facilitates reliable, high-performance data replication to build

  17. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, non-identically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
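
    To make the model class concrete, the sketch below simulates the Jelinski-Moranda special case, in which observed failure times are the order statistics of N independent, identically distributed exponential fault-detection times, and evaluates the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)); the parameter values are illustrative only.

      # Sketch: Jelinski-Moranda as an exponential order statistic model, plus the
      # Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t)).
      import numpy as np

      rng = np.random.default_rng(0)

      N, phi = 50, 0.1                           # number of faults, per-fault detection rate
      detection_times = rng.exponential(1.0 / phi, size=N)
      failure_times = np.sort(detection_times)   # observed failures are the order statistics

      def goel_okumoto_mean(t, a=50.0, b=0.1):
          """Expected cumulative number of failures by time t."""
          return a * (1.0 - np.exp(-b * t))

      print("first five failure times:", np.round(failure_times[:5], 2))
      print("expected failures by t = 20:", round(goel_okumoto_mean(20.0), 1))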

  18. Reliability Validation and Improvement Framework

    DTIC Science & Technology

    2012-11-01

    systems. Steps in that direction include the use of the Architecture Tradeoff Analysis Method® (ATAM®) developed at the Carnegie Mellon...embedded software • cyber-physical systems (CPSs) to indicate that the embedded software interacts with, manages, and controls a physical system [Lee...the use of formal static analysis methods to increase our confidence in system operation beyond testing. However, analysis results

  19. Big Software for SmallSats: Adapting CFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS. Large parts of cFS are now open source, which has spurred adoption outside of NASA. This paper reports on the experiences of two teams using cFS for current CubeSat missions. The performance overheads of cFS are quantified, and the reusability of code between missions is discussed. The analysis shows that cFS is well suited to use on CubeSats and demonstrates the portability and modularity of cFS code.

  20. A new semiquantitative method for evaluation of metastasis progression.

    PubMed

    Volarevic, A; Ljujic, B; Volarevic, V; Milovanovic, M; Kanjevac, T; Lukic, A; Arsenijevic, N

    2012-01-01

    Although recent technical advancements are directed toward developing novel assays and methods for the detection of micro and macro metastasis, there are still no reports of reliable, simple-to-use imaging software that could be used for the detection and quantification of metastasis in tissue sections. We herein report a new semiquantitative method for the evaluation of metastasis progression in a well established 4T1 orthotopic mouse model of breast cancer metastasis. The new semiquantitative method presented here was implemented using the Autodesk AutoCAD 2012 program, a computer-aided design program used primarily for preparing technical drawings in two dimensions. Using the AutoCAD 2012 software-aided graphical evaluation, we managed to detect each metastatic lesion and we precisely calculated the average percentage of lung and liver tissue parenchyma with metastasis in 4T1 tumor-bearing mice. The data were highly specific and consistent with descriptive histological analysis, confirming the reliability and accuracy of the AutoCAD 2012 software as a new method for the quantification of metastatic lesions. The new semiquantitative method using AutoCAD 2012 software provides a novel approach for the estimation of metastatic progression in histological tissue sections.
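
    The central quantity reported above is a simple area ratio; the sketch below shows how the percentage of parenchyma occupied by metastatic lesions could be computed from outlined lesion areas, using made-up numbers rather than measurements from the study.

      # Sketch: percentage of tissue parenchyma occupied by metastatic lesions
      # in one histological section (areas are illustrative).
      lesion_areas_mm2 = [0.8, 1.4, 0.3, 2.1]   # outlined metastatic lesions
      parenchyma_area_mm2 = 85.0                # total parenchyma in the same section

      percent_metastatic = 100.0 * sum(lesion_areas_mm2) / parenchyma_area_mm2
      print(f"{percent_metastatic:.1f}% of the parenchyma contains metastasis")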

  1. Presenting an evaluation model of the trauma registry software.

    PubMed

    Asadi, Farkhondeh; Paydar, Somayeh

    2018-04-01

    Trauma causes about 10% of deaths worldwide and is considered a global concern. This problem has led healthcare policy makers and managers to adopt a basic strategy in this context. A trauma registry has an important and basic role in decreasing the mortality and disability due to injuries resulting from trauma. Today, various software systems are designed for trauma registries. Evaluation of this software improves management and increases the efficiency and effectiveness of these systems. Therefore, the aim of this study is to present an evaluation model for trauma registry software. The present study is an applied research. In this study, general and specific criteria of trauma registry software were identified by reviewing literature including books, articles, scientific documents, valid websites and related software in this domain. According to the general and specific criteria and related software, a model for evaluating trauma registry software was proposed. Based on the proposed model, a checklist was designed and its validity and reliability evaluated. Using the Delphi technique, the model was presented to 12 experts and specialists. To analyze the results, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved by the experts and professionals, the final version of the evaluation model for the trauma registry software was presented. For evaluating trauma registry software, two groups of criteria were defined: 1- general criteria, 2- specific criteria. General criteria of trauma registry software were classified into four main categories: 1- usability, 2- security, 3- maintainability, and 4- interoperability. Specific criteria were divided into four main categories: 1- data submission and entry, 2- reporting, 3- quality control, 4- decision and research support. The model presented in this research introduces important general and specific criteria of trauma registry software and the sub-criteria related to each main criterion. This model was validated by experts in this field. Therefore, it can be used as a comprehensive model and a standard evaluation tool for measuring the efficiency and effectiveness of trauma registry software and improving its performance. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Artificial intelligence and the space station software support environment

    NASA Technical Reports Server (NTRS)

    Marlowe, Gilbert

    1986-01-01

    In a software system the size of the Space Station Software Support Environment (SSE), no single software development or implementation methodology is presently powerful enough to provide safe, reliable, maintainable, cost-effective real-time or near-real-time software. In an environment that must survive one of the harshest and longest lifetimes, software must be produced that will perform as predicted, from the first time it is executed to the last. Many of the software challenges that will be faced will require strategies borrowed from Artificial Intelligence (AI). AI is the only development area mentioned as an example of a legitimate reason for a waiver from the overall requirement to use the Ada programming language for software development. The limits of the applicability of the Ada language, the Ada Programming Support Environment (of which the SSE is a special case), and software engineering to AI solutions are defined by describing a scenario that involves many facets of AI methodologies.

  3. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control systems of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose, we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  4. Software fault tolerance for real-time avionics systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    Avionics systems have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be very expensive for systems which utilize concurrent processes. The concurrency present in most avionics systems and the further difficulties introduced by timing constraints imply that providing tolerance for software faults may be inordinately expensive or complex. A straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time avionics systems is proposed. A classification system for software errors is presented together with approaches to recovery and continued service for each error type.

  5. Validity and reliability of bilingual English-Arabic version of Schutte self report emotional intelligence scale in an undergraduate Arab medical student sample.

    PubMed

    Naeem, Naghma; Muijtjens, Arno

    2015-04-01

    The psychological construct of emotional intelligence (EI), its theoretical models, measurement instruments and applications have been the subject of several research studies in health professions education. The objective of the current study was to investigate the factorial validity and reliability of a bilingual version of the Schutte Self Report Emotional Intelligence Scale (SSREIS) in an undergraduate Arab medical student population. The study was conducted during April-May 2012. A cross-sectional survey design was employed. A sample (n = 467) was obtained from undergraduate medical students belonging to the male and female medical colleges of King Saud University, Riyadh, Saudi Arabia. Exploratory and confirmatory factor analysis was performed using SPSS 16.0 and AMOS 4.0 statistical software to determine the factor structure. Reliability was determined using Cronbach's alpha statistics. The results obtained using an undergraduate Arab medical student sample supported a multidimensional, three-factor structure of the SSREIS. The three factors are Optimism, Awareness-of-Emotions and Use-of-Emotions. The reliability (Cronbach's alpha) for the three subscales was 0.76, 0.72 and 0.55, respectively. Emotional intelligence is a multifactorial construct (three factors). The bilingual version of the SSREIS is a valid and reliable measure of trait emotional intelligence in an undergraduate Arab medical student population.
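
    The subscale reliabilities above are Cronbach's alpha values. The sketch below shows how alpha can be computed from an item-response matrix; it is a generic implementation of the standard formula, not the SPSS/AMOS workflow used in the study, and the responses are invented.

        import numpy as np

        def cronbach_alpha(item_scores):
            """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
            X = np.asarray(item_scores, dtype=float)
            k = X.shape[1]                              # number of items in the subscale
            item_variances = X.var(axis=0, ddof=1)      # variance of each item
            total_variance = X.sum(axis=1).var(ddof=1)  # variance of the summed scale score
            return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

        # Hypothetical 5-point Likert responses from six students on a four-item subscale.
        responses = [
            [4, 5, 4, 4],
            [3, 3, 4, 3],
            [5, 5, 5, 4],
            [2, 3, 2, 3],
            [4, 4, 3, 4],
            [3, 4, 3, 3],
        ]
        print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")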

  6. Analysis of the reliability and reproducibility of goniometry compared to hand photogrammetry

    PubMed Central

    de Carvalho, Rosana Martins Ferreira; Mazzer, Nilton; Barbieri, Claudio Henrique

    2012-01-01

    Objective: To evaluate the intra- and inter-examiner reliability and reproducibility of goniometry in relation to photogrammetry of the hand, comparing the angles of thumb abduction, PIP joint flexion of the II finger and MCP joint flexion of the V finger. Methods: The study included 30 volunteers, who were divided into three groups: one group of 10 physiotherapy students, one group of 10 physiotherapists, and a third group of 10 hand therapists. Each examiner performed the measurements on the same hand mold, using the goniometer followed by two photogrammetry software programs: CorelDraw® and ALCimagem®. Results: The results revealed that the groups and the methods proposed presented inter-examiner reliability, generally rated as excellent (ICC 0.998; 95% CI 0.995-0.999). In the intra-examiner evaluation, an excellent level of reliability was found between the three groups. In the comparison between groups for each angle and each method, no significant differences were found between the groups for most of the measurements. Conclusion: Goniometry and photogrammetry are reliable and reproducible methods for evaluating measurements of the hand. However, due to the lack of similar references, detailed studies are needed to define the normal parameters between the methods in the joints of the hand. Level of Evidence II, Diagnostic Study. PMID:24453594
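
    The inter-examiner reliability above is reported as an intraclass correlation coefficient (ICC). The sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measures, in the Shrout and Fleiss formulation) for a small matrix of invented angle measurements; it is a generic illustration, not the statistical software used in the study.

        import numpy as np

        def icc_2_1(ratings):
            """ICC(2,1): two-way random effects, absolute agreement, single measures.
            ratings is an (n_subjects x k_raters) matrix."""
            Y = np.asarray(ratings, dtype=float)
            n, k = Y.shape
            grand_mean = Y.mean()
            subject_means = Y.mean(axis=1)
            rater_means = Y.mean(axis=0)

            ss_subjects = k * ((subject_means - grand_mean) ** 2).sum()
            ss_raters = n * ((rater_means - grand_mean) ** 2).sum()
            ss_total = ((Y - grand_mean) ** 2).sum()
            ss_error = ss_total - ss_subjects - ss_raters

            msr = ss_subjects / (n - 1)            # mean square between subjects
            msc = ss_raters / (k - 1)              # mean square between raters
            mse = ss_error / ((n - 1) * (k - 1))   # residual mean square
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical thumb-abduction angles (degrees) for five hand molds rated by three examiners.
        angles = [
            [52.0, 51.5, 52.5],
            [48.0, 47.5, 48.5],
            [55.0, 55.5, 54.5],
            [50.0, 49.5, 50.5],
            [46.0, 46.5, 45.5],
        ]
        print(f"ICC(2,1) = {icc_2_1(angles):.3f}")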

  7. Internet-based videoconferencing and data collaboration for the imaging community.

    PubMed

    Poon, David P; Langkals, John W; Giesel, Frederik L; Knopp, Michael V; von Tengg-Kobligk, Hendrik

    2011-01-01

    Internet protocol-based digital data collaboration with videoconferencing is not yet well utilized in the imaging community. Videoconferencing, combined with proven low-cost solutions, can provide reliable functionality and speed, which will improve rapid, time-saving, and cost-effective communications, within large multifacility institutions or globally with the unlimited reach of the Internet. The aim of this project was to demonstrate the implementation of a low-cost hardware and software setup that facilitates global data collaboration using WebEx and GoToMeeting Internet protocol-based videoconferencing software. Both products' features were tested and evaluated for feasibility across 2 different Internet networks, including a video quality and recording assessment. Cross-compatibility with an Apple OS is also noted in the evaluations. Departmental experiences with WebEx pertaining to clinical trials are also described. Real-time remote presentation of dynamic data was generally consistent across platforms. A reliable and inexpensive hardware and software setup for complete Internet-based data collaboration/videoconferencing can be achieved.

  8. Software Innovation in a Mission Critical Environment

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven

    2015-01-01

    Operating in mission-critical environments requires trusted solutions, and the preference for "tried and true" approaches presents a potential barrier to infusing innovation into mission-critical systems. This presentation explores opportunities to overcome this barrier in the software domain. It outlines specific areas of innovation in software development achieved by the Johnson Space Center (JSC) Engineering Directorate in support of NASA's major human spaceflight programs, including International Space Station, Multi-Purpose Crew Vehicle (Orion), and Commercial Crew Programs. Software engineering teams at JSC work with hardware developers, mission planners, and system operators to integrate flight vehicles, habitats, robotics, and other spacecraft elements for genuinely mission critical applications. The innovations described, including the use of NASA Core Flight Software and its associated software tool chain, can lead to software that is more affordable, more reliable, better modelled, more flexible, more easily maintained, better tested, and enabling of automation.

  9. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October 2017.
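
    A URL-accessibility test of the kind described can be sketched as follows; the URLs, timeout and helper function are placeholders for illustration and do not reproduce the authors' actual tooling.

        import urllib.request
        import urllib.error

        def is_accessible(url, timeout=10):
            """Return True if the URL answers an HTTP HEAD request with a status below 400."""
            request = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(request, timeout=timeout) as response:
                    return response.status < 400
            except (urllib.error.URLError, TimeoutError):
                return False

        # Placeholder URLs, not the ones extracted in the study.
        urls = ["https://example.org/code", "https://example.org/missing"]
        reachable = sum(is_accessible(u) for u in urls)
        print(f"{reachable}/{len(urls)} URLs still accessible ({reachable / len(urls):.0%})")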

  10. openECA Platform and Analytics Alpha Test Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  11. openECA Platform and Analytics Beta Demonstration Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  12. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
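
    As a simple illustration of the kind of comparison such models support (not the specific state-space models of the paper), the sketch below contrasts a single module with a constant failure rate against a triple-modular-redundant arrangement with an ideal voter, using the textbook formula R_TMR = 3R^2 - 2R^3; the failure rate is invented.

        import math

        def module_reliability(failure_rate, hours):
            """Reliability of one module under a constant failure rate (exponential law)."""
            return math.exp(-failure_rate * hours)

        def tmr_reliability(r):
            """Triple modular redundancy with an ideal voter: at least 2 of 3 modules survive."""
            return 3 * r**2 - 2 * r**3

        failure_rate = 1e-4   # hypothetical failures per hour for one module
        for hours in (100, 1000, 10000):
            r = module_reliability(failure_rate, hours)
            print(f"t={hours:>6} h  simplex R={r:.4f}  TMR R={tmr_reliability(r):.4f}")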

  13. Reliability of digital photography for assessing lower extremity alignment in individuals with flatfeet and normal feet types.

    PubMed

    Ashnagar, Zinat; Hadian, Mohammad Reza; Olyaei, Gholamreza; Talebian Moghadam, Saeed; Rezasoltani, Asghar; Saeedi, Hassan; Yekaninejad, Mir Saeed; Mahmoodi, Rahimeh

    2017-07-01

    The aim of this study was to investigate the intratester reliability of the digital photographic method for quantifying static lower extremity alignment in individuals with flatfeet and normal feet types. Thirteen females with flexible flatfeet and nine females with normal feet types were recruited from university communities. Reflective markers were attached over the participant's body landmarks. Frontal and sagittal plane photographs were taken while the participants were in a standardized standing position. The markers were removed and after 30 min the same procedure was repeated. Pelvic angle, quadriceps angle, tibiofemoral angle, genu recurvatum, femur length and tibia length were measured from photographs using the ImageJ software. All measured variables demonstrated good to excellent intratester reliability using digital photography in both flatfeet (ICC: 0.79-0.93) and normal feet type (ICC: 0.84-0.97) groups. The findings of the current study indicate that digital photography is a highly reliable method of measurement for assessing lower extremity alignment in both flatfeet and normal feet type groups. Copyright © 2016. Published by Elsevier Ltd.

  14. A measurement system for large, complex software programs

    NASA Technical Reports Server (NTRS)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.

  15. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The PC software categories covered include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  16. A newly developed spinal simulator.

    PubMed

    Chester, R; Watson, M J

    2000-11-01

    A number of studies indicate poor intra-therapist and inter-therapist reliability in the performance of graded, passive oscillatory movements to the lumbar spine. However, it has been suggested that therapists can be trained to be more consistent in their performance of these techniques if given reliable quantitative feedback. The intention of this study was to develop equipment, analogous to the lumbar spine, that could be used for both teaching and research purposes. The equipment has been updated and connected to an IBM-compatible personal computer. Custom-designed software allows concurrent and accurate feedback to students on their performance and in a form suitable for advanced data analysis using statistical packages. The uses and implications of this equipment are discussed. Copyright 2000 Harcourt Publishers Ltd.

  17. Development of Airport Noise Mapping using Matlab Software (Case Study: Adi Soemarmo Airport - Boyolali, Indonesia)

    NASA Astrophysics Data System (ADS)

    Andarani, Pertiwi; Setiyo Huboyo, Haryono; Setyanti, Diny; Budiawan, Wiwik

    2018-02-01

    Noise is considered one of the main environmental impacts of Adi Soemarmo International Airport (ASIA), the second largest airport in Central Java Province, Indonesia. In order to manage airport noise, airport noise mapping is necessary. However, a model that requires simple input but is still reliable was not available at ASIA. Therefore, the objectives of this study are to develop a model using Matlab software, to verify its reliability by measuring actual noise exposure, and to analyze the areas of the noise levels. The model was developed based on interpolation or extrapolation of identified Noise-Power-Distance (NPD) data. In accordance with Indonesian Government Ordinance No. 40/2012, the noise metric used is WECPNL (Weighted Equivalent Continuous Perceived Noise Level). Based on this model simulation, there are residential areas in the regions of noise levels II (1.912 km2) and III (1.16 km2) and 18 school buildings in the areas of noise levels I, II, and III. These land uses are actually prohibited unless noise insulation is provided. The model using Matlab, in the case of Adi Soemarmo International Airport, is valid based on comparison with field measurements (6 sampling points). However, it is important to validate the model again once the case study (the airport) is changed.
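
    The core of such a model is interpolating Noise-Power-Distance curves to estimate the noise level at an arbitrary slant distance. The sketch below shows the idea with numpy; the NPD values are hypothetical placeholders rather than data from the study, and the study itself was implemented in Matlab rather than Python.

        import numpy as np

        # Hypothetical NPD curve for one aircraft type at a fixed power setting:
        # slant distance (m) versus single-event noise level (dB). Not data from the study.
        distances_m = np.array([200.0, 400.0, 800.0, 1600.0, 3200.0, 6400.0])
        levels_db   = np.array([94.0,  88.0,  81.0,  73.0,   64.0,   55.0])

        def noise_level_at(distance_m):
            """Interpolate the NPD curve (on log-distance) to estimate the level at a given distance."""
            return float(np.interp(np.log10(distance_m), np.log10(distances_m), levels_db))

        for d in (300, 1000, 5000):
            print(f"distance {d:>5} m -> approx. {noise_level_at(d):.1f} dB")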

  18. A Benefit Analysis of Infusing Wireless into Aircraft and Fleet Operations - Report to Seedling Project Efficient Reconfigurable Cockpit Design and Fleet Operations Using Software Intensive, Network Enabled, Wireless Architecture (ECON)

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Holmes, Bruce J.; Hahn, Andrew S.

    2016-01-01

    We report on an examination of potential benefits of infusing wireless technologies into various areas of aircraft and airspace operations. The analysis is done in support of a NASA seedling project Efficient Reconfigurable Cockpit Design and Fleet Operations Using Software Intensive, Network Enabled Wireless Architecture (ECON). The study has two objectives. First, we investigate one of the main benefit hypotheses of the ECON proposal: that the replacement of wired technologies with wireless would lead to significant weight reductions on an aircraft, among other benefits. Second, we advance a list of wireless technology applications and discuss their system benefits. With regard to the primary hypothesis, we conclude that the promise of weight reduction is premature. Specificity of the system domain and aircraft, criticality of components, reliability of wireless technologies, the weight of replacement or augmentation equipment, and the cost of infusion must all be taken into account, among other considerations, to produce a reliable estimate of weight savings or increase.

  19. A hardware-software system for the automation of verification and calibration of oil metering units secondary equipment

    NASA Astrophysics Data System (ADS)

    Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.

    2017-08-01

    This article considers the process of verification (calibration) of the secondary equipment of oil metering units. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex switches the measuring channels of the controller under verification and the reference channels of the calibrator in accordance with the chosen algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in automatic verification mode (with an open communication protocol) or in semi-automatic verification mode (without it). The distinctive feature of this approach is the development of a universal signal switch operating under software control, which can be configured for various verification (calibration) methods, making it possible to cover the entire range of controllers of metering units' secondary equipment. The use of automatic verification with the help of a hardware and software system makes it possible to shorten the verification time by a factor of 5-10 and to increase the reliability of measurements by excluding the influence of the human factor.
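
    A central step in such a system is comparing controller readings against calibrator setpoints and computing the errors recorded in the verification protocol. The sketch below illustrates that step only; the setpoints, readings, tolerance and function name are hypothetical and do not describe the actual complex.

        # Hypothetical illustration of the error-calculation step in an automated verification run.
        def verify_channel(setpoints, readings, tolerance_percent=0.1):
            """Compare controller readings against calibrator setpoints and report relative errors."""
            rows = []
            for expected, measured in zip(setpoints, readings):
                error_percent = (measured - expected) / expected * 100.0
                rows.append((expected, measured, error_percent, abs(error_percent) <= tolerance_percent))
            return rows

        setpoints = [4.000, 8.000, 12.000, 16.000, 20.000]   # mA, hypothetical calibrator outputs
        readings  = [4.002, 8.003, 11.996, 16.010, 19.998]   # mA, hypothetical controller readings

        for expected, measured, err, ok in verify_channel(setpoints, readings):
            status = "PASS" if ok else "FAIL"
            print(f"setpoint {expected:6.3f} mA  measured {measured:6.3f} mA  error {err:+.3f}%  {status}")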

  20. Parametric Design and Mechanical Analysis of Beams based on SINOVATION

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Shen, W. D.; Yang, D. Y.; Liu, W. M.

    2017-07-01

    In engineering practice, engineers need to carry out complicated calculations when the loads on a beam are complex. These analysis and calculation processes take a lot of time, and the results can be unreliable. Therefore, VS2005 and the ADK were used to develop software for beam design based on the 3D CAD software SINOVATION, using the C++ programming language. The software can perform mechanical analysis and parameterized design of various types of beams and output design reports in HTML format. The efficiency and reliability of beam design are thereby improved.
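
    As a small example of the kind of mechanical check such a tool automates (not the SINOVATION implementation itself, and in Python rather than C++), the sketch below computes the maximum bending moment and bending stress for a simply supported rectangular beam under a uniformly distributed load, using the standard formulas M_max = wL^2/8 and sigma = M*c/I; the beam dimensions and load are invented.

        def simply_supported_udl(load_n_per_m, span_m, width_m, height_m):
            """Max bending moment and stress for a simply supported rectangular beam under a uniform load."""
            m_max = load_n_per_m * span_m**2 / 8.0   # maximum bending moment, N*m
            inertia = width_m * height_m**3 / 12.0   # second moment of area, m^4
            c = height_m / 2.0                       # distance from neutral axis to outer fibre, m
            stress = m_max * c / inertia             # maximum bending stress, Pa
            return m_max, stress

        # Hypothetical beam: 4 m span, 10 kN/m load, 100 mm x 200 mm rectangular section.
        moment, stress = simply_supported_udl(10_000.0, 4.0, 0.100, 0.200)
        print(f"M_max = {moment / 1000:.1f} kN*m, sigma_max = {stress / 1e6:.1f} MPa")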

  1. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.

  2. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    NASA Technical Reports Server (NTRS)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies with an emphasis on object-oriented design to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) To use current sound software engineering practices, object-orientation; 2) To improve on software development time, maintenance, execution and management; 3) To provide an alternate design choice for control, implementation, and performance.

  3. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Gregory, S. T.; Urquhart, J. I. A.

    1985-01-01

    The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigated. In particular, the concept examined was that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, even though failures may occur in the software or underlying hardware. Progress is discussed for the following areas: continued development and testing of the fault-tolerant Ada testbed; development of suggested changes to Ada so that it might more easily cope with the failures of interest; and design of new approaches to fault-tolerant software in real-time systems, and integration of these ideas into Ada.

  4. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software testing engineers rely on their own work experience and on communication with software development personnel to describe the software under test and to write test cases by hand, which is time-consuming, inefficient and prone to gaps. Using the high-reliability MBT (model-based testing) tool developed by our company, a single modeling pass can automatically generate the test case documents, which is efficient and accurate. Accurately expressing a process in a UML model depends on the paths that can be reached through it. Existing path generation algorithms are either too simple, unable to combine branch paths and loop paths, or too cumbersome, generating arrangements of paths that are overly complicated and meaningless, and therefore superfluous for aerospace software testing. Drawing on our experience with aerospace payload software, we have developed a tailored path generation algorithm for UML graphical models of aerospace test software.
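
    The underlying task is enumerating execution paths through a control-flow graph while bounding how often loops are traversed. The following depth-first sketch is a generic illustration of that idea, not the algorithm developed in the paper; the example graph, node names and visit bound are hypothetical.

        # Generic bounded-loop path enumeration over a control-flow graph (adjacency list).
        def enumerate_paths(graph, start, end, max_visits=2):
            """Return all paths from start to end, visiting any node at most max_visits times."""
            paths = []

            def dfs(node, path, visits):
                if node == end:
                    paths.append(path)
                    return
                for succ in graph.get(node, []):
                    if visits.get(succ, 0) < max_visits:
                        new_visits = dict(visits)
                        new_visits[succ] = new_visits.get(succ, 0) + 1
                        dfs(succ, path + [succ], new_visits)

            dfs(start, [start], {start: 1})
            return paths

        # Hypothetical activity graph with a branch (B -> C or D) and a loop (C -> B).
        graph = {"A": ["B"], "B": ["C", "D"], "C": ["B", "E"], "D": ["E"], "E": []}
        for p in enumerate_paths(graph, "A", "E"):
            print(" -> ".join(p))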

  5. The Azimuth Project: an Open-Access Educational Resource

    NASA Astrophysics Data System (ADS)

    Baez, J. C.

    2012-12-01

    The Azimuth Project is an online collaboration of scientists, engineers and programmers who are volunteering their time to do something about a wide range of environmental problems. The project has several aspects: 1) a wiki designed to make reliable, sourced information easy to find and accessible to technically literate nonexperts, 2) a blog featuring expository articles and news items, 3) a project to write programs that explain basic concepts of climate physics and illustrate principles of good open-source software design, and 4) a project to develop mathematical tools for studying complex networked systems. We discuss the progress so far and some preliminary lessons. For example, enlisting the help of experts outside academia highlights the problems with pay-walled journals and the benefits of open access, as well as differences between how software development is done commercially, in the free software community, and in academe.

  6. Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device

    NASA Astrophysics Data System (ADS)

    Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin

    A measurement and control system for precision pesticide-deployment simulating equipment is designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and carry out precise, simultaneous measurement of every factor affecting pesticide deployment effects. The hardware and software incorporate a modular structural design. The system is divided into many different hardware and software function modules, and the corresponding modules are developed. The modules' interfaces are uniformly defined, which is convenient for module connection, enhances the system's universality, development efficiency and system reliability, and makes the program easily extended and maintained. Some of the hardware and software modules can be adapted to other measurement and control systems easily. The paper introduces the design of the special numeric control system, the main module of the information acquisition system and the speed acquisition module in order to explain the module design process.

  7. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software program (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
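
    The measurements described above (vessel diameters, functional capillary density) come down to intensity thresholding and pixel counting on digitized frames. The sketch below illustrates that principle on a synthetic array; it is a generic numpy illustration, not the Adobe Photoshop workflow of the study, and the frame, threshold and column index are invented.

        import numpy as np

        # A synthetic 8-bit grayscale frame stands in for a digitized video image.
        rng = np.random.default_rng(0)
        frame = rng.integers(0, 60, size=(240, 320)).astype(np.uint8)   # dim background
        frame[100:110, :] = 200                                          # a bright "vessel" band

        threshold = 128
        vessel_mask = frame >= threshold

        # Fraction of the field occupied by above-threshold pixels,
        # a crude stand-in for perfused area per image area.
        perfused_fraction = vessel_mask.mean()

        # Vessel "diameter" in pixels: count of above-threshold pixels in one image column.
        diameter_px = int(vessel_mask[:, 160].sum())

        print(f"perfused area fraction: {perfused_fraction:.3f}")
        print(f"vessel diameter at column 160: {diameter_px} px")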

  8. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
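
    The Markov models such tools solve can be illustrated on a toy example. The sketch below numerically integrates the Kolmogorov forward equations for a three-state model of a two-unit system and reports the probability of reaching the failed state; it is a generic illustration with an invented failure rate, not the RMG, SURE, ASSIST or ASSURE tools themselves.

        import numpy as np

        # State 0: both units working; state 1: one unit working; state 2: system failed.
        failure_rate = 1e-4        # per hour, hypothetical

        # Generator matrix Q: Q[i, j] is the transition rate from state i to state j.
        Q = np.array([
            [-2 * failure_rate,  2 * failure_rate,  0.0],
            [0.0,               -failure_rate,      failure_rate],
            [0.0,                0.0,               0.0],
        ])

        p = np.array([1.0, 0.0, 0.0])     # start with both units working
        dt, hours = 0.1, 10_000.0
        for _ in range(int(hours / dt)):  # forward-Euler integration of dp/dt = p @ Q
            p = p + dt * (p @ Q)

        print(f"P(system failed by {hours:.0f} h) = {p[2]:.4e}")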

  9. Software Carpentry In The Hydrological Sciences

    NASA Astrophysics Data System (ADS)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

    Scientists are spending an increasing amount of time building and using hydrology software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. As hydrology models increase in capability and enter use by a growing number of scientists and their communities, it is important that the scientific software development practices scale up to meet the challenges posed by increasing software complexity, lengthening software lifecycles, a growing number of stakeholders and contributors, and a broadened developer base that extends from application domains to high performance computing centers. Many of these challenges in complexity, lifecycles, and developer base have been successfully met by the open source community, and there are many lessons to be learned from their experiences and practices. Additionally, there is much wisdom to be found in the results of research studies conducted on software engineering itself. Software Carpentry aims to bridge the gap between the current state of software development and these known best practices for scientific software development, with a focus on hands-on exercises and practical advice. In 2014, Software Carpentry workshops targeting earth/environmental sciences and hydrological modeling have been organized and run at the Massachusetts Institute of Technology, the US Army Corps of Engineers, the Community Surface Dynamics Modeling System Annual Meeting, and the Earth Science Information Partners Summer Meeting. In this presentation, we will share some of the successes in teaching this material, as well as discuss and present instructional material specific to hydrological modeling.

  10. A Study on Signal Group Processing of AUTOSAR COM Module

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Hwan; Hwang, Hyun Yong; Han, Tae Man; Ahn, Yong Hak

    2013-06-01

    In a vehicle there are many ECUs (Electronic Control Units), and the ECUs are connected to networks such as CAN, LIN, FlexRay, and so on. AUTOSAR COM (Communication), a software platform of AUTOSAR (AUTomotive Open System ARchitecture), the international industry standard for automotive electronic software, processes signals and signal groups for data communication between ECUs. Real-time behavior and reliability are very important for data communication in the vehicle. Therefore, in this paper, we analyze the functions for signals and signal groups used in COM, and show that the signal group functions are more efficient than individual signals in terms of real-time data synchronization and network resource usage between the sender and the receiver.

  11. Specvis: Free and open-source software for visual field examination.

    PubMed

    Dzwiniel, Piotr; Gola, Mateusz; Wójcik-Gryciuk, Anna; Waleszczyk, Wioletta J

    2017-01-01

    Visual field impairment affects more than 100 million people globally. However, due to the lack of access to appropriate ophthalmic healthcare in undeveloped regions as a result of associated costs and expertise, this number may be an underestimate. Improved access to affordable diagnostic software designed for visual field examination could slow the progression of diseases, such as glaucoma, allowing for early diagnosis and intervention. We have developed Specvis, a free and open-source application written in Java programming language that can run on any personal computer to meet this requirement (http://www.specvis.pl/). Specvis was tested on glaucomatous, retinitis pigmentosa and stroke patients and the results were compared to results using the Medmont M700 Automated Static Perimeter. The application was also tested for inter-test intrapersonal variability. The results from both validation studies indicated low inter-test intrapersonal variability, and suitable reliability for a fast and simple assessment of visual field impairment. Specvis easily identifies visual field areas of zero sensitivity and allows for evaluation of its levels throughout the visual field. Thus, Specvis is a new, reliable application that can be successfully used for visual field examination and can fill the gap between confrontation and perimetry tests. The main advantages of Specvis over existing methods are its availability (free), affordability (runs on any personal computer), and reliability (comparable to high-cost solutions).

  12. Specvis: Free and open-source software for visual field examination

    PubMed Central

    Dzwiniel, Piotr; Gola, Mateusz; Wójcik-Gryciuk, Anna

    2017-01-01

    Visual field impairment affects more than 100 million people globally. However, due to the lack of access to appropriate ophthalmic healthcare in undeveloped regions as a result of associated costs and expertise, this number may be an underestimate. Improved access to affordable diagnostic software designed for visual field examination could slow the progression of diseases, such as glaucoma, allowing for early diagnosis and intervention. We have developed Specvis, a free and open-source application written in Java programming language that can run on any personal computer to meet this requirement (http://www.specvis.pl/). Specvis was tested on glaucomatous, retinitis pigmentosa and stroke patients and the results were compared to results using the Medmont M700 Automated Static Perimeter. The application was also tested for inter-test intrapersonal variability. The results from both validation studies indicated low inter-test intrapersonal variability, and suitable reliability for a fast and simple assessment of visual field impairment. Specvis easily identifies visual field areas of zero sensitivity and allows for evaluation of its levels throughout the visual field. Thus, Specvis is a new, reliable application that can be successfully used for visual field examination and can fill the gap between confrontation and perimetry tests. The main advantages of Specvis over existing methods are its availability (free), affordability (runs on any personal computer), and reliability (comparable to high-cost solutions). PMID:29028825

  13. Reusable Rack Interface Controller Common Software for Various Science Research Racks on the International Space Station

    NASA Technical Reports Server (NTRS)

    Lu, George C.

    2003-01-01

    The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide needed flexibility for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society Motion Picture Television Engineers)-170M video, and audio interfaces to payloads and the ISS. As a cost saving and software reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS232, RS422, Ethernet buses, HRDL (High Rate Data Link), video switch functionality, telemetry processing, and executive software hosted on the FUC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) Significant reduction in development flow time; 2) Minimal rework and maintenance; 3) Improved reliability; and 4) Overall reduction in software life cycle cost. Due to the limited number of crew hours available on ISS for science research, operational efficiency is a critical customer concern. The current method of upgrading RIC software is a time consuming process; thus, an improved methodology for uploading RIC software is currently under evaluation.

  14. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples and data from different methods of liquid chromatography - high resolution mass spectrometry and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .

  15. Reliability of skeletal maturity analysis using the cervical vertebrae maturation method on dedicated software.

    PubMed

    Padalino, Saverio; Sfondrini, Maria Francesca; Chenuil, Laura; Scudeller, Luigia; Gandini, Paola

    2014-12-01

    The aim of this study was to assess the feasibility of skeletal maturation analysis using the Cervical Vertebrae Maturation (CVM) method by means of dedicated software, developed in collaboration with Outside Format (Paullo-Milan), as compared with manual analysis. From a sample of patients aged 7-21 years, we gathered 100 lateral cephalograms, 20 for each of the five CVM stages. For each cephalogram, we traced cervical vertebrae C2, C3 and C4 by hand using a lead pencil and an acetate sheet and dedicated software. All the tracings were made by an experienced operator (a dentofacial orthopedics resident) and by an inexperienced operator (a student in dental surgery). Each operator recorded the time needed to make each tracing in order to demonstrate differences in the times taken. Concordance between the manual analysis and the analysis performed using the dedicated software was 94% for the resident and 93% for the student. Interobserver concordance was 99%. The hand-tracing was quicker than that performed by means of the software (28 seconds more on average). The cervical vertebrae analysis software offers excellent clinical performance, even if the method takes longer than the manual technique. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  16. Computer-Aided Design for Built-In-Test (CADBIT) - Software Specification. Volume 3

    DTIC Science & Technology

    1989-10-01

    [OCR fragment from the report; recoverable content: Figure 3-13, tutorial figure placement in the CADBIT environment for on-board self-test; a note that a software package for reliability calculation is available; and a Library Element Data Sheet for the BIT technique "On-Board ROM", category Long, tutorial page 5 of 14.]

  17. A Review of Software Maintenance Technology.

    DTIC Science & Technology

    1980-02-01

    [OCR fragment from the report; recoverable content:] Maintenance Experience — (1) Multiple Implementation: Charles Holmes (Source 2) described two attempts at McDonnell ... a proprietary software monitor package distributed by Boole and Babbage, Inc., Sunnyvale, California. It has been implemented on IBM computers and is language...

  18. Development of a video-simulation instrument for assessing cognition in older adults.

    PubMed

    Ip, Edward H; Barnard, Ryan; Marshall, Sarah A; Lu, Lingyi; Sink, Kaycee; Wilson, Valerie; Chamberlain, Dana; Rapp, Stephen R

    2017-12-06

    Commonly used methods to assess cognition, such as direct observation, self-report, or neuropsychological testing, have significant limitations. Therefore, a novel tablet computer-based video simulation was created with the goal of being valid, reliable, and easy to administer. The design and implementation of the SIMBAC (Simulation-Based Assessment of Cognition) instrument is described in detail, as well as informatics "lessons learned" during development. The software emulates 5 common instrumental activities of daily living (IADLs) and scores participants' performance. The modules were chosen by a panel of geriatricians based on relevance to daily functioning and ability to be modeled electronically, and included facial recognition, pairing faces with the correct names, filling a pillbox, using an automated teller machine (ATM), and automatic renewal of a prescription using a telephone. Software development included three phases: 1) a period of initial design and testing (alpha version), 2) a pilot study with 10 cognitively normal and 10 cognitively impaired adults over the age of 60 (beta version), and 3) a larger validation study with 162 older adults of mixed cognitive status (release version). Results of the pilot study are discussed in the context of refining the instrument; full results of the validation study are reported in a separate article. In both studies, SIMBAC reliably differentiated controls from persons with cognitive impairment, and performance was highly correlated with Mini Mental Status Examination (MMSE) score. Several informatics challenges emerged during software development, which are broadly relevant to the design and use of electronic assessment tools. Solutions to these issues, such as protection of subject privacy and safeguarding against data loss, are discussed in depth. Collection of fine-grained data (highly detailed information such as time spent reading directions and the number of taps on screen) is also considered. SIMBAC provides clinicians direct insight into whether subjects can successfully perform selected cognitively intensive activities essential for independent living and advances the field of cognitive assessment. Insight gained from the development process could inform other researchers who seek to develop software tools in health care.

  19. Shuttle/Agena study. Annex A: Ascent agena configuration

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Details are presented on the Agena rocket vehicle description, vehicle interfaces, environmental constraints and test requirements, software programs, and ground support equipment. The basic design concept for the Ascent Agena is identified as optimization of reliability, flexibility, performance capabilities, and economy through the use of tested and flight-proven hardware. The development history of the Agenas A, B, and D is outlined and space applications are described.

  20. Software for Improved Extraction of Data From Tape Storage

    NASA Technical Reports Server (NTRS)

    Cheng, Chiu-Fu

    2003-01-01

    A computer program has been written to replace the original software of Racal Storeplex Delta tape recorders, which are used at Stennis Space Center. The original software could be activated by a command- line interface only; the present software offers the option of a command-line or graphical user interface. The present software also offers the option of batch-file operation (activation by a file that contains command lines for operations performed consecutively). The present software is also more reliable than was the original software: The original software was plagued by several deficiencies that made it difficult to execute, modify, and test. In addition, when using the original software to extract data that had been recorded within specified intervals of time, the resolution with which one could control starting and stopping times was no finer than about a second (or, in some cases, several seconds). In contrast, the present software is capable of controlling playback times to within 1/100 second of times specified by the user, assuming that the tape-recorder clock is accurate to within 1/100 second.

  1. Software for Improved Extraction of Data From Tape Storage

    NASA Technical Reports Server (NTRS)

    Cheng, Chiu-Fu

    2002-01-01

    A computer program has been written to replace the original software of Racal Storeplex Delta tape recorders, which are still used at Stennis Space Center but have been discontinued by the manufacturer. Whereas the original software could be activated by a command-line interface only, the present software offers the option of a command-line or graphical user interface. The present software also offers the option of batch-file operation (activation by a file that contains command lines for operations performed consecutively). The present software is also more reliable than was the original software: The original software was plagued by several deficiencies that made it difficult to execute, modify, and test. In addition, when using the original software to extract data that had been recorded within specified intervals of time, the resolution with which one could control starting and stopping times was no finer than about a second (or, in some cases, several seconds). In contrast, the present software is capable of controlling playback times to within 1/100 second of times specified by the user, assuming that the tape-recorder clock is accurate to within 1/100 second.

  2. Application of majority voting and consensus voting algorithms in N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    N-version programming is one of the most common techniques used to improve the reliability of software by building in fault tolerance and redundancy and by decreasing common-cause failures. N different equivalent software versions are developed by N different and isolated workgroups working from the same software specification. The versions solve the same task and return results that have to be compared to determine the correct result. The decisions of the N different versions are evaluated by a voting algorithm, the so-called voter. In this paper, two of the most commonly used software voting algorithms, the majority voting algorithm and the consensus voting algorithm, are studied. The distinctive features of N-version programming with majority voting and N-version programming with consensus voting are described. These two algorithms make a decision about the correct result on the basis of the agreement matrix. However, if the equivalence relation on the agreement matrix is not satisfied, it is impossible to make a decision. It is shown that the agreement matrix can be transformed into an appropriate form, in which the equivalence relation is satisfied, by using Boolean compositions.
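
    A minimal sketch of the majority-voting idea for N-version outputs is shown below; it uses invented version outputs and a plain vote count, not the agreement-matrix formulation studied in the paper. The voter returns a result only if more than half of the versions agree on it.

        from collections import Counter

        def majority_vote(outputs):
            """Return the value produced by a strict majority of versions, or None if there is none."""
            counts = Counter(outputs)
            value, votes = counts.most_common(1)[0]
            return value if votes > len(outputs) / 2 else None

        # Hypothetical outputs from three independently developed versions for the same input.
        print(majority_vote([42, 42, 41]))   # 42 -> two of three versions agree
        print(majority_vote([42, 41, 40]))   # None -> no majority, the voter cannot decide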

  3. Reliability analysis for digital adolescent idiopathic scoliosis measurements.

    PubMed

    Kuklo, Timothy R; Potter, Benjamin K; O'Brien, Michael F; Schroeder, Teresa M; Lenke, Lawrence G; Polly, David W

    2005-04-01

    Analysis of adolescent idiopathic scoliosis (AIS) requires a thorough clinical and radiographic evaluation to completely assess the three-dimensional deformity. Recently, these radiographic parameters have been analyzed for reliability and reproducibility following manual measurements; however, most of these parameters have not been analyzed with regard to digital measurements. The purpose of this study is to determine the intra- and interobserver reliability of common scoliosis radiographic parameters using a digital software measurement program. Thirty sets of preoperative (posteroanterior [PA], lateral, and side-bending [SB]) and postoperative (PA and lateral) radiographs were analyzed by three independent observers on two separate occasions using a software measurement program (PhDx, Albuquerque, NM). Coronal measures included main thoracic (MT) and thoracolumbar-lumbar (TL/L) Cobb, SB MT Cobb, MT and TL/L apical vertical translation (AVT), C7 to center sacral vertical line (CSVL), T1 tilt, LIV tilt, disk below lowest instrumented vertebra (LIV), coronal balance, and Risser, whereas sagittal measures included T2-T5, T5-T12, T2-T12, T10-L2, T12-S1, and sagittal balance. Analysis of variance for repeated measures or Cohen three-way kappa correlation coefficient analysis was performed as appropriate to calculate the intra- and interobserver reliability for each parameter. The majority of the radiographic parameters assessed demonstrated good or excellent intra- and interobserver reliability. The relationship of the LIV to the CSVL (intraobserver kappa = 0.48-0.78, fair to excellent; interobserver kappa = 0.34-0.41, fair to poor), interobserver measurement of AVT (rho = 0.49-0.73, low to good), Risser grade (intraobserver rho = 0.41-0.97, low to excellent; interobserver rho = 0.60-0.70, fair to good), intraobserver measurement of the angulation of the disk inferior to the LIV (rho = 0.53-0.88, fair to good), apical Nash-Moe vertebral rotation (intraobserver rho = 0.50-0.85, fair to good; interobserver rho = 0.53-0.59, fair), and especially regional thoracic kyphosis from T2 to T5 (intraobserver rho = 0.22-0.65, poor to fair; interobserver rho = 0.33-0.47, low) demonstrated lesser reliability. In general, preoperative measures demonstrated greater reliability than postoperative measures, and coronal angular measures were more reliable than sagittal measures. Most common radiographic parameters for AIS assessment demonstrated good or excellent reliability for digital measurement and can be recommended for routine clinical and academic use. Preoperative assessments and coronal measures may be more reliable than postoperative and sagittal measurements. The reliability of digital measurements will be increasingly important as digital radiographic viewing becomes commonplace.

  4. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two part effort, that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties, peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
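
    The affine dependence on coverage can be seen in a textbook duplex model with constant failure rate lambda and coverage c (a standard formulation, not the specific Markov models of the report): R(t) = exp(-2*lambda*t) + 2*c*(exp(-lambda*t) - exp(-2*lambda*t)), which is affine in c for fixed t. A short numerical check with invented parameter values:

        import math

        def duplex_reliability(c, failure_rate, hours):
            """Textbook duplex model with coverage c: R(t) = exp(-2lt) + 2c(exp(-lt) - exp(-2lt))."""
            both_up = math.exp(-2 * failure_rate * hours)
            one_up = math.exp(-failure_rate * hours)
            return both_up + 2 * c * (one_up - both_up)

        failure_rate, hours = 1e-4, 1000.0   # hypothetical values
        for c in (0.90, 0.99, 0.999, 1.0):
            print(f"coverage {c:>5}: R = {duplex_reliability(c, failure_rate, hours):.6f}")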

  5. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by: improving software reliability and safety; and broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology for: Environments such as Engineering Script Language/Parts Composition System (ESL/PCS) application generator, Intelligent User Interface for cost avoidance in setting up operational computer runs, Framework programmable platform for defining process and software development work flow control, Process for bringing CASE technology into an organization's culture, and CLIPS/CLIPS Ada language for developing expert systems; and methodologies such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.

  6. Software Construction and Analysis Tools for Future Space Missions

    NASA Technical Reports Server (NTRS)

    Lowry, Michael R.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    NASA and its international partners will increasingly depend on software-based systems to implement advanced functions for future space missions, such as Martian rovers that autonomously navigate long distances exploring geographic features formed by surface water early in the planet's history. The software-based functions for these missions will need to be robust and highly reliable, raising significant challenges in the context of recent Mars mission failures attributed to software faults. After reviewing these challenges, this paper describes tools that have been developed at NASA Ames that could contribute to meeting these challenges: 1) Program synthesis tools based on automated inference that generate documentation for manual review and annotations for automated certification. 2) Model-checking tools for concurrent object-oriented software that achieve scalability through synergy with program abstraction and static analysis tools.

  7. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  8. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    Prouty, Dale A.; Klahr, Philip

    1988-01-01

    A workstation is being developed that provides a computational environment for all NASA engineers across application boundaries, which automates reuse of existing NASA software and designs, and efficiently and effectively allows new programs and/or designs to be developed, catalogued, and reused. The generic workstation is made domain specific by specialization of the user interface, capturing engineering design expertise for the domain, and by constructing/using a library of pertinent information. The incorporation of software reusability principles and expert system technology into this workstation provides the obvious benefits of increased productivity, improved software use and design reliability, and enhanced engineering quality by bringing engineering to higher levels of abstraction based on a well tested and classified library.

  9. A novel method to estimate the volume of bone defects using cone-beam computed tomography: an in vitro study.

    PubMed

    Esposito, Stefano Andrea; Huybrechts, Bart; Slagmolen, Pieter; Cotti, Elisabetta; Coucke, Wim; Pauwels, Ruben; Lambrechts, Paul; Jacobs, Reinhilde

    2013-09-01

    The routine use of high-resolution images derived from 3-dimensional cone-beam computed tomography (CBCT) datasets enables the linear measurement of lesions in the maxillary and mandibular bones on 3 planes of space. Measurements on different planes would make it possible to obtain real volumetric assessments. In this study, we tested, in vitro, the accuracy and reliability of new dedicated software developed for volumetric lesion assessment in clinical endodontics. Twenty-seven bone defects were created around the apices of 8 teeth in 1 young bovine mandible to simulate periapical lesions of different sizes and shapes. The volume of each defect was determined by taking an impression of the defect using a silicone material. The samples were scanned using an Accuitomo 170 CBCT (J. Morita Mfg Co, Kyoto, Japan), and the data were uploaded into a newly developed dedicated software tool. Two endodontists acted as independent and calibrated observers. They analyzed each bone defect for volume. The difference between the direct volumetric measurements and the measurements obtained with the CBCT images was statistically assessed using a lack-of-fit test. A correlation study was undertaken using the Pearson product-moment correlation coefficient. Intra- and interobserver agreement was also evaluated. The results showed a good fit and strong correlation between both volume measurements (ρ > 0.9) with excellent inter- and intraobserver agreement. Using this software, CBCT proved to be a reliable method in vitro for the estimation of endodontic lesion volumes in bovine jaws. Therefore, it may constitute a new, validated technique for the accurate evaluation and follow-up of apical periodontitis. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  10. An optimized video system for augmented reality in endodontics: a feasibility study.

    PubMed

    Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P

    2013-03-01

    We propose an augmented reality system for the reliable detection of root canals in video sequences based on a k-nearest neighbor color classification and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The resulting software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification with a software system. Automatic storage of location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations will be derived, which can improve anatomical knowledge of the teeth to alleviate root canal detection in the future. For this purpose, the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
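
    As a rough illustration of the k-nearest neighbor color classification the abstract describes (the original implementation is in C++ with OpenCV; the toy colors, labels, and pixel values below are assumptions for illustration only), a pixel can be assigned to a "canal" or "dentin" class by a majority vote among its nearest training colors.

    ```python
    import numpy as np

    def knn_classify_pixels(pixels, train_colors, train_labels, k=5):
        """Classify each RGB pixel by majority vote among its k nearest
        training colors (Euclidean distance in RGB space)."""
        # pixels: (N, 3); train_colors: (M, 3); train_labels: (M,)
        d = np.linalg.norm(pixels[:, None, :] - train_colors[None, :, :], axis=2)
        nearest = np.argsort(d, axis=1)[:, :k]      # indices of the k nearest samples
        votes = train_labels[nearest]                # (N, k) matrix of labels
        return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

    # Toy training set: label 1 = dark canal orifice, label 0 = brighter dentin
    train_colors = np.array([[30, 25, 20], [40, 30, 25], [200, 180, 160], [220, 200, 180]], float)
    train_labels = np.array([1, 1, 0, 0])
    frame = np.array([[35, 28, 22], [210, 190, 170]], float)  # two pixels from a video frame
    print(knn_classify_pixels(frame, train_colors, train_labels, k=3))  # -> [1 0]
    ```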

  11. The effect of Cardiac Arrhythmias Simulation Software on the nurses' learning and professional development.

    PubMed

    Bazrafkan, Leila; Hemmati, Mehdi

    2018-04-01

    One of the important tasks of nurses in the intensive care unit is interpretation of the ECG. The use of a training simulator is a new paradigm in the age of computers. This study was performed to evaluate the impact of cardiac arrhythmia simulator software on nurses' learning at the subspecialty Vali-Asr Hospital in 2016. The study was conducted as a quasi-experimental randomized Solomon four-group design with the participation of 120 nurses at the subspecialty Vali-Asr Hospital in Tehran, Iran in 2016, who were selected purposefully and allocated to 4 groups. The Solomon four-group design controlled for confounding factors such as prior information, maturation, and the effects of sex and age. A valid and reliable multiple-choice test was used to gather information; the validity of the test was approved by experts and its reliability was established with a Cronbach's alpha coefficient of 0.89. At first, the knowledge and skills of the participants were assessed by a pre-test; following the educational intervention with the cardiac arrhythmia simulator software over 14 days in ICUs, the same factors were measured again by a post-test in the four groups. Data were analyzed using two-way ANOVA. The significance level was set at p<0.05. Based on the randomized four-group Solomon design and our test results, using the cardiac arrhythmia simulator software as an intervention was effective for the nurses' learning, since a significant difference was found between pre-test and post-test in the first group (p<0.05). Also, other comparisons by ANOVA showed that there was no interaction between pre-test and intervention in any of the three knowledge areas of cardiac arrhythmias, their treatment, and their diagnosis (p>0.05). The software-based simulator for cardiac arrhythmias was effective for nurses' learning in light of its attractive components and interactive method. The intervention increased the nurses' knowledge in the cognitive domain of cardiac arrhythmias as well as their diagnosis and treatment. The package can also be used for training in other areas such as continuing medical education.
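
    For readers unfamiliar with the reliability coefficient quoted above, the following sketch shows how a Cronbach's alpha could be computed from a respondents-by-items score matrix; the toy scores are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a (respondents x items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    # Toy data: 5 nurses answering a 4-item arrhythmia knowledge test (1 = correct)
    scores = [[1, 1, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 1, 0, 0]]
    print(round(cronbach_alpha(scores), 2))
    ```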

  12. Computerized Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency.

    PubMed

    Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki

    2017-12-01

    The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system showed reliably accurate bone age estimations and appeared to enhance efficiency by reducing reading times without compromising diagnostic accuracy.

  13. Aerospace Safety Advisory Panel

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of the Panel's activities are presented in a set of findings and recommendations. Highlighted here are both improvements in NASA's safety and reliability activities and specific areas where additional gains might be realized. One area of particular concern involves the curtailment or elimination of Space Shuttle safety and reliability enhancements. Several findings and recommendations address this area of concern, reflecting the opinion that safety and reliability enhancements are essential to the continued successful operation of the Space Shuttle. It is recommended that a comprehensive and continuing program of safety and reliability improvements in all areas of Space Shuttle hardware/software be considered an inherent component of ongoing Space Shuttle operations.

  14. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  15. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high-latency communications links to test how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements. Software changes were suggested to meet requirements.

  16. Three-Dimensional (3D) Nanometrology Based on Scanning Electron Microscope (SEM) Stereophotogrammetry.

    PubMed

    Tondare, Vipin N; Villarrubia, John S; Vladár, András E

    2017-10-01

    Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.

  17. Development of preoperative planning software for transforaminal endoscopic surgery and the guidance for clinical applications.

    PubMed

    Chen, Xiaojun; Cheng, Jun; Gu, Xin; Sun, Yi; Politis, Constantinus

    2016-04-01

    Preoperative planning is of great importance for transforaminal endoscopic techniques applied in percutaneous endoscopic lumbar discectomy. In this study, modular preoperative planning software for transforaminal endoscopic surgery was developed and demonstrated. The path-searching method is based on collision detection, and oriented bounding boxes were constructed for the anatomical models. Then, image reformatting algorithms were developed for multiplanar reconstruction, which provides detailed anatomical information surrounding the virtual planned path. Finally, multithreading was implemented to realize a steady-state condition of the software. A preoperative planning software package for transforaminal endoscopic surgery (TE-Guider) was developed; seven cases of patients with symptomatic lumbar disc herniations were planned preoperatively using TE-Guider. The distances to the midlines and the directions of the optimal paths were exported, and each result was in line with empirical values. TE-Guider provides an efficient and cost-effective way to search for the ideal path and entry point for the puncture. However, more clinical cases will be needed to demonstrate its feasibility and reliability.
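
    The collision-detection step can be illustrated with a standard separating-axis overlap test between two oriented bounding boxes; this is a generic textbook formulation, not TE-Guider's actual code, and the box dimensions below are hypothetical.

    ```python
    import numpy as np

    def obb_overlap(c1, R1, e1, c2, R2, e2, eps=1e-9):
        """Separating-axis test for two oriented bounding boxes (OBBs).
        c: center (3,), R: 3x3 matrix whose columns are the box's local axes,
        e: half-extents along those axes. Returns True if the boxes overlap."""
        axes = [R1[:, i] for i in range(3)] + [R2[:, i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                cross = np.cross(R1[:, i], R2[:, j])
                if np.linalg.norm(cross) > eps:      # skip near-parallel axis pairs
                    axes.append(cross / np.linalg.norm(cross))
        d = c2 - c1
        for axis in axes:
            r1 = sum(e1[i] * abs(axis @ R1[:, i]) for i in range(3))
            r2 = sum(e2[i] * abs(axis @ R2[:, i]) for i in range(3))
            if abs(axis @ d) > r1 + r2:              # found a separating axis
                return False
        return True

    # A candidate endoscope path segment (thin, long box) versus an anatomical bounding box
    I = np.eye(3)
    print(obb_overlap(np.zeros(3), I, np.array([0.5, 0.5, 5.0]),
                      np.array([0.0, 2.0, 0.0]), I, np.array([1.0, 1.0, 1.0])))  # False
    ```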

  18. Numerical aerodynamic simulation facility. Preliminary study extension

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished by effort in the following tasks: (1) to further develop, optimize, and describe the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications, including a user-level high-level programming language and a correspondence between the programming language and the instruction set, and to outline the operating system requirements.

  19. Software Voting in Asynchronous NMR (N-Modular Redundancy) Computer Structures.

    DTIC Science & Technology

    1983-05-06

    added reliability is exchanged for increased system cost and decreased throughput. Some applications require extremely reliable systems, so the only...not the other way around. Although no systems provide abstract voting yet, as more applications are written for NMR systems, the programmers are going...throughput goes down, the overhead goes up. Mathematically: Overhead = Nonredundant Throughput - Actual Throughput (1). In this section, the actual throughput...

  20. Integrating High-Reliability Principles to Transform Access and Throughput by Creating a Centralized Operations Center.

    PubMed

    Davenport, Paul B; Carter, Kimberly F; Echternach, Jeffrey M; Tuck, Christopher R

    2018-02-01

    High-reliability organizations (HROs) demonstrate unique and consistent characteristics, including operational sensitivity and control, situational awareness, hyperacute use of technology and data, and actionable process transformation. System complexity and reliance on information-based processes challenge healthcare organizations to replicate HRO processes. This article describes a healthcare organization's 3-year journey to achieve key HRO features to deliver high-quality, patient-centric care via an operations center powered by the principles of high-reliability data and software to impact patient throughput and flow.

  1. Care 3, Phase 1, volume 1

    NASA Technical Reports Server (NTRS)

    Stiffler, J. J.; Bryant, L. A.; Guccione, L.

    1979-01-01

    A computer program to aid in assessing the reliability of fault tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics, and subsequent isolation and recovery delay statistics.
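
    The flavor of such a "simple mathematical expression" can be illustrated with the generic k-of-n reliability formula for a redundant configuration; this is a textbook expression with illustrative failure-rate numbers and perfect coverage assumed, not the CARE 3 model itself.

    ```python
    from math import comb, exp

    def k_of_n_reliability(k, n, unit_rel):
        """Reliability of an n-unit redundant configuration that works while
        at least k units survive (perfect coverage assumed)."""
        return sum(comb(n, i) * unit_rel**i * (1 - unit_rel)**(n - i) for i in range(k, n + 1))

    # Triple modular redundancy (2-of-3 voting) over a 10-hour interval;
    # constant unit failure rate 1e-4 per hour, so unit reliability is exp(-lambda*t).
    r_unit = exp(-1e-4 * 10)
    print(k_of_n_reliability(2, 3, r_unit))   # exceeds r_unit for highly reliable units
    ```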

  2. Toward Reliable and Energy Efficient Wireless Sensing for Space and Extreme Environments

    NASA Technical Reports Server (NTRS)

    Choi, Baek-Young; Boyd, Darren; Wilkerson, DeLisa

    2017-01-01

    Reliability is the critical challenge of wireless sensing in space systems operating in extreme environments. Energy efficiency is another concern for battery-powered wireless sensors. Considering the physics of wireless communications, we propose an approach called Software-Defined Wireless Communications (SDC) that dynamically decides on a reliable channel (or channels), avoiding unnecessary channel redundancy, out of multiple distinct electromagnetic frequency bands such as radio and infrared frequencies. We validate the concept with Android and Raspberry Pi sensors and pseudo-extreme experiments. SDC can be utilized in many areas beyond space applications.
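
    A minimal sketch of the channel-selection idea follows; the band names, delivery ratios, and threshold are illustrative assumptions, not the SDC implementation.

    ```python
    def pick_channels(stats, min_success=0.98):
        """Pick the band with the best recent delivery ratio; return more than one
        band (adding redundancy) only if no single band meets the threshold."""
        ranked = sorted(stats.items(), key=lambda kv: kv[1], reverse=True)
        best_band, best_ratio = ranked[0]
        if best_ratio >= min_success:
            return [best_band]                    # one reliable channel is enough
        return [band for band, _ in ranked[:2]]   # fall back to redundant channels

    # Recent per-band delivery ratios measured by the sensor node (illustrative numbers)
    print(pick_channels({"radio_2.4GHz": 0.92, "radio_900MHz": 0.99, "infrared": 0.85}))
    print(pick_channels({"radio_2.4GHz": 0.90, "radio_900MHz": 0.93, "infrared": 0.85}))
    ```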

  3. Overview of Probabilistic Methods for SAE G-11 Meeting for Reliability and Uncertainty Quantification for DoD TACOM Initiative with SAE G-11 Division

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting during October 6-8 at the Best Western Sterling Inn, Sterling Heights (Detroit), Michigan is co-sponsored by US Army Tank-automotive & Armaments Command (TACOM). The meeting will provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members including members with national/international standing, the mission of the G-11's Probabilistic Methods Committee is to "enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries by better, faster, greener, smarter, affordable and reliable product development."

  4. Software Reuse Within the Earth Science Community

    NASA Technical Reports Server (NTRS)

    Marshall, James J.; Olding, Steve; Wolfe, Robert E.; Delnore, Victor E.

    2006-01-01

    Scientific missions in the Earth sciences frequently require cost-effective, highly reliable, and easy-to-use software, which can be a challenge for software developers to provide. The NASA Earth Science Enterprise (ESE) spends a significant amount of resources developing software components and other software development artifacts that may also be of value if reused in other projects requiring similar functionality. In general, software reuse is often defined as utilizing existing software artifacts. Software reuse can improve productivity and quality while decreasing the cost of software development, as documented by case studies in the literature. Since large software systems are often the results of the integration of many smaller and sometimes reusable components, ensuring reusability of such software components becomes a necessity. Indeed, designing software components with reusability as a requirement can increase the software reuse potential within a community such as the NASA ESE community. The NASA Earth Science Data Systems (ESDS) Software Reuse Working Group is chartered to oversee the development of a process that will maximize the reuse potential of existing software components while recommending strategies for maximizing the reusability potential of yet-to-be-designed components. As part of this work, two surveys of the Earth science community were conducted. The first was performed in 2004 and distributed among government employees and contractors. A follow-up survey was performed in 2005 and distributed among a wider community, to include members of industry and academia. The surveys were designed to collect information on subjects such as the current software reuse practices of Earth science software developers, why they choose to reuse software, and what perceived barriers prevent them from reusing software. In this paper, we compare the results of these surveys, summarize the observed trends, and discuss the findings. The results are very similar, with the second, larger survey confirming the basic results of the first, smaller survey. The results suggest that reuse of ESE software can drive down the cost and time of system development, increase flexibility and responsiveness of these systems to new technologies and requirements, and increase effective and accountable community participation.

  5. Reliability Testing Using the Vehicle Durability Simulator

    DTIC Science & Technology

    2017-11-20

    remote parameter control (RPC) software. The software is specifically designed for the data collection, analysis, and simulation processes outlined in...4516. 3. TOP 02-2-505 Inspection and Preliminary Operation of Vehicles, 4 February 1987. 4. Multi-Shaker Test and Control: Design, Test, and...

  6. CrossTalk: The Journal of Defense Software Engineering. Volume 22, Number 6, September/October 2009

    DTIC Science & Technology

    2009-10-01

    software to improve the reliability, sustainability, and responsiveness of our warfighting capability. ... Authors will use this section to share evidence of a demonstrative return on investment, process improvement, quality improvement, reductions to schedule...

  7. Wireless Orbiter Hang-Angle Inclinometer System

    NASA Technical Reports Server (NTRS)

    Lucena, Angel; Perotti, Jose; Green, Eric; Byon, Jonathan; Burns, Bradley; Mata, Carlos; Randazzo, John; Blalock, Norman

    2011-01-01

    A document describes a system to reliably gather the hang-angle inclination of the orbiter. The system comprises a wireless handheld master station (which contains the main station software) and a wireless remote station (which contains the inclinometer sensors, the RF transceivers, and the remote station software). The remote station is designed to provide redundancy to the system. It includes two RF transceivers, two power-management boards, and four inclinometer sensors.

  8. Talking Back: Weapons, Warfare, and Feedback

    DTIC Science & Technology

    2010-04-01

    realize that these laws are not laws of physics. They don’t allow for performance or effectiveness comparisons either, as they don’t have a common...the weapon’s next software update. Software updates are done by physical connections, as with most legacy systems, as well as by secure data link...Generally, the land-based Air Force squadrons use physical connections due to the increased reliability, while sea-based squadrons use the wireless...

  9. Automated interpretation of home blood pressure assessment (Hy-Result software) versus physician's assessment: a validation study.

    PubMed

    Postel-Vinay, Nicolas; Bobrie, Guillaume; Ruelland, Alan; Oufkir, Majida; Savard, Sebastien; Persu, Alexandre; Katsahian, Sandrine; Plouin, Pierre F

    2016-04-01

    Hy-Result is the first software for self-interpretation of home blood pressure measurement results, taking into account both the recommended thresholds for normal values and patient characteristics. We compare the software-generated classification with the physician's evaluation. The primary assessment criterion was whether the algorithm's classification of blood pressure (BP) status concurred with the advice of the physician (blinded to the software's results) following a consultation (n=195 patients). The secondary assessment was the reliability of the text messages. In the 58 untreated patients, the agreement between the BP status classification generated by the software and the physician's classification was 87.9%. In the 137 treated patients, the agreement was 91.9%. The κ statistic for all patients was 0.81 (95% confidence interval: 0.73-0.89). After correction of errors identified in the algorithm during the study, agreement increased to 95.4% [κ=0.9 (95% confidence interval: 0.84-0.97)]. For 100% of the patients with comorbidities (n=46), specific text messages were generated, indicating that a physician might recommend a target BP lower than 135/85 mmHg. Specific text messages were also generated for 100% of the patients whose global cardiovascular risks markedly exceeded norms. Classification by Hy-Result is at least as accurate as that of a specialist in current practice (http://www.hy-result.com).
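
    For reference, the κ statistic reported above measures agreement beyond chance between two classifications; the following sketch computes Cohen's kappa from paired labels (the labels shown are illustrative, not the study's data).

    ```python
    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two raters assigning categorical labels
        (here: software vs. physician BP-status classifications)."""
        labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
        cats = np.union1d(labels_a, labels_b)
        p_obs = np.mean(labels_a == labels_b)                       # observed agreement
        p_exp = sum(np.mean(labels_a == c) * np.mean(labels_b == c) for c in cats)  # chance agreement
        return (p_obs - p_exp) / (1.0 - p_exp)

    software  = ["high", "normal", "high", "normal", "normal", "high", "normal", "normal"]
    physician = ["high", "normal", "high", "high",   "normal", "high", "normal", "normal"]
    print(round(cohens_kappa(software, physician), 2))   # -> 0.75 for this toy example
    ```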

  10. Structural reliability methods: Code development status

    NASA Astrophysics Data System (ADS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  11. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
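
    To give a feel for what such a probabilistic analysis computes, the sketch below estimates a probability of failure for a simple resistance-versus-stress limit state by brute-force Monte Carlo; the distributions and parameters are illustrative assumptions, and NESSUS/FPI relies on far more efficient fast probability integration rather than this sampling approach.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Limit-state g = resistance - stress; failure when g < 0.
    # Uncertain inputs (illustrative lognormal/normal parameters, not SSME data).
    n = 1_000_000
    resistance = rng.lognormal(mean=np.log(500.0), sigma=0.05, size=n)   # MPa
    stress = rng.normal(loc=400.0, scale=25.0, size=n)                   # MPa

    p_failure = np.mean(resistance - stress < 0.0)
    print(f"Estimated probability of failure: {p_failure:.2e}")
    # Fast probability integration reaches comparable answers with far fewer
    # limit-state evaluations, which matters when each evaluation is a FEM run.
    ```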

  12. CCSDS File Delivery Protocol (CFDP): Why it's Useful and How it Works

    NASA Technical Reports Server (NTRS)

    Ray, Tim

    2003-01-01

    Reliable delivery of data products is often required across space links. For example, a NASA mission will require reliable delivery of images produced by an on-board detector. Many missions have their own (unique) way of accomplishing this, requiring custom software. Many missions also require manual operations (e.g. the telemetry receiver software keeps track of what data is missing, and a person manually inputs the appropriate commands to request retransmissions). The Consultative Committee for Space Data Systems (CCSDS) developed the CCSDS File Delivery Protocol (CFDP) specifically for this situation. CFDP is an international standard communication protocol that provides reliable delivery of data products. It is designed for use across space links. It will work well if run over the widely used CCSDS Telemetry and Telecommand protocols. However, it can be run over any protocol, and will work well as long as the underlying protocol delivers a reasonable portion of the data. The CFDP receiver will autonomously determine what data is missing, and request retransmissions as needed. The CFDP sender will autonomously perform the requested transmissions. When the entire data product is delivered, the CFDP receiver will let the CFDP sender know that the transaction has completed successfully. The result is that custom software becomes standard, and manual operations become autonomous. This paper will consider various ways of achieving reliable file delivery, explain why CFDP is the optimal choice for use over space links, explain how the core protocol works, and give some guidance on how to best utilize CFDP within various mission scenarios. It will also touch on additional features of CFDP, as well as other uses for CFDP (e.g. the loading of on-board memory and tables).
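
    The core retransmission idea can be sketched in a few lines; this toy loop (segment sizes, loss probability, and the in-memory "link" are assumptions for illustration) mimics how a CFDP receiver tracks missing segments and requests only those, but it is not the CFDP protocol itself.

    ```python
    import random

    def deliver_file(segments, loss_prob=0.3, seed=1):
        """Toy NAK-based reliable delivery: the receiver records which segment
        numbers arrived, asks only for the missing ones, and finishes with an
        end-of-transaction acknowledgement (the pattern CFDP automates)."""
        rng = random.Random(seed)
        received = {}
        pending = list(range(len(segments)))
        rounds = 0
        while pending:
            rounds += 1
            for i in pending:                      # sender transmits the requested segments
                if rng.random() > loss_prob:       # segment survives the space link
                    received[i] = segments[i]
            pending = [i for i in range(len(segments)) if i not in received]  # receiver's NAK list
        data = b"".join(received[i] for i in range(len(segments)))
        return data, rounds                        # completion acts as the final acknowledgement

    data, rounds = deliver_file([b"img", b"_chunk_", b"42"])
    print(data, "delivered after", rounds, "round(s)")
    ```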

  13. Intra- and interobserver reliability of quantitative ultrasound measurement of the plantar fascia.

    PubMed

    Rathleff, Michael Skovdal; Moelgaard, Carsten; Lykkegaard Olesen, Jens

    2011-01-01

    To determine the intra- and interobserver reliability and measurement precision of sonographic assessment of plantar fascia thickness when using one, the mean of two, or the mean of three measurements. Two experienced observers scanned 20 healthy subjects twice with 60 minutes between test and retest. A GE LOGIQe ultrasound scanner was used in the study. The built-in software in the scanner was used to measure the thickness of the plantar fascia (PF). Reliability was calculated using the intraclass correlation coefficient (ICC) and limits of agreement (LOA). Intraobserver reliability (ICC) using one measurement was 0.50 for one observer and 0.52 for the other, and using the mean of three measurements intraobserver reliability increased to 0.77 and 0.67, respectively. Interobserver reliability (ICC) when using one measurement was 0.62 and increased to 0.82 when using the average of three measurements. When using the average of three measurements, the LOA decreased to 0.6 mm, corresponding to 17.5% of the mean thickness of the PF. The results showed that reliability increases when using the mean of three measurements compared with one. Limits of agreement based on intratester reliability show that changes in thickness larger than 0.6 mm can be considered actual changes in thickness and not a result of measurement error. Copyright © 2011 Wiley Periodicals, Inc.
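
    A short sketch of the limits-of-agreement calculation may be useful; the thickness values below are illustrative assumptions, not the study's measurements, and simply demonstrate how averaging three measurements narrows the LOA.

    ```python
    import numpy as np

    def limits_of_agreement(test, retest):
        """Bland-Altman 95% limits of agreement for paired test-retest values."""
        diffs = np.asarray(test, float) - np.asarray(retest, float)
        return diffs.mean() - 1.96 * diffs.std(ddof=1), diffs.mean() + 1.96 * diffs.std(ddof=1)

    # Plantar fascia thickness in mm; three repeated measurements per session (illustrative values)
    session1 = np.array([[3.4, 3.6, 3.5], [3.9, 4.1, 4.0], [3.1, 3.0, 3.2], [3.7, 3.8, 3.6]])
    session2 = np.array([[3.5, 3.5, 3.6], [4.0, 4.0, 4.2], [3.2, 3.1, 3.1], [3.6, 3.7, 3.8]])

    print(limits_of_agreement(session1[:, 0], session2[:, 0]))               # single measurement
    print(limits_of_agreement(session1.mean(axis=1), session2.mean(axis=1))) # mean of three
    # Averaging three measurements narrows the limits of agreement, mirroring the
    # 0.6 mm threshold reported in the study.
    ```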

  14. Software engineering with application-specific languages

    NASA Technical Reports Server (NTRS)

    Campbell, David J.; Barker, Linda; Mitchell, Deborah; Pollack, Robert H.

    1993-01-01

    Application-Specific Languages (ASL's) are small, special-purpose languages that are targeted to solve a specific class of problems. Using ASL's on software development projects can provide considerable cost savings, reduce risk, and enhance quality and reliability. ASL's provide a platform for reuse within a project or across many projects and enable less-experienced programmers to tap into the expertise of application-area experts. ASL's have been used on several software development projects for the Space Shuttle Program. On these projects, the use of ASL's resulted in considerable cost savings over conventional development techniques. Two of these projects are described.

  15. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won; Biesiadecki, Jeffrey; Ali, Khaled

    2008-01-01

    Visual target tracking (VTT) software has been incorporated into Release 9.2 of the Mars Exploration Rover (MER) flight software, now running aboard the rovers Spirit and Opportunity. In the VTT operation (see figure), the rover is driven in short steps between stops and, at each stop, still images are acquired by actively aimed navigation cameras (navcams) on a mast on the rover (see artistic rendition). The VTT software processes the digitized navcam images so as to track a target reliably and to make it possible to approach the target accurately to within a few centimeters over a 10-m traverse.

  16. Dependability modeling and assessment in UML-based software development.

    PubMed

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.

  17. Lessons learned in deploying software estimation technology and tools

    NASA Technical Reports Server (NTRS)

    Panlilio-Yap, Nikki; Ho, Danny

    1994-01-01

    Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.

  18. Quality Attributes for Mission Flight Software: A Reference for Architects

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan; Fesq, Lorraine; Dvorak, Dan

    2016-01-01

    In the international standards for architecture descriptions in systems and software engineering (ISO/IEC/IEEE 42010), "concern" is a primary concept that often manifests itself in relation to the quality attributes or "ilities" that a system is expected to exhibit - qualities such as reliability, security and modifiability. One of the main uses of an architecture description is to serve as a basis for analyzing how well the architecture achieves its quality attributes, and that requires architects to be as precise as possible about what they mean in claiming, for example, that an architecture supports "modifiability." This paper describes a table, generated by NASA's Software Architecture Review Board, which lists fourteen key quality attributes, identifies different important aspects of each quality attribute and considers each aspect in terms of requirements, rationale, evidence, and tactics to achieve the aspect. This quality attribute table is intended to serve as a guide to software architects, software developers, and software architecture reviewers in the domain of mission-critical real-time embedded systems, such as space mission flight software.

  19. Object oriented development of engineering software using CLIPS

    NASA Technical Reports Server (NTRS)

    Yoon, C. John

    1991-01-01

    Engineering applications involve numeric complexity and manipulations of a large amount of data. Traditionally, numeric computation has been the concern in developing an engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than the numeric complexity, has become the major software design problem. Object oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object oriented features are more versatile than C++'s. A software design methodology based on object oriented and procedural approaches appropriate for engineering software, and to be implemented in CLIPS was outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.

  20. A research review of quality assessment for software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

  1. Dependability Modeling and Assessment in UML-Based Software Development

    PubMed Central

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C.

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results. PMID:22988428

  2. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  3. Validity of the Microsoft Kinect for measurement of neck angle: comparison with electrogoniometry.

    PubMed

    Allahyari, Teimour; Sahraneshin Samani, Ali; Khalkhali, Hamid-Reza

    2017-12-01

    Considering the importance of evaluating working postures, many techniques and tools have been developed to identify and eliminate awkward postures and prevent musculoskeletal disorders (MSDs). The introduction of the Microsoft Kinect sensor, which is a low-cost, easy to set up and markerless motion capture system, offers promising possibilities for postural studies. Considering the Kinect's special ability in head-pose and facial-expression tracking and complexity of cervical spine movements, this study aimed to assess concurrent validity of the Microsoft Kinect against an electrogoniometer for neck angle measurements. A special software program was developed to calculate the neck angle based on Kinect skeleton tracking data. Neck angles were measured simultaneously by electrogoniometer and the developed software program in 10 volunteers. The results were recorded in degrees and the time required for each method was also measured. The Kinect's ability to identify body joints was reliable and precise. There was moderate to excellent agreement between the Kinect-based method and the electrogoniometer (paired-sample t test, p ≥ 0.25; intraclass correlation for test-retest reliability, ≥0.75). Kinect-based measurement was much faster and required less equipment, but accurate measurement with Microsoft Kinect was only possible if the participant was in its field of view.
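
    A minimal sketch of how a neck angle could be derived from skeleton joint positions follows; the joint names and coordinates are illustrative assumptions, and this is not the study's actual software.

    ```python
    import numpy as np

    def neck_angle(head, neck, spine):
        """Angle (degrees) between the neck-to-head vector and the spine-to-neck
        vector, computed from 3-D joint positions of a Kinect skeleton."""
        v1 = np.asarray(head, float) - np.asarray(neck, float)
        v2 = np.asarray(neck, float) - np.asarray(spine, float)
        cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # Joint coordinates in metres (illustrative, camera space: x right, y up, z forward)
    print(round(neck_angle(head=[0.02, 0.65, 2.05],   # head tilted slightly forward
                           neck=[0.00, 0.50, 2.00],
                           spine=[0.00, 0.20, 2.00]), 1))
    ```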

  4. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  5. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
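
    A small sketch of the slope correction follows; the simulated data and the simple variance-based estimate of the reliability ratio are illustrative assumptions, not the article's software tools.

    ```python
    import numpy as np

    def corrected_slope(x_main, y_main, x_rep1, x_rep2):
        """Correct a simple linear regression slope for regression dilution.
        The reliability ratio is estimated from two repeated measurements of the
        risk factor in a reliability study; corrected = observed / reliability."""
        observed = np.polyfit(x_main, y_main, 1)[0]
        within_var = np.var(np.asarray(x_rep1) - np.asarray(x_rep2), ddof=1) / 2.0
        total_var = np.var(np.concatenate([x_rep1, x_rep2]), ddof=1)
        reliability = 1.0 - within_var / total_var
        return observed / reliability

    rng = np.random.default_rng(42)
    true_x = rng.normal(0, 1, 500)
    y = 2.0 * true_x + rng.normal(0, 0.5, 500)                # true slope = 2
    x_noisy = true_x + rng.normal(0, 0.7, 500)                # error-prone measurement
    rep1, rep2 = (true_x[:100] + rng.normal(0, 0.7, 100) for _ in range(2))
    print(np.polyfit(x_noisy, y, 1)[0], corrected_slope(x_noisy, y, rep1, rep2))
    # The observed slope is attenuated toward zero; the corrected slope is close to 2.
    ```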

  6. Reliability and criterion validity of two applications of the iPhone™ to measure cervical range of motion in healthy participants

    PubMed Central

    2013-01-01

    Summary of background data Recent smartphones, such as the iPhone, are often equipped with an accelerometer and magnetometer, which, through software applications, can perform various inclinometric functions. Although these applications are intended for recreational use, they have the potential to measure and quantify range of motion. The purpose of this study was to estimate the intra- and inter-rater reliability as well as the criterion validity of the clinometer and compass applications of the iPhone in the assessment of cervical range of motion in healthy participants. Methods The sample consisted of 28 healthy participants. Two examiners measured cervical range of motion of each participant twice using the iPhone (for the estimation of intra- and inter-rater reliability) and once with the CROM (for the estimation of criterion validity). Estimates of reliability and validity were then established using the intraclass correlation coefficient (ICC). Results We observed moderate intra-rater reliability for each movement (ICC = 0.65-0.85) but poor inter-rater reliability (ICC < 0.60). For the criterion validity, the ICCs were moderate (>0.50) to good (>0.65) for flexion, extension, lateral flexions, and right rotation, but poor (<0.50) for left rotation. Conclusion We found good intra-rater reliability and lower inter-rater reliability. When compared to the gold standard, these applications showed moderate to good validity. However, before using the iPhone as an outcome measure in clinical settings, studies should be done on patients presenting with cervical problems. PMID:23829201

  7. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Presented are the numerous factors that can potentially degrade system reliability and the ways in which these factors, peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  8. Reliable data storage system design and implementation for acoustic logging while drilling

    NASA Astrophysics Data System (ADS)

    Hao, Xiaolong; Ju, Xiaodong; Wu, Xiling; Lu, Junqiang; Men, Baiyong; Yao, Yongchao; Liu, Dong

    2016-12-01

    Owing to the limitations of real-time transmission, reliable downhole data storage and fast ground reading have become key technologies in developing tools for acoustic logging while drilling (LWD). In order to improve the reliability of the downhole storage system under conditions of high temperature, intense vibration, and periodic power supply, improvements were made in both hardware and software. In hardware, we integrated the storage system and the data acquisition control module onto one circuit board, adopting a digital signal processor combined with a field programmable gate array as the controller, to reduce the complexity of the storage process. In software, we developed a systematic management strategy for reliable storage. Multiple-backup independent storage was employed to increase data redundancy. A traditional error checking and correction (ECC) algorithm was improved, and we embedded the calculated ECC code into all management data and waveform data. A real-time storage algorithm for arbitrary-length data was designed to actively preserve the storage scene and ensure the independence of the stored data. The recovery procedure for management data was optimized to realize reliable self-recovery. A new bad-block management scheme of static block replacement and dynamic page marking was proposed to make the data acquisition and storage periods more balanced. In addition, we developed a portable ground data reading module based on a new reliable high-speed bus-to-Ethernet interface to achieve fast reading of the logging data. Experiments have shown that this system can work stably below 155 °C with a periodic power supply. The effective ground data reading rate reaches 1.375 Mbps with a 99.7% one-time success rate at room temperature. This work has significant practical value for improving the reliability and field efficiency of acoustic LWD tools.
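
    The multiple-backup idea can be sketched with a toy checksum-verified redundant write and recovery; this illustrates the general pattern only (using MD5 as the checksum is an assumption), not the tool's ECC-based implementation.

    ```python
    import hashlib

    def store(pages, data):
        """Write the same record to several independent flash pages, each with a
        checksum, mimicking the multiple-backup storage described above."""
        digest = hashlib.md5(data).digest()
        for page in pages:
            page["payload"], page["checksum"] = bytes(data), digest

    def recover(pages):
        """Return the first copy whose checksum still verifies; None if all copies
        are corrupt (where a real tool would fall back to ECC correction)."""
        for page in pages:
            if hashlib.md5(page["payload"]).digest() == page["checksum"]:
                return page["payload"]
        return None

    pages = [{}, {}, {}]                       # three independent backup locations
    store(pages, b"waveform frame 0001")
    pages[0]["payload"] = b"corrupted!!"       # simulate a damaged block
    print(recover(pages))                      # still recovers from a surviving copy
    ```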

  9. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
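
    A compact sketch of an exterior penalty method, the classical technique the abstract says was modernized here, is shown below; the test function, penalty schedule, and use of SciPy's default BFGS solver for the unconstrained subproblems are illustrative assumptions, not BIGDOT's implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def exterior_penalty_minimize(f, g, x0, r0=1.0, growth=10.0, outer_iters=6):
        """Minimal exterior penalty method: minimize f(x) subject to g_i(x) <= 0 by
        repeatedly minimizing the unconstrained function
            f(x) + r * sum(max(0, g_i(x))**2)
        with an increasing penalty parameter r."""
        x, r = np.asarray(x0, float), r0
        for _ in range(outer_iters):
            phi = lambda z: f(z) + r * np.sum(np.maximum(0.0, g(z)) ** 2)
            x = minimize(phi, x).x             # unconstrained subproblem (BFGS by default)
            r *= growth                        # tighten the penalty each outer loop
        return x

    # Example: minimize (x0-2)^2 + (x1-1)^2 subject to x0 + x1 <= 2
    f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
    g = lambda x: np.array([x[0] + x[1] - 2.0])
    print(exterior_penalty_minimize(f, g, x0=[0.0, 0.0]))   # approaches (1.5, 0.5)
    ```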

  10. MASH Suite: a user-friendly and versatile software interface for high-resolution mass spectrometry data interpretation and visualization.

    PubMed

    Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying

    2014-03-01

    The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.

  11. Distributed controller clustering in software defined networks.

    PubMed

    Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond

    2017-01-01

    Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we proposed a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared with distributed controllers without clustering running on the HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when a controller fails. The paper is a contribution towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.
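
    The record does not reproduce the clustering algorithm itself, so the following is only a hedged sketch of the general idea of distributing switches across a controller cluster and re-mapping them when a controller fails (Python; the class name, controller names and switch identifiers are all hypothetical):

        import zlib

        class ControllerCluster:
            def __init__(self, controllers):
                self.live = list(controllers)          # e.g. ["onos-1", "onos-2", "onos-3"]

            def controller_for(self, switch_id):
                # Deterministic mapping: every cluster member computes the same assignment.
                return self.live[zlib.crc32(switch_id.encode()) % len(self.live)]

            def fail(self, controller):
                self.live.remove(controller)           # surviving controllers absorb the switches

        cluster = ControllerCluster(["onos-1", "onos-2", "onos-3"])
        before = {sw: cluster.controller_for(sw) for sw in ["s1", "s2", "s3", "s4"]}
        cluster.fail("onos-2")                          # simulate a controller outage
        after = {sw: cluster.controller_for(sw) for sw in ["s1", "s2", "s3", "s4"]}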

  12. Development and Validation of a Portable and Inexpensive Tool to Measure the Drop Vertical Jump Using the Microsoft Kinect V2.

    PubMed

    Gray, Aaron D; Willis, Brad W; Skubic, Marjorie; Huo, Zhiyu; Razu, Swithin; Sherman, Seth L; Guess, Trent M; Jahandar, Amirhossein; Gulbrandsen, Trevor R; Miller, Scott; Siesener, Nathan J

    Noncontact anterior cruciate ligament (ACL) injury in adolescent female athletes is an increasing problem. The knee-ankle separation ratio (KASR), calculated at initial contact (IC) and peak flexion (PF) during the drop vertical jump (DVJ), is a measure of dynamic knee valgus. The Microsoft Kinect V2 has shown promise as a reliable and valid marker-less motion capture device. The hypothesis was that the Kinect V2 would demonstrate good to excellent correlation between KASR results at IC and PF during the DVJ, as compared with a "gold standard" Vicon motion analysis system. This was a descriptive laboratory study (Level of Evidence, 2). Thirty-eight healthy volunteer subjects (20 male, 18 female) performed 5 DVJ trials, simultaneously measured by a Vicon MX-T40S system, 2 AMTI force platforms, and a Kinect V2 with customized software; a total of 190 jumps were completed. The KASR was calculated at IC and PF during the DVJ. The intraclass correlation coefficient (ICC) assessed the degree of KASR agreement between the Kinect and Vicon systems. The ICCs of the Kinect V2 and Vicon KASR at IC and PF were 0.84 and 0.95, respectively, showing excellent agreement between the two measures. The Kinect V2 successfully identified the KASR at PF and IC frames in 182 of 190 trials, demonstrating 95.8% reliability. The Kinect V2 demonstrated excellent ICC of the KASR at IC and PF during the DVJ when compared with the Vicon system. A customized Kinect V2 software program demonstrated good reliability in identifying the KASR at IC and PF during the DVJ. Reliable, valid, inexpensive, and efficient screening tools may improve the accessibility of motion analysis assessment of adolescent female athletes.
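
    Assuming the usual definition of the KASR (frontal-plane knee separation divided by ankle separation), the calculation at a single frame can be sketched as follows; the study's own Kinect V2 and Vicon processing code is not reproduced here, and the coordinates in the example are hypothetical:

        import numpy as np

        def kasr(left_knee, right_knee, left_ankle, right_ankle):
            """All arguments are 3D joint positions (metres); separation is taken along the medio-lateral axis."""
            knee_sep = abs(left_knee[0] - right_knee[0])     # frontal-plane (x) distance between knees
            ankle_sep = abs(left_ankle[0] - right_ankle[0])  # frontal-plane (x) distance between ankles
            return knee_sep / ankle_sep                      # values < 1 suggest dynamic knee valgus

        # Example frame at initial contact (hypothetical coordinates):
        print(kasr(np.array([-0.08, 0.45, 2.1]), np.array([0.08, 0.45, 2.1]),
                   np.array([-0.12, 0.10, 2.1]), np.array([0.12, 0.10, 2.1])))  # ~0.667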

  13. Performance of Different Analytical Software Packages in Quantification of DNA Methylation by Pyrosequencing.

    PubMed

    Grasso, Chiara; Trevisan, Morena; Fiano, Valentina; Tarallo, Valentina; De Marco, Laura; Sacerdote, Carlotta; Richiardi, Lorenzo; Merletti, Franco; Gillio-Tos, Anna

    2016-01-01

    Pyrosequencing has emerged as an alternative method of nucleic acid sequencing, well suited for many applications that aim to characterize single nucleotide polymorphisms, mutations, microbial types and CpG methylation in the target DNA. The commercially available pyrosequencing systems can harbor two different types of software, which allow analysis in AQ or CpG mode, respectively, both widely employed for DNA methylation analysis. The aim of the study was to assess the performance of these two pyrosequencing software packages for DNA methylation analysis at CpG sites. Although CpG mode was specifically developed for CpG methylation quantification, many investigations on this topic have been carried out with AQ mode. As proof of equivalent performance of the two packages for this type of analysis is not available, the focus of this paper was to evaluate whether the two modes currently used for CpG methylation assessment by pyrosequencing give overlapping results. We compared the performance of the two packages in quantifying DNA methylation in the promoters of selected genes (GSTP1, MGMT, LINE-1) by testing two case series, comprising DNA from paraffin-embedded prostate cancer tissues (PC study, N = 36) and DNA from blood fractions of healthy people (DD study, N = 28), respectively. We found discrepancies between the two packages in the quality assignment of DNA methylation assays. Compared with the software for analysis in AQ mode, the Pyro Q-CpG software, which enables analysis in CpG mode, applies less permissive criteria: CpG mode warns operators about potentially unsatisfactory performance of the assay and ensures a more accurate quantitative evaluation of DNA methylation at CpG sites. The implementation of CpG mode is strongly advisable in order to improve the reliability of the methylation analysis results achievable by pyrosequencing.

  14. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product that was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultra-reliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  15. Development of a methodology for assessing the safety of embedded software systems

    NASA Technical Reports Server (NTRS)

    Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.

    1993-01-01

    A Dynamic Flowgraph Methodology (DFM), based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety, is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models, which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules, are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees, which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.
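
    As a hedged illustration only (the DFM tooling itself is not shown in the record), a timed fault tree can be read as a logical expression over component states at different time points; the toy Python sketch below, with invented event names, checks whether a hypothetical top event is reachable along a two-step execution trace:

        def sensor_failed(state, t):
            return state[t]["sensor_fault"]

        def stale_command(state, t):
            # Refers to the system state one time step earlier.
            return t >= 1 and state[t - 1]["controller_busy"]

        def unsafe_actuation(state, t):
            # Top event: a failed sensor AND a command computed from stale data one step earlier.
            return sensor_failed(state, t) and stale_command(state, t)

        # Hypothetical two-step trace of the software-driven system:
        trace = {
            0: {"sensor_fault": False, "controller_busy": True},
            1: {"sensor_fault": True,  "controller_busy": False},
        }
        print([t for t in trace if unsafe_actuation(trace, t)])   # -> [1]: unsafe path at t = 1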

  16. HiRel - Reliability/availability integrated workstation tool

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Dugan, Joanne B.

    1992-01-01

    The HiRel software tool is described and demonstrated by application to the mission avionics subsystem of the Advanced System Integration Demonstrations (ASID) system that utilizes the PAVE PILLAR approach. HiRel marks another accomplishment toward the goal of producing a totally integrated computer-aided design (CAD) workstation design capability. Since a reliability engineer generally represents a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The graphical postprocessor module HARPO makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes. The addition of several powerful HARP modeling engines provides the user with a reliability/availability modeling capability for a wide range of system applications all integrated under a common interactive graphical input-output capability.

  17. Design of Chemical Literacy Assessment by Using Model of Educational Reconstruction (MER) on Solubility Topic

    NASA Astrophysics Data System (ADS)

    Yusmaita, E.; Nasra, Edi

    2018-04-01

    This research aims to produce an instrument for measuring chemical literacy in basic chemistry courses on the topic of solubility. The construction of this measuring instrument is adapted to the characteristics of PISA (Programme for International Student Assessment) items and the syllabus of Basic Chemistry in the KKNI (Indonesian National Qualification Framework). PISA is a cross-country study conducted periodically to monitor the learning achievement of students in each participating country. So far, studies conducted by PISA include reading literacy, mathematical literacy and scientific literacy. Referring to the scientific competences of the PISA study on scientific literacy, an assessment was designed to measure the chemical literacy of chemistry department students at UNP. The research model used is the MER (Model of Educational Reconstruction). The validity and reliability of the discourse-based questions were measured using the ANATES software. Based on these values, valid and reliable chemical literacy questions were obtained. Seven limited-response question items on the topic of solubility fell into the valid category, the test reliability is 0.86, and the items show good difficulty and discrimination indices.
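
    The record reports a test reliability of 0.86 from ANATES but does not state the formula used; assuming an internal-consistency coefficient such as Cronbach's alpha (ANATES may instead report a split-half estimate), the computation can be sketched as follows with hypothetical item scores:

        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_students, n_items) matrix of item scores."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
            total_var = scores.sum(axis=1).var(ddof=1)       # variance of total test scores
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical scores of 5 students on 7 limited-response items (0-2 points each):
        scores = [[2, 1, 2, 2, 1, 2, 2],
                  [1, 1, 1, 0, 1, 1, 0],
                  [2, 2, 2, 1, 2, 2, 2],
                  [0, 0, 1, 0, 0, 1, 0],
                  [1, 2, 2, 1, 1, 2, 1]]
        print(round(cronbach_alpha(scores), 2))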

  18. Image processing in biodosimetry: A proposal of a generic free software platform.

    PubMed

    Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir

    2015-08-01

    The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, the microscopic analysis of human metaphase chromosomes, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. The method is time-consuming, and its application in biological dosimetry would be almost impossible in the case of a large-scale radiation incident. In this project, a generic software platform for automatic chromosome image processing was built by enhancing a framework originally developed for the European Union Framework V project Simbio for applications in the area of source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic segmentation strategies for chromosomes in microscopic images.
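
    As a hedged sketch of one segmentation strategy such a platform could compare (the actual Simbio-derived pipeline is not described in the record), Otsu thresholding followed by connected-component labeling separates candidate chromosomes from the background; scikit-image is assumed here purely for illustration:

        from skimage import io, filters, measure

        def segment_chromosomes(image_path, min_area=50):
            img = io.imread(image_path, as_gray=True)
            mask = img < filters.threshold_otsu(img)         # chromosomes are darker than background
            labels = measure.label(mask)                     # connected components = candidate chromosomes
            regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
            return labels, regions

        # Hypothetical usage on a metaphase image:
        # labels, regions = segment_chromosomes("metaphase.tif")
        # print(len(regions), "candidate chromosomes found")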

  19. Computer science: Key to a space program renaissance. The 1981 NASA/ASEE summer study on the use of computer science and technology in NASA. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)

    1983-01-01

    Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.

  20. Developing an Approach for Analyzing and Verifying System Communication

    NASA Technical Reports Server (NTRS)

    Stratton, William C.; Lindvall, Mikael; Ackermann, Chris; Sibol, Deane E.; Godfrey, Sally

    2009-01-01

    This slide presentation reviews a project to develop an approach for analyzing and verifying inter-system communication. The motivation for the study was that software systems in the aerospace domain are inherently complex and operate under tight resource constraints, so systems of systems must communicate with each other to fulfill their tasks. These systems of systems require reliable communication. The technical approach was to develop a system, DynSAVE, that detects communication problems among the systems. The project enhanced the proven Software Architecture Visualization and Evaluation (SAVE) tool to create Dynamic SAVE (DynSAVE). The approach monitors and records low-level network traffic, converts it into meaningful messages, and displays those messages in a way that allows issues to be detected.
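
    The presentation does not specify the monitored protocols, so the following is only a hypothetical sketch of the "low-level traffic into meaningful messages" step, assuming a simple length-prefixed framing (Python; the message layout is invented for illustration):

        import struct

        def decode_messages(raw: bytes):
            """Yield (msg_type, payload) tuples from a captured byte stream."""
            offset = 0
            while offset + 6 <= len(raw):
                msg_type, length = struct.unpack_from(">HI", raw, offset)  # 2-byte type, 4-byte length
                offset += 6
                yield msg_type, raw[offset:offset + length]
                offset += length

        # Example: two hypothetical frames captured off the wire.
        stream = struct.pack(">HI", 1, 3) + b"ACK" + struct.pack(">HI", 2, 5) + b"HELLO"
        print(list(decode_messages(stream)))   # [(1, b'ACK'), (2, b'HELLO')]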
