1992-04-01
contractor’s existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the... either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a... [flowchart residue: "define/identify software life cycle"] ...to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more...
Methodology for Software Reliability Prediction. Volume 1.
1987-11-01
[Figure residue: application categories including manned spacecraft, batch systems, airborne avionics, unmanned event control, real-time closed-loop operations, and unmanned spacecraft.] ...software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the... specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were...
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software if software reliability measurements are to become more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is grounded in hardware reliability, a very well established science with many aspects that can be applied to software. This research effort has investigated the mathematical aspects of hardware reliability and has identified those applicable to software. Currently the effort is applying and testing these approaches to software reliability measurement. These parametric models require a great deal of project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating the reliability of the final system is extremely difficult when some subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.
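The distribution-free direction this abstract raises can be illustrated with a Kaplan-Meier-style survival estimate. A minimal sketch, with invented failure times and censoring flags (not data from the study):

```python
# Distribution-free (Kaplan-Meier style) reliability estimate from
# observed failure times -- a sketch of the non-parametric idea the
# abstract contrasts with parametric models. All data are invented.

def km_reliability(failure_times, censored):
    """Return [(t, R(t))] survival steps; censored items never drop R."""
    events = sorted(zip(failure_times, censored))
    at_risk = len(failure_times)
    r = 1.0
    curve = []
    for t, c in events:
        if not c:                          # an observed failure
            r *= (at_risk - 1) / at_risk
            curve.append((t, r))
        at_risk -= 1                       # failure or censoring shrinks risk set
    return curve

# failures at 5, 8, 20, 30 hours; the unit observed to 12 h was censored
curve = km_reliability([5, 8, 12, 20, 30], [False, False, True, False, False])
```

The estimate steps down only at observed failures, so no distributional assumption is needed.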
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
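As an illustration of the kind of reliability-growth model tools like CASRE and SMERFS fit (this is not CASRE's actual code), a coarse least-squares fit of the Goel-Okumoto mean-value function m(t) = a(1 - e^(-bt)), with invented failure-count data:

```python
import math

# Illustrative only: a coarse least-squares fit of the Goel-Okumoto
# NHPP mean-value function m(t) = a * (1 - exp(-b*t)) -- one of the
# model families tools such as CASRE/SMERFS apply. Data are invented.

times  = [10, 20, 30, 40, 50, 60]          # cumulative test hours
counts = [12, 20, 26, 30, 32, 34]          # cumulative failures observed

def sse(a, b):
    """Sum of squared errors of m(t) against the observed counts."""
    return sum((c - a * (1 - math.exp(-b * t))) ** 2
               for t, c in zip(times, counts))

grid = [(a, b) for a in range(20, 61)
               for b in [i / 1000 for i in range(1, 101)]]
a_hat, b_hat = min(grid, key=lambda p: sse(*p))
expected_total = a_hat                     # a estimates total eventual faults
```

A real tool would use maximum likelihood and confidence bounds rather than a grid, but the fitted `a_hat` already gives the headline number: the expected total fault content.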
Reliability and validity of the AutoCAD software method in lumbar lordosis measurement
Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe
2011-01-01
Objective: The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods: Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via the AutoCAD software and flexible ruler methods. The study was accomplished in 2 parts: intratester and intertester evaluations of reliability, as well as the validity of the flexible ruler and software methods. Results: Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions: AutoCAD was shown to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681
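The intraclass correlation coefficients reported in these measurement studies can be computed from ANOVA mean squares. A sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), with invented two-rater ratings:

```python
# Sketch of the intraclass correlation coefficient ICC(2,1), computed
# from first principles. The two-rater "angle" ratings are invented.

def icc_2_1(data):                     # data: n subjects x k raters
    n, k = len(data), len(data[0])
    grand = sum(map(sum, data)) / (n * k)
    row_means = [sum(r) / k for r in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot  = sum((x - grand) ** 2 for r in data for x in r)
    ss_err  = ss_tot - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = [[42, 44], [51, 50], [38, 39], [45, 47], [55, 54], [40, 41]]
icc = icc_2_1(ratings)
```

Because the two raters agree to within a degree or two while the subjects span a wide range, the ICC comes out high, which is exactly the pattern reported in these papers.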
The determination of measures of software reliability
NASA Technical Reports Server (NTRS)
Maxwell, F. D.; Corn, B. C.
1978-01-01
Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.
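The failure-ratio trend lines described can be sketched as the fraction of failed runs per period with an ordinary least-squares slope. The weekly run and failure counts below are invented for illustration:

```python
# Sketch of the "failure ratio" trend line: fraction of failed runs per
# week, with a least-squares slope to visualize progress. Data invented.

weeks    = [1, 2, 3, 4, 5, 6]
runs     = [40, 45, 50, 48, 52, 55]        # runs submitted per week
failures = [18, 16, 14, 10, 8, 6]          # failed runs per week

ratios = [f / r for f, r in zip(failures, runs)]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(ratios) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, ratios))
         / sum((x - mean_x) ** 2 for x in weeks))
intercept = mean_y - slope * mean_x        # trend line: y = intercept + slope*x
```

A steadily negative slope is the "good visualization of progress" the abstract refers to.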
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1992-01-01
This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality': the probability of failure-free operation of a computer program for a specified time and environment.
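Under a constant-failure-rate assumption, the quoted definition has a closed form, R(t) = exp(-t/MTBF). A sketch with a hypothetical 500-hour MTBF (the numbers are not from the paper):

```python
import math

# The quoted definition of reliability made concrete under a constant
# failure-rate assumption: R(t) = exp(-t / MTBF). The 500-hour MTBF
# and 50-hour mission are hypothetical.

def reliability(t_hours, mtbf_hours):
    """Probability of failure-free operation for t_hours."""
    return math.exp(-t_hours / mtbf_hours)

r_mission = reliability(50, 500)   # 50-hour mission, MTBF 500 h
```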
Modeling reliability measurement of interface on information system: Towards the forensic of rules
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan
2018-02-01
Today almost all machines depend on software. A software and hardware system also depends on rules: the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of the rules can be measured accordingly. Therefore, this paper initiates an enumeration model for measuring the reliability of interfaces, based on the case of information systems governed by the usage rules of the relevant agencies. The enumeration model is derived from software reliability calculation.
Chen, Hui; van Eijnatten, Maureen; Wolff, Jan; de Lange, Jan; van der Stelt, Paul F; Lobbezoo, Frank; Aarab, Ghizlane
2017-08-01
The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. To assess the reliability of the software packages, 15 NewTom 5G® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using Amira® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys® (3diemme, Cantu, Italy) and OnDemand3D® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements were determined using intraclass correlation coefficients and Bland & Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira®, 3Diagnosys®, and OnDemand3D®, and compared these measurements with the gold standard. The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages were excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by -8.8% to -12.3%, the minimum cross-sectional area by -6.2% to -14.6%, and the length by -1.6% to -2.9%. All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate results in all software packages.
All software packages underestimated the upper airway dimensions of the anthropomorphic phantom.
NASA Technical Reports Server (NTRS)
Basili, V. R.
1981-01-01
Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics is discussed in terms of criteria achievements, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.
Reliability measurement during software development [for a multisensor tracking system]
NASA Technical Reports Server (NTRS)
Hecht, H.; Sturm, W. A.; Trattner, S.
1977-01-01
During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.
Rajyalakshmi, R.; Prakash, Winston D.; Ali, Mohammad Javed; Naik, Milind N.
2017-01-01
Purpose: To assess the reliability and repeatability of periorbital biometric measurements using ImageJ software and to assess whether the horizontal visible iris diameter (HVID) serves as a reliable scale for facial measurements. Methods: This study was a prospective, single-blind, comparative study. Two clinicians performed 12 periorbital measurements on 100 standardised face photographs. Each individual’s HVID was determined by Orbscan IIz and used as a scale for measurements using ImageJ software. All measurements were repeated using the ‘average’ HVID of the study population as a measurement scale. Intraclass correlation coefficient (ICC) and Pearson product-moment coefficient were used as statistical tests to analyse the data. Results: The ICC ranges for intra- and interobserver variability were 0.79–0.99 and 0.86–0.99, respectively. Test-retest reliability ranged from 0.66 to 1.0 and from 0.77 to 0.98, respectively. When the average HVID of the study population was used as the scale, the ICC ranged from 0.83 to 0.99, the test-retest reliability ranged from 0.83 to 0.96, and the measurements correlated well with recordings made with individual Orbscan HVID measurements. Conclusion: Periorbital biometric measurements using ImageJ software are reproducible and repeatable. The average HVID of the population as measured by Orbscan is a reliable scale for facial measurements. PMID:29403183
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
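The code-structure side of the complexity analysis described can be approximated with a McCabe-style cyclomatic count. A sketch using Python's `ast` module, not the analysis team's actual tooling:

```python
import ast

# A minimal McCabe-style cyclomatic-complexity estimate for Python
# source: 1 + the number of decision points. A sketch of the kind of
# complexity analysis described, not the JSC team's tools.

DECISIONS = (ast.If, ast.For, ast.While, ast.Try,
             ast.BoolOp, ast.IfExp, ast.ExceptHandler)

def cyclomatic(source):
    """Estimate cyclomatic complexity of a source snippet."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

sample = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
score = cyclomatic(sample)   # two if-branches -> complexity 3
```

Modules whose score exceeds a chosen threshold would be flagged as risk areas, mirroring the handbook's "identify critical software components" step.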
Eijgenraam, Susanne M; Boselie, Toon F M; Sieben, Judith M; Bastiaenen, Caroline H G; Willems, Paul C; Arts, Jacobus J; Lataster, Arno
2017-02-01
The amount of vertebral rotation in the axial plane is of key importance in the prognosis and treatment of adolescent idiopathic scoliosis (AIS). Current methods to determine vertebral rotation are either designed for analogue plain radiographs and not useful for digital images, or lack measurement precision and are therefore less suitable for the follow-up of rotation in AIS patients. This study aimed to develop a digital X-ray software tool with high measurement precision to determine vertebral rotation in AIS, and to assess its (concurrent) validity and reliability. A combination of basic science and reliability methodology, applied in both laboratory and clinical settings, was used. Software was developed using the algorithm of the Perdriolle torsion meter for analogue AP plain radiographs of the spine. The software was then assessed for (1) concurrent validity and (2) intra- and interobserver reliability. Plain radiographs of both human cadaver vertebrae and outpatient AIS patients were used. Concurrent validity was measured by two independent observers, both experienced in the assessment of plain radiographs. Reliability measurements were performed by three independent spine surgeons. Pearson correlation of the software compared with the analogue Perdriolle torsion meter was 0.98 for mid-thoracic vertebrae, 0.97 for low-thoracic vertebrae, and 0.97 for lumbar vertebrae. Measurement exactness of the software was within 5° in 62% of cases and within 10° in 97% of cases. The intraclass correlation coefficient (ICC) for inter-observer reliability was 0.92 (0.91-0.95); the ICC for intra-observer reliability was 0.96 (0.94-0.97). We developed a digital X-ray software tool to determine vertebral rotation in AIS with substantial concurrent validity and reliability, which may be useful for the follow-up of vertebral rotation in AIS patients.
Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam
2016-05-01
People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
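The Markov modeling mentioned for hardware RMA can be sketched with a tiny chain: a dual-redundant unit with per-step failure probability p per channel and no repair. The parameters are hypothetical, not AIPS values:

```python
# Sketch of Markov reliability modeling: a dual-redundant unit, failure
# probability p per channel per step, no repair. System reliability
# after n steps is the mass not yet absorbed in the failed state.

def step(dist, p):
    d2, d1, d0 = dist                      # P(2 up), P(1 up), P(failed)
    return [d2 * (1 - p) ** 2,             # both channels survive
            d2 * 2 * p * (1 - p) + d1 * (1 - p),  # exactly one up
            d2 * p ** 2 + d1 * p + d0]     # absorbing failed state

def system_reliability(p, n):
    dist = [1.0, 0.0, 0.0]                 # start with both channels up
    for _ in range(n):
        dist = step(dist, p)
    return dist[0] + dist[1]

r = system_reliability(0.01, 100)
```

With no repair the chain reduces to two independent channels, so the result matches 1 - (1 - q)^2 with q the single-channel survival probability; adding repair transitions is where the Markov formulation earns its keep.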
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that products meet NASA requirements for reliability measurement. The new software reliability models of the last decade greatly need to be brought into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability models to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models, which could then be incorporated in a tool such as SMERFS'3. Such a tool, with better models, would add great value in assessing GSFC projects.
On Quality and Measures in Software Engineering
ERIC Educational Resources Information Center
Bucur, Ion I.
2006-01-01
Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…
Aubin, Carl-Eric; Bellefleur, Christian; Joncas, Julie; de Lanauze, Dominic; Kadoury, Samuel; Blanke, Kathy; Parent, Stefan; Labelle, Hubert
2011-05-20
Radiographic software measurement analysis in adult scoliosis. To assess the accuracy as well as the intra- and interobserver reliability of measuring different indices on preoperative adult scoliosis radiographs using a novel measurement software that includes a calibration procedure and semiautomatic features to facilitate the measurement process. Scoliosis requires a careful radiographic evaluation to assess the deformity. Manual and computer radiographic process measures have been studied extensively to determine the reliability and reproducibility in adolescent idiopathic scoliosis. Most studies rely on comparing given measurements, which are repeated by the same user or by an expert user. A given measure with a small intra- or interobserver error might be deemed as good repeatability, but all measurements might not be truly accurate because the ground-truth value is often unknown. Thorough accuracy assessment of radiographic measures is necessary to assess scoliotic deformities, compare these measures at different stages or to permit valid multicenter studies. Thirty-four sets of adult scoliosis digital radiographs were measured two times by three independent observers using a novel radiographic measurement software that includes semiautomatic features to facilitate the measurement process. Twenty different measures taken from the Spinal Deformity Study Group radiographic measurement manual were performed on the coronal and sagittal images. Intra- and intermeasurer reliability for each measure was assessed. The accuracy of the measurement software was also assessed using a physical spine model in six different scoliotic configurations as a true reference. The majority of the measures demonstrated good to excellent intra- and intermeasurer reliability, except for sacral obliquity. 
The standard deviation of all the measures was very small: ≤4.2° for the Cobb angles, ≤4.2° for the kyphosis, ≤5.7° for the lordosis, ≤3.9° for the pelvic angles, and ≤5.3° for the sacral angles. The variability in the linear measurements (distances) was <4 mm. The variance of the measures was 1.7 times greater for the angular measures and 2.6 times greater for the linear measures when comparing inter- with intrameasurer reliability. Image quality positively influenced intermeasurer reliability, especially for the proximal thoracic Cobb angle, T10-L2 lordosis, sacral slope, and L5 seating. The accuracy study revealed that, on average, the difference in the angular measures was <2° for the Cobb angles and <4° for the other angles, except T2-T12 kyphosis (5.3°). The linear measures all differed by <3.5 mm on average. The majority of the measures analyzed in this study demonstrated good to excellent reliability and accuracy. The novel semiautomatic measurement software can be recommended for clinical, research, or multicenter study purposes.
Rahnama, Leila; Rezasoltani, Asghar; Khalkhali-Zavieh, Minoo; Rahnama, Behnam; Noori-Kochi, Farhang
2015-01-01
OBJECTIVES: This study was conducted with the purpose of evaluating the inter-session reliability of new software to measure the diameters of the cervical multifidus muscle (CMM), both at rest and during isometric contractions of the shoulder abductors in subjects with neck pain and in healthy individuals. METHOD: In the present study, the reliability of measuring the diameters of the CMM with the Sonosynch software was evaluated by using 24 participants, including 12 subjects with chronic neck pain and 12 healthy individuals. The anterior-posterior diameter (APD) and the lateral diameter (LD) of the CMM were measured in a resting state and then repeated during isometric contraction of the shoulder abductors. Measurements were taken on separate occasions 3 to 7 days apart in order to determine inter-session reliability. Intraclass correlation coefficient (ICC), standard error of measurement (SEM), and smallest detectable difference (SDD) were used to evaluate the relative and absolute reliability, respectively. RESULTS: The Sonosynch software has shown to be highly reliable in measuring the diameters of the CMM both in healthy subjects and in those with neck pain. The ICCs 95% CI for APD ranged from 0.84 to 0.94 in subjects with neck pain and from 0.86 to 0.94 in healthy subjects. For LD, the ICC 95% CI ranged from 0.64 to 0.95 in subjects with neck pain and from 0.82 to 0.92 in healthy subjects. CONCLUSIONS: Ultrasonographic measurement of the diameters of the CMM using Sonosynch has proved to be reliable especially for APD in healthy subjects as well as subjects with neck pain. PMID:26443975
NASA Technical Reports Server (NTRS)
Tamayo, Tak Chai
1987-01-01
Quality of software is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Management decisions can be greatly strengthened by combining engineering judgment with statistical analysis. Unlike hardware, software exhibits no wearout and makes redundancy costly, so traditional statistical analysis is not suitable for evaluating software reliability. A statistical model was developed to represent the number and types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria for terminating testing based on reliability objectives, and methods for estimating the expected number of fixes required, are also presented.
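The testing-termination idea can be illustrated with a Jelinski-Moranda-style fit (this is not necessarily the paper's model): the failure rate before the (i+1)-th failure is phi*(N - i), and the maximum-likelihood N minus the failures already observed estimates the residual faults a stopping rule would compare against a reliability objective. The inter-failure times are invented:

```python
import math

# Jelinski-Moranda-style sketch: failure rate before the (i+1)-th
# failure is phi * (N - i), N = initial fault count. We profile the
# likelihood over a grid of N. Inter-failure times are invented.

times = [10, 12, 9, 15, 14, 20, 18, 25]    # hours between failures

def profile_loglik(N):
    """Log-likelihood at N, with phi replaced by its MLE given N."""
    n = len(times)
    denom = sum((N - i) * t for i, t in enumerate(times))
    phi = n / denom
    return sum(math.log(phi * (N - i)) - phi * (N - i) * t
               for i, t in enumerate(times))

n_hat = max(range(len(times), len(times) + 50), key=profile_loglik)
remaining = n_hat - len(times)             # estimated residual faults
```

Testing would stop once `remaining` (or the implied failure rate `phi * remaining`) falls below the reliability objective.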
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
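The continue-or-stop decision logic in item (4) is often supported by a trend test on the failure data of items (2) and (3). A sketch of the Laplace factor, where a clearly negative score indicates reliability growth (lengthening inter-failure times) and so supports stopping; the failure timestamps are invented:

```python
import math

# Sketch of the Laplace trend test: u < 0 suggests reliability growth,
# u > 0 suggests deterioration. event_times are failure timestamps
# (hours into test), observation ends at T. Data are invented.

def laplace_factor(event_times, T):
    n = len(event_times)
    return ((sum(event_times) / n - T / 2)
            / (T * math.sqrt(1 / (12 * n))))

u = laplace_factor([5, 12, 25, 45, 80, 130], T=200)
```

Here the failures cluster early in the observation window, so u is strongly negative (beyond the one-sided 5% critical value of about -1.645), consistent with growth.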
Practical Issues in Implementing Software Reliability Measurement
NASA Technical Reports Server (NTRS)
Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.
1999-01-01
Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
Multi-version software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1989-01-01
A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance of software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Work continued on back-to-back testing as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span, and on software reliability estimation methods based on non-random sampling and the relationship between software reliability and code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
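The coverage/fault-elimination relationship the abstract reports is ordinary correlation between per-module coverage and faults removed. A sketch with invented per-module numbers (not the study's data):

```python
# Sketch of the coverage vs. fault-elimination relationship: Pearson
# correlation between per-module test coverage and faults removed.
# All per-module numbers are invented.

coverage = [0.55, 0.62, 0.70, 0.78, 0.85, 0.93]   # e.g. data-flow coverage
faults   = [2, 3, 5, 6, 8, 9]                     # faults eliminated

n = len(coverage)
mx = sum(coverage) / n
my = sum(faults) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(coverage, faults))
sxx = sum((x - mx) ** 2 for x in coverage)
syy = sum((y - my) ** 2 for y in faults)
r = sxy / (sxx * syy) ** 0.5                      # Pearson correlation
```

A strong positive r is the "considerable correlation" pattern: modules driven to higher coverage shed more of their faults.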
Software Fault Tolerance: A Tutorial
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2000-01-01
Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
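The multiversion assumption, that independently built versions fail differently, can be shown with a toy N-version vote. The three variants below are invented stand-ins for independently developed implementations, with a fault seeded in one:

```python
from collections import Counter

# Toy N-version programming vote, in the spirit of the multiversion
# techniques reviewed: three independently written versions compute
# the same function and a majority voter adjudicates. The versions
# (and the seeded fault in version_c) are invented.

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x != 3 else -1   # seeded fault at x == 3

def majority_vote(results):
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority -- versions disagree")
    return value

out = majority_vote([f(3) for f in (version_a, version_b, version_c)])
```

The voter masks the single faulty version; a recovery-block scheme would instead run an acceptance test and fall back to an alternate only on failure.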
Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer
2016-01-01
The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models in the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.
Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM
2009-01-01
Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features, and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA) and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. The intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility.
Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages. PMID:17700582
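The test–retest and inter-rater statistics reported throughout these records rest on the intraclass correlation coefficient (ICC). As a concrete illustration, here is a minimal sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater), implemented directly from its ANOVA definition; the rating values are hypothetical:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, k_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Mean squares from a two-way ANOVA without replication
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters measuring the same 5 distances (mm); hypothetical values
scores = np.array([[10.1, 10.2], [12.0, 11.9], [9.5, 9.6],
                   [14.2, 14.0], [11.3, 11.4]])
print(round(icc_2_1(scores), 3))
```

Because the between-subject variance here dwarfs the rater disagreement, the ICC comes out close to 1, mirroring the "excellent agreement" thresholds (ICC > 0.75 or > 0.9) used in the studies above.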
Khadilkar, Leenesh; MacDermid, Joy C; Sinden, Kathryn E; Jenkyn, Thomas R; Birmingham, Trevor B; Athwal, George S
2014-01-01
Video-based movement analysis software (Dartfish) has potential for clinical applications for understanding shoulder motion if functional measures can be reliably obtained. The primary purpose of this study was to describe the functional range of motion (ROM) of the shoulder used to perform a subset of functional tasks. A second purpose was to assess the reliability of functional ROM measurements obtained by different raters using Dartfish software. Ten healthy participants, mean age 29 ± 5 years, were videotaped while performing five tasks selected from the Disabilities of the Arm, Shoulder and Hand (DASH). Video cameras and markers were used to obtain video images suitable for analysis in Dartfish software. Three repetitions of each task were performed. Shoulder movements from all three repetitions were analyzed using Dartfish software. The tracking tool of the Dartfish software was used to obtain shoulder joint angles and arcs of motion. Test-retest and inter-rater reliability of the measurements were evaluated using intraclass correlation coefficients (ICC). Maximum (coronal plane) abduction (118° ± 16°) and (sagittal plane) flexion (111° ± 15°) was observed during 'washing one's hair;' maximum extension (-68° ± 9°) was identified during 'washing one's own back.' Minimum shoulder ROM was observed during 'opening a tight jar' (33° ± 13° abduction and 13° ± 19° flexion). Test-retest reliability (ICC = 0.45 to 0.94) suggests high inter-individual task variability, and inter-rater reliability (ICC = 0.68 to 1.00) showed moderate to excellent agreement. Key findings include: 1) the functional shoulder ROM identified in this study was comparable to that reported in similar studies; 2) healthy individuals require less than full ROM when performing five common ADL tasks; 3) high participant variability was observed during performance of the five ADL tasks; and 4) Dartfish software provides a clinically relevant tool to analyze shoulder function.
Effect of system workload on operating system reliability - A study on IBM 3081
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Rossetti, D. J.
1985-01-01
This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload failure dependency, based on detailed investigations of the failure data, are discussed.
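The workload-failure dependency reported above can be probed with a simple sample correlation between a workload metric and failure counts over matched intervals. A hedged sketch: the gamma workload distribution, the rate coupling `workload / 100`, and all numbers are illustrative assumptions, not IBM 3081 measurements.

```python
import numpy as np

# Hypothetical hourly samples: a workload metric (e.g. paging rate) and
# the failure count observed in the same hour. Illustrative values only.
rng = np.random.default_rng(0)
workload = rng.gamma(shape=2.0, scale=50.0, size=200)

# Failures are generated so their rate rises with workload (the kind of
# dependency the paper reports); we then check that the sample
# correlation recovers it.
failures = rng.poisson(lam=workload / 100.0)

r = np.corrcoef(workload, failures)[0, 1]
print(f"workload/failure correlation: r = {r:.2f}")
```

A near-zero `r` for a metric like raw CPU utilization, alongside a clearly positive `r` for other workload measures, is the shape of the result the study describes.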
Wu, Weifei; Liang, Jie; Du, Yuanli; Tan, Xiaoyi; Xiang, Xuanping; Wang, Wanhong; Ru, Neng; Le, Jinbo
2014-02-06
Although many studies on reliability and reproducibility of measurement have been performed on coronal Cobb angle, few results about reliability and reproducibility are reported on sagittal alignment measurement including the pelvis. We usually use SurgimapSpine software to measure the Cobb angle in our studies; however, there are no reports till date on its reliability and reproducible measurements. Sixty-eight standard standing posteroanterior whole-spine radiographs were reviewed. Three examiners carried out the measurements independently under the settings of manual measurement on X-ray radiographies and SurgimapSpine software on the computer. Parameters measured included pelvic incidence, sacral slope, pelvic tilt, Lumbar lordosis (LL), thoracic kyphosis, and coronal Cobb angle. SPSS 16.0 software was used for statistical analyses. The means, standard deviations, intraclass and interclass correlation coefficient (ICC), and 95% confidence intervals (CI) were calculated. There was no notable difference between the two tools (P = 0.21) for the coronal Cobb angle. In the sagittal plane parameters, the ICC of intraobserver reliability for the manual measures varied from 0.65 (T2-T5 angle) to 0.95 (LL angle). Further, for SurgimapSpine tool, the ICC ranged from 0.75 to 0.98. No significant difference in intraobserver reliability was found between the two measurements (P > 0.05). As for the interobserver reliability, measurements with SurgimapSpine tool had better ICC (0.71 to 0.98 vs 0.59 to 0.96) and Pearson's coefficient (0.76 to 0.99 vs 0.60 to 0.97). The reliability of SurgimapSpine measures was significantly higher in all parameters except for the coronal Cobb angle where the difference was not significant (P > 0.05). 
Although the differences between the two methods are very small, the results of this study indicate that the SurgimapSpine measurement is an equivalent measuring tool to the traditional manual in coronal Cobb angle, but is advantageous in spino-pelvic measurement in T2-T5, PT, PI, SS, and LL.
Analysis of key technologies for virtual instruments metrology
NASA Astrophysics Data System (ADS)
Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang
2008-12-01
Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to their software-centered architecture, metrological evaluation of VIs covers two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for the metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is the evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing, and statistics, with the support of the powerful computing capability of the PC. Another concern is the evaluation of software qualities such as correctness, reliability, stability, security, and real-time performance of VIs. Technologies from the software engineering, software testing, and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing, and modeling approaches can be used to evaluate the reliability of modules, components, applications, and the whole VI software. The security of a VI can be assessed by methods such as vulnerability scanning and penetration analysis. To enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing, and system benchmarking, a framework for the automatic tool is proposed in this paper. An investigation of existing automatic tools that perform calculation of measurement uncertainty, software testing, and security assessment demonstrates the feasibility of the proposed automatic framework.
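The principal issue named above, evaluating the measurement uncertainty contributed by a software-defined instrument, is commonly done by Monte Carlo propagation of input distributions through the instrument's processing algorithm. A minimal sketch of that idea; the RMS "algorithm", the ±1% gain error, and the ADC noise level are all illustrative assumptions, not values from the paper:

```python
import numpy as np

# Monte Carlo evaluation of measurement uncertainty: draw the input
# quantities from their assumed distributions, run the VI's processing
# algorithm on each draw, and read the output spread as the standard
# uncertainty. The "algorithm" here is a toy RMS computation.
rng = np.random.default_rng(42)
n_trials = 10_000

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
true_rms = 1.0 / np.sqrt(2.0)  # RMS of a unit-amplitude sine

results = np.empty(n_trials)
for i in range(n_trials):
    gain = 1.0 + rng.uniform(-0.01, 0.01)          # assumed ±1 % gain error
    noise = rng.normal(0.0, 0.005, size=t.size)    # assumed ADC noise
    signal = gain * np.sin(2 * np.pi * 5 * t) + noise
    results[i] = np.sqrt(np.mean(signal ** 2))     # the VI's RMS algorithm

print(f"mean RMS = {results.mean():.4f}, "
      f"standard uncertainty = {results.std():.5f}")
```

The same loop works unchanged for any VI algorithm that can be called as a function, which is what makes simulation attractive for the automatic tool the paper proposes.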
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
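The core question above, how much coincident (correlated) failures erode the benefit of majority voting, can be illustrated with a small simulation. The per-version failure probability and the "difficult input" mixture below are illustrative assumptions, not the paper's measured values:

```python
import random

# Failure probability of a 3-version majority voter when version
# failures are independent vs. positively correlated.
random.seed(1)

P_FAIL = 0.01      # assumed per-version failure probability on a demand
TRIALS = 200_000

def vote_fails(correlated: bool) -> bool:
    if correlated:
        # "Difficult input" model: a small fraction of inputs is hard
        # for every version, so failures coincide more often than
        # independence predicts.
        hard = random.random() < 0.005       # 0.5% of inputs are hard
        p = 0.9 if hard else 0.0055
        fails = [random.random() < p for _ in range(3)]
    else:
        fails = [random.random() < P_FAIL for _ in range(3)]
    return sum(fails) >= 2                   # majority output is wrong

rates = {}
for label, corr in (("independent", False), ("correlated", True)):
    rates[label] = sum(vote_fails(corr) for _ in range(TRIALS)) / TRIALS
    print(f"{label:11s}: system failure rate = {rates[label]:.5f}")
```

Under independence the system rate is roughly 3p², about an order of magnitude better than a single version; the correlated mixture erases most of that gain, which is the dependency effect the experiment estimates.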
Statistical modelling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1991-01-01
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
Visconti, Luca; Martin, Conchita
2013-01-01
The aim of this study was to evaluate both intra- and interoperator reliability of a radiological three-dimensional classification system (KPG index) for the assessment of degree of difficulty for orthodontic treatment of maxillary canine impactions. Cone beam computed tomography (CBCT) scans of fifty impacted canines, obtained using three different scanners (NewTom, Kodak, and Planmeca), were classified using the KPG index by three independent orthodontists. Measurements were repeated one month later. Based on these two sessions, several recommendations on KPG index scoring were elaborated. After a joint calibration session, these recommendations were explained to nine orthodontists and the two measurement sessions were repeated. There was a moderate intrarater agreement in the precalibration measurement sessions. After the calibration session, both intra- and interrater agreement were almost perfect. Indexes assessed with Kodak Dental Imaging 3D module software showed a better reliability in z-axis values, whereas indexes assessed with Planmeca Romexis software showed a better reliability in x- and y-axis values. No differences were found between the CBCT scanners used. Taken together, these findings indicate that the application of the instructions elaborated during this study improved KPG index reliability, which was nevertheless variously influenced by the use of different software for image evaluation. PMID:24235889
NASA Technical Reports Server (NTRS)
Wilson, Larry
1991-01-01
There are many software reliability models which try to predict the future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in laboratory settings and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued through simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.
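The replication experiment described above can be mimicked for the Basic model, commonly identified with the Goel-Okumoto NHPP with mean value function m(t) = a(1 − e^(−bt)). The sketch below uses illustrative parameter values, a fixed fault count instead of a Poisson draw, and a crude moment-style estimator rather than the model's full MLE; it shows how much a single-run prediction of remaining faults varies across replicated debugging runs:

```python
import math
import random

random.seed(7)
A_TRUE, B_TRUE = 100.0, 0.05    # assumed total faults, detection rate
T_OBS = 30.0                    # debugging observation window

def simulate_run():
    """One debugging run: each fault is detected at an Exp(b) time;
    we observe the detections falling inside [0, T_OBS]."""
    times = [random.expovariate(B_TRUE) for _ in range(int(A_TRUE))]
    return sorted(t for t in times if t <= T_OBS)

def predict_remaining(times):
    """Moment-style estimate of faults not yet found: fit b so the
    truncated-exponential mean matches the observed mean failure time,
    then invert n = a*(1 - exp(-b*T))."""
    n = len(times)
    tbar = sum(times) / n
    lo, hi = 1e-4, 10.0
    for _ in range(60):         # bisection; mean is decreasing in b
        b = 0.5 * (lo + hi)
        mean_trunc = 1.0 / b - T_OBS / (math.exp(b * T_OBS) - 1.0)
        if mean_trunc > tbar:
            lo = b
        else:
            hi = b
    a_hat = n / (1.0 - math.exp(-b * T_OBS))
    return a_hat - n

estimates = [predict_remaining(simulate_run()) for _ in range(200)]
mean_est = sum(estimates) / len(estimates)
spread = max(estimates) - min(estimates)
print(f"mean predicted remaining faults = {mean_est:.1f}, range = {spread:.1f}")
```

Even with the generating model known exactly, the run-to-run range of the prediction is large relative to its mean, which is the variance phenomenon motivating replicated data.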
Nasendoscopy: an analysis of measurement uncertainties.
Gilleard, Onur; Sommerlad, Brian; Sell, Debbie; Ghanem, Ali; Birch, Malcolm
2013-05-01
Objective: The purpose of this study was to analyze the optical characteristics of two different nasendoscopes used to assess velopharyngeal insufficiency and to quantify the measurement uncertainties that will occur in a typical set of clinical data. Design: The magnification and barrel distortion associated with nasendoscopy was estimated by using computer software to analyze the apparent dimensions of a spatially calibrated test object at varying object-lens distances. In addition, a method of semiquantitative analysis of velopharyngeal closure using nasendoscopy and computer software is described. To calculate the reliability of this method, 10 nasendoscopy examinations were analyzed two times by three separate operators. The measure of intraoperator and interoperator agreement was evaluated using Pearson's r correlation coefficient. Results: Over an object lens distance of 9 mm, magnification caused the visualized dimensions of the test object to increase by 80%. In addition, dimensions of objects visualized in the far-peripheral field of the nasendoscopic examinations appeared approximately 40% smaller than those visualized in the central field. Using computer software to analyze velopharyngeal closure, the mean correlation coefficient for intrarater reliability was .94 and for interrater reliability was .90. Conclusion: Using a custom-designed apparatus, the effect object-lens distance has on the magnification of nasendoscopic images has been quantified. Barrel distortion has also been quantified and was found to be independent of object-lens distance. Using computer software to analyze clinical images, the intraoperator and interoperator correlation appears to show that ratio-metric measurements are reliable.
Tilleul, Julien; Querques, Giuseppe; Canoui-Poitrine, Florence; Leveziel, Nicolas; Souied, Eric H
2013-01-01
To assess the ability of the Spectralis optical coherence tomography (OCT) segmentation software to identify the inner limiting membrane and Bruch's membrane in exudative age-related macular degeneration (AMD) patients. Thirty-eight eyes of 38 naive exudative AMD patients were retrospectively included. They all had a complete ophthalmologic examination including Spectralis OCT at baseline and at months 1 and 2. Reliability of the segmentation software was assessed by 2 ophthalmologists and was defined as good if both the inner limiting membrane and Bruch's membrane were correctly drawn. A total of 38 patient charts were reviewed (114 scans). The inner limiting membrane was correctly drawn by the segmentation software in 114/114 spectral domain OCT scans (100%). Conversely, Bruch's membrane was correctly drawn in 59/114 scans (51.8%). The software was less reliable in locating Bruch's membrane in cases of pigment epithelium detachment (PED) than without PED (42.5 vs. 73.5%, respectively; p = 0.049), but its reliability was not associated with subretinal fluid (SRF) or cystoid macular edema (CME) (p = 0.55 and p = 0.10, respectively). Segmentation of the inner limiting membrane was consistently reliable, but Bruch's membrane segmentation was poorly reliable using the automatic Spectralis segmentation software. Based on this software, evaluation of retinal thickness may be incorrect, particularly in cases of PED. PED is effectively an important parameter which is not included when measuring retinal thickness. Copyright © 2012 S. Karger AG, Basel.
The Validation of a Software Evaluation Instrument.
ERIC Educational Resources Information Center
Schmitt, Dorren Rafael
This study, conducted at six southern universities, analyzed the validity and reliability of a researcher developed instrument designed to evaluate educational software in secondary mathematics. The instrument called the Instrument for Software Evaluation for Educators uses measurement scales, presents a summary section of the evaluation, and…
Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad
2015-01-01
Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly facilitates this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English would greatly help Farsi-speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using the intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin (ICC 0.570-1.0). This confirms the validity and optimal efficacy of the newly designed software. According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.
Moral-Muñoz, José A; Esteban-Moreno, Bernabé; Arroyo-Morales, Manuel; Cobo, Manuel J; Herrera-Viedma, Enrique
2015-09-01
The objective of this study was to determine the level of agreement between face-to-face hamstring flexibility measurements and free software video analysis in adolescents. Reduced hamstring flexibility is common in adolescents (75% of boys and 35% of girls aged 10). The length of the hamstring muscle has an important role in both the effectiveness and the efficiency of basic human movements, and reduced hamstring flexibility is related to various musculoskeletal conditions. There are various approaches to measuring hamstring flexibility with high reliability; the most commonly used approaches in the scientific literature are the sit-and-reach test, hip joint angle (HJA), and active knee extension. The assessment of hamstring flexibility using video analysis could help with adolescent flexibility follow-up. Fifty-four adolescents from a local school participated in a descriptive study of repeated measures using a crossover design. Active knee extension and HJA were measured with an inclinometer and were simultaneously recorded with a video camera. Each video was downloaded to a computer and subsequently analyzed using Kinovea 0.8.15, a free software application for movement analysis. All outcome measures showed reliability estimates with α > 0.90. The lowest reliability was obtained for HJA (α = 0.91). The preliminary findings support the use of a free software tool for assessing hamstring flexibility, offering health professionals a useful tool for adolescent flexibility follow-up.
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J
2016-03-01
Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethods reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.
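The concordance correlation coefficient used alongside the ICC above is Lin's agreement measure, ρc = 2·cov(x, y) / (σx² + σy² + (μx − μy)²); unlike Pearson's r, it penalizes systematic offsets between raters. A minimal sketch with hypothetical TT-TG values (not the study's data):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two raters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]          # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical TT-TG distances (mm) for the same 6 knees
radiologist  = [14.7, 12.1, 18.3, 9.8, 16.0, 13.5]
orthopaedist = [15.1, 12.4, 18.0, 10.2, 16.5, 13.9]
print(f"CCC = {ccc(radiologist, orthopaedist):.3f}")
```

With the small, consistent offset in these hypothetical values, the CCC stays well above the 0.75 "excellent agreement" threshold the study uses; a larger systematic bias would pull it down even if Pearson's r stayed near 1.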
STAMPS: development and verification of swallowing kinematic analysis software.
Lee, Woo Hyung; Chun, Changmook; Seo, Han Gil; Lee, Seung Hak; Oh, Byung-Mo
2017-10-17
Swallowing impairment is a common complication in various geriatric and neurodegenerative diseases. Swallowing kinematic analysis is essential to quantitatively evaluate the swallowing motion of the oropharyngeal structures. This study aims to develop a novel swallowing kinematic analysis software package, called spatio-temporal analyzer for motion and physiologic study (STAMPS), and verify its validity and reliability. STAMPS was developed in MATLAB, which is one of the most popular platforms for biomedical analysis. This software was constructed to acquire, process, and analyze the data of swallowing motion. The target swallowing structures include bony structures (hyoid bone, mandible, maxilla, and cervical vertebral bodies), cartilages (epiglottis and arytenoid), soft tissues (larynx and upper esophageal sphincter), and the food bolus. Numerous functions are available for the spatiotemporal parameters of the swallowing structures. Testing for validity and reliability was performed in 10 dysphagia patients with diverse etiologies and using an instrumental swallowing model designed to mimic the motion of the hyoid bone and the epiglottis. The intra- and inter-rater reliability tests showed excellent agreement for displacement and moderate to excellent agreement for velocity. The Pearson correlation coefficients between the measured and instrumental reference values were nearly 1.00 (P < 0.001) for displacement and velocity. The Bland-Altman plots showed good agreement between the measurements and the reference values. STAMPS provides precise and reliable kinematic measurements and multiple practical functionalities for spatiotemporal analysis. The software is expected to be useful for researchers who are interested in swallowing motion analysis.
Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.
2010-01-01
The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
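The intraclass correlations reported in studies like this are typically computed from a subjects-by-readers matrix via a two-way ANOVA decomposition. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater) on hypothetical alignment angles; the abstract does not state which ICC form was used, so this is illustrative only:

```python
import numpy as np

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data: n_subjects x k_raters array of measurements."""
    Y = np.asarray(data, float)
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    SSE = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Hypothetical frontal-plane alignment angles (degrees), 3 readers x 5 films
ratings = [[178.2, 178.5, 178.1],
           [172.9, 173.4, 173.0],
           [185.1, 184.8, 185.3],
           [179.7, 180.1, 179.9],
           [176.0, 176.2, 175.8]]
print(round(icc2_1(ratings), 3))
```

Because between-subject variance dominates the tiny reader disagreements in this toy matrix, the ICC comes out near 1, mirroring the 0.839 to 0.998 range reported above.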
Developing of an automation for therapy dosimetry systems by using labview software
NASA Astrophysics Data System (ADS)
Aydin, Selim; Kam, Erol
2018-06-01
Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatment is highly dependent on the radiation dose delivered to patients. It is therefore very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed, computer-controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction factor calculations. By using this software tool, better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software that can control the dosimeter systems frequently used in the radiation therapy field at hospitals, such as Unidos Webline, Unidos E, Dose-1 and PC Electrometers.
Reliability of a Single Light Source Purkinjemeter in Pseudophakic Eyes.
Janunts, Edgar; Chashchina, Ekaterina; Seitz, Berthold; Schaeffel, Frank; Langenbucher, Achim
2015-08-01
To study the reliability of Purkinje image analysis for assessment of intraocular lens tilt and decentration in pseudophakic eyes. The study comprised 64 eyes of 39 patients. All eyes underwent phacoemulsification with intraocular lens implanted in the capsular bag. Lens decentration and tilt were measured multiple times by an infrared Purkinjemeter. A total of 396 measurements were performed 1 week and 1 month postoperatively. Lens tilt (Tx, Ty) and decentration (Dx, Dy) in horizontal and vertical directions, respectively, were calculated by dedicated software based on regression analysis for each measurement using only four images, and afterward, the data were averaged (mean values, MV) for repeated sequence of measurements. New software was designed by us for recalculating lens misalignment parameters offline, using a complete set of Purkinje images obtained through the repeated measurements (9 to 15 Purkinje images) (recalculated values, MV'). MV and MV' were compared using SPSS statistical software package. MV and MV' were found to be highly correlated for the Tx and Ty parameters (R2 > 0.9; p < 0.001), moderately correlated for the Dx parameter (R2 > 0.7; p < 0.001), and weakly correlated for the Dy parameter (R2 = 0.23; p < 0.05). Reliability was high (Cronbach α > 0.9) for all measured parameters. Standard deviation values were 0.86 ± 0.69 degrees, 0.72 ± 0.65 degrees, 0.04 ± 0.05 mm, and 0.23 ± 0.34 mm for Tx, Ty, Dx, and Dy, respectively. The Purkinjemeter demonstrated high reliability and reproducibility for lens misalignment parameters. To further improve reliability, we recommend capturing at least six Purkinje images instead of three.
Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.
Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam
2015-05-01
Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard, well-recognized measurement methods that are quick and easy to apply. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was assessed by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.
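Two of the indices named above are simple width ratios, so once the widths are read off the digital image (in Photoshop or any tool), the computation is trivial. A sketch under the conventional definitions (Chippaux-Smirak: minimum midfoot width over maximum forefoot width, as a percent; Staheli: minimum midfoot width over maximum heel width), with hypothetical widths:

```python
def chippaux_smirak(midfoot_min_mm, forefoot_max_mm):
    """CSI: minimum midfoot width / maximum forefoot width, as a percent."""
    return 100.0 * midfoot_min_mm / forefoot_max_mm

def staheli(midfoot_min_mm, heel_max_mm):
    """Staheli arch index: minimum midfoot width / maximum heel width."""
    return midfoot_min_mm / heel_max_mm

# Hypothetical widths (mm) measured from one footprint image
print(round(chippaux_smirak(28.0, 92.0), 1))   # ~30.4
print(round(staheli(28.0, 55.0), 2))           # ~0.51
```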
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smidts, Carol; Huang, Funqun; Li, Boyuan
With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software makes trust in these systems a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges since software systems’ characteristics differ from those of hardware systems. The lack of systematic science-based methods for quantifying the dependability attributes in software-based instrumentation as well as control systems in safety critical applications has proved to be a significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between various dependability attributes as a whole and of how such dependencies are formed.
To address the need for quantification and give a more objective basis to the review process -- therefore reducing regulatory uncertainty -- measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus, obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below. Each stage is further examined in its corresponding chapter. Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method and the causal maps obtained. Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute. The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g. security failure. The extraction was based on the analysis of expert elicitation results obtained in Stage 1. 
Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter. Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and which measures are further required. Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews. Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. The ability to derive such a quantification model shows that the causal models and measurements derived from the previous stages (Stage 1 to Stage 5) can form the technical basis for developing dependability quantification models. Scope restrictions have led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e. the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost-effective selection of the duration of the requirements phase.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five evaluation criteria. The results show that the model gives better fitting and predictive performance.
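The proposed model extends the NHPP family; as a minimal baseline, the classic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) (no coverage or removal-efficiency terms) can be fitted to cumulative failure counts with a few lines of SciPy. The failure data below are hypothetical, not from the paper's three data sets:

```python
import numpy as np
from scipy.optimize import curve_fit

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative failures,
    where a = total expected faults and b = per-fault detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts at weekly test intervals
t = np.arange(1, 13, dtype=float)
m = np.array([8, 15, 21, 26, 30, 33, 36, 38, 40, 41, 42, 43], float)

(a_hat, b_hat), _ = curve_fit(go_mean_value, t, m, p0=(50.0, 0.1))
latent = a_hat - m[-1]                              # faults estimated still latent
intensity = a_hat * b_hat * np.exp(-b_hat * t[-1])  # current failure intensity
print(f"a={a_hat:.1f} total faults, b={b_hat:.3f}, latent={latent:.1f}")
```

The fitted `a_hat` exceeding the last observed count is what lets such models forecast residual faults; richer SRGMs like the one proposed here replace the constant detection rate b with coverage- and efficiency-dependent terms.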
NASA Technical Reports Server (NTRS)
Simmons, D. B.
1975-01-01
The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
Understanding software faults and their role in software reliability modeling
NASA Technical Reports Server (NTRS)
Munson, John C.
1994-01-01
This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability become better understood in the modeling process, this information begins to have important implications on the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. 
The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. 
It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
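The collinearity argument above (LOC and statement count moving together, principal components decorrelating them) is easy to demonstrate numerically. A minimal sketch on simulated module metrics; the metric names and distributions are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated metrics for 200 modules: Stmts is nearly collinear with LOC
loc   = rng.normal(500, 150, 200)
stmts = 0.8 * loc + rng.normal(0, 20, 200)   # near one-to-one with LOC
v_g   = rng.normal(15, 5, 200)               # cyclomatic complexity

X = np.column_stack([loc, stmts, v_g])
Xc = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each metric

# Principal components = eigenvectors of the correlation matrix
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
scores = Xc @ eigvecs[:, order]              # orthogonal component scores

corr_raw = np.corrcoef(loc, stmts)[0, 1]
corr_pc  = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
print(f"raw LOC-Stmts correlation: {corr_raw:.2f}, PC1-PC2: {corr_pc:.2e}")
```

Regressing quality measures on the `scores` columns instead of the raw metrics avoids the unstable coefficients that the text warns about, since the components are uncorrelated by construction.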
Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K
2011-04-01
Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability."
Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual measurements, while interobserver APAs ranged from 91% to 96% for QMA versus 57% to 63% for digitized manual measurements. The use of QMA software substantially improved the reliability of lumbar intervertebral measurements and the classification of instability based on flexion-extension radiographs.
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot-standby configuration including three identical operator stations and three real-time control computers which connect to each other through dual networks. The functions of motion control and redundancy management of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of task loose synchronization, majority voting and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
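The paper's Markov analysis is not reproduced in the abstract, but a common sketch of such a model treats the three redundant controllers as a birth-death chain over the number of failed units, with voting requiring at least two healthy units. A minimal steady-state availability computation under assumed (hypothetical) failure and repair rates:

```python
import numpy as np

lam, mu = 1e-4, 1e-2   # assumed per-hour failure and repair rates

# States: number of failed controllers (0..3); system is up with <= 1 failed,
# since 2-of-3 majority voting needs two healthy controllers
Q = np.array([
    [-3*lam,       3*lam,       0.0,   0.0],
    [    mu, -(mu+2*lam),     2*lam,   0.0],
    [   0.0,          mu, -(mu+lam),   lam],
    [   0.0,         0.0,        mu,   -mu],
])

# Solve pi Q = 0 subject to sum(pi) = 1 via least squares
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # at least 2 of 3 controllers healthy
print(f"steady-state availability: {availability:.8f}")
```

Varying `lam` and `mu` here is exactly the parameter-sensitivity study the abstract describes: with repair an order of magnitude faster than failure, availability stays very close to 1.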
A specialized plug-in software module for computer-aided quantitative measurement of medical images.
Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H
2003-12-01
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.
Identification and evaluation of software measures
NASA Technical Reports Server (NTRS)
Card, D. N.
1981-01-01
A large scale, systematic procedure for identifying and evaluating measures that meaningfully characterize one or more elements of software development is described. The background of this research, the nature of the data involved, and the steps of the analytic procedure are discussed. An example of the application of this procedure to data from real software development projects is presented. As the term is used here, a measure is a count or numerical rating of the occurrence of some property. Examples of measures include lines of code, number of computer runs, person hours expended, and degree of use of top down design methodology. Measures appeal to the researcher and the manager as a potential means of defining, explaining, and predicting software development qualities, especially productivity and reliability.
2014-01-01
Background: A balance test provides important information such as the standard to judge an individual’s functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Methods: Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). Results: The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. Conclusion: The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment. PMID:24912769
Park, Dae-Sung; Lee, GyuChang
2014-06-10
Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J
1997-01-01
Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
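The EMERALD neural network itself is proprietary, but the general approach (a classifier trained on principal-component scores of design metrics to flag fault-prone modules) can be sketched with a single logistic unit. Everything below is synthetic and illustrative, not Nortel's model or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic module data: two principal-component scores per module;
# fault-prone modules tend to have higher component scores
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0.8).astype(float)

# Minimal logistic-regression "neuron" trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted fault-proneness
    g = p - y                                # gradient of log-loss
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5)
accuracy = (pred == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Modules flagged by such a model would receive extra review and testing early in development, which is the risk-reduction use the abstract describes.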
FloWave.US: validated, open-source, and flexible software for ultrasound blood flow analysis.
Coolbaugh, Crystal L; Bush, Emily C; Caskey, Charles F; Damon, Bruce M; Towse, Theodore F
2016-10-01
Automated software improves the accuracy and reliability of blood velocity, vessel diameter, blood flow, and shear rate ultrasound measurements, but existing software offers limited flexibility to customize and validate analyses. We developed FloWave.US, open-source software to automate ultrasound blood flow analysis, and demonstrated the validity of its blood velocity (aggregate relative error, 4.32%) and vessel diameter (0.31%) measures with a skeletal muscle ultrasound flow phantom. Compared with a commercial, manual analysis software program, FloWave.US produced equivalent in vivo cardiac cycle time-averaged mean (TAMean) velocities at rest and following a 10-s muscle contraction (mean bias <1 pixel for both conditions). Automated analysis of ultrasound blood flow data was 9.8 times faster than the manual method. Finally, a case study of a lower extremity muscle contraction experiment highlighted the ability of FloWave.US to measure small fluctuations in TAMean velocity, vessel diameter, and mean blood flow at specific time points in the cardiac cycle. In summary, the collective features of our newly designed software (accuracy, reliability, reduced processing time, cost-effectiveness, and flexibility) offer advantages over existing proprietary options. Further, public distribution of FloWave.US allows researchers to easily access and customize code to adapt ultrasound blood flow analysis to a variety of vascular physiology applications. Copyright © 2016 the American Physiological Society.
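The mean blood flow such tools report follows the standard Doppler relation, flow = TAMean velocity × vessel cross-sectional area. The sketch below uses this generic formula with hypothetical values; it does not reflect FloWave.US internals.

```python
import math

def blood_flow_ml_per_min(tamean_cm_per_s, diameter_cm):
    # Volumetric flow = time-averaged mean velocity x vessel cross-sectional
    # area, converted from cm^3/s to mL/min (1 cm^3 == 1 mL).
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return tamean_cm_per_s * area_cm2 * 60.0

# Hypothetical resting values: 10 cm/s TAMean velocity, 0.4 cm vessel diameter
flow = blood_flow_ml_per_min(10.0, 0.4)
```

The formula makes clear why small errors in diameter matter most: flow scales with the square of the diameter, so the 0.31% diameter error reported above contributes roughly twice that to the flow error.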
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
To assess calibration reliability and automate that assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing more efficient ones has been proposed and tested. Software was also studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. Using such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole class of typical operator errors.
Modification Site Localization in Peptides.
Chalkley, Robert J
2016-01-01
There are a large number of search engines designed to take mass spectrometry fragmentation spectra and match them to peptides from proteins in a database. These peptides could be unmodified, but they could also bear modifications that were added biologically or during sample preparation. As a measure of reliability for the peptide identification, software normally calculates how likely a given quality of match could have been achieved at random, most commonly through the use of target-decoy database searching (Elias and Gygi, Nat Methods 4(3): 207-214, 2007). Matching the correct peptide but with the wrong modification localization is not a random match, so results with this error will normally still be assessed as reliable identifications by the search engine. Hence, an extra step is required to determine site localization reliability, and the software approaches to measure this are the subject of this part of the chapter.
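The target-decoy idea can be sketched as follows (a minimal generic implementation, not the code of any particular search engine): the false discovery rate at a score threshold is estimated by the ratio of decoy to target matches accepted at or above that threshold.

```python
def fdr_curve(matches):
    # matches: list of (score, is_decoy) pairs from a combined target-decoy
    # search. Returns (score, estimated FDR) at each threshold, scanning from
    # the best score downward; FDR is estimated as decoys/targets accepted.
    curve = []
    targets = decoys = 0
    for score, is_decoy in sorted(matches, key=lambda m: m[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        curve.append((score, decoys / max(targets, 1)))
    return curve

# Toy example: three target matches and one decoy match
curve = fdr_curve([(10.0, False), (9.0, False), (8.0, True), (7.0, False)])
```

As the abstract notes, this estimate cannot catch a correct peptide with a mislocalized modification: such a match is non-random, scores like a true hit, and therefore needs a separate site-localization score.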
Reliability analysis for digital adolescent idiopathic scoliosis measurements.
Kuklo, Timothy R; Potter, Benjamin K; O'Brien, Michael F; Schroeder, Teresa M; Lenke, Lawrence G; Polly, David W
2005-04-01
Analysis of adolescent idiopathic scoliosis (AIS) requires a thorough clinical and radiographic evaluation to completely assess the three-dimensional deformity. Recently, these radiographic parameters have been analyzed for reliability and reproducibility following manual measurements; however, most of these parameters have not been analyzed with regard to digital measurements. The purpose of this study is to determine the intra- and interobserver reliability of common scoliosis radiographic parameters using a digital software measurement program. Thirty sets of preoperative (posteroanterior [PA], lateral, and side-bending [SB]) and postoperative (PA and lateral) radiographs were analyzed by three independent observers on two separate occasions using a software measurement program (PhDx, Albuquerque, NM). Coronal measures included main thoracic (MT) and thoracolumbar-lumbar (TL/L) Cobb, SB MT Cobb, MT and TL/L apical vertical translation (AVT), C7 to center sacral vertical line (CSVL), T1 tilt, LIV tilt, disk below lowest instrumented vertebra (LIV), coronal balance, and Risser, whereas sagittal measures included T2-T5, T5-T12, T2-T12, T10-L2, T12-S1, and sagittal balance. Analysis of variance for repeated measures or Cohen three-way kappa correlation coefficient analysis was performed as appropriate to calculate the intra- and interobserver reliability for each parameter. The majority of the radiographic parameters assessed demonstrated good or excellent intra- and interobserver reliability. 
The relationship of the LIV to the CSVL (intraobserver kappa = 0.48-0.78, fair to excellent; interobserver kappa = 0.34-0.41, fair to poor), interobserver measurement of AVT (rho = 0.49-0.73, low to good), Risser grade (intraobserver rho = 0.41-0.97, low to excellent; interobserver rho = 0.60-0.70, fair to good), intraobserver measurement of the angulation of the disk inferior to the LIV (rho = 0.53-0.88, fair to good), apical Nash-Moe vertebral rotation (intraobserver rho = 0.50-0.85, fair to good; interobserver rho = 0.53-0.59, fair), and especially regional thoracic kyphosis from T2 to T5 (intraobserver rho = 0.22-0.65, poor to fair; interobserver rho = 0.33-0.47, low) demonstrated lesser reliability. In general, preoperative measures demonstrated greater reliability than postoperative measures, and coronal angular measures were more reliable than sagittal measures. Most common radiographic parameters for AIS assessment demonstrated good or excellent reliability for digital measurement and can be recommended for routine clinical and academic use. Preoperative assessments and coronal measures may be more reliable than postoperative and sagittal measurements. The reliability of digital measurements will be increasingly important as digital radiographic viewing becomes commonplace.
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
Endoscopic Stone Measurement During Ureteroscopy.
Ludwig, Wesley W; Lim, Sunghwan; Stoianovici, Dan; Matlaga, Brian R
2018-01-01
Currently, stone size cannot be accurately measured while performing flexible ureteroscopy (URS). We developed novel software for ureteroscopic stone size measurement and then evaluated its performance. A novel application capable of measuring stone fragment size, based on the known distance of the basket tip in the ureteroscope's visual field, was designed and calibrated in a laboratory setting. Complete URS procedures were recorded, and 30 stone fragments were extracted and measured using digital calipers. The novel software program was applied to the recorded URS footage to obtain ureteroscope-derived stone size measurements. These ureteroscope-derived measurements were then compared with the actual measured fragment size. The median longitudinal and transversal errors were 0.14 mm (95% confidence interval [CI] 0.1, 0.18) and 0.09 mm (95% CI 0.02, 0.15), respectively. The overall software accuracy and precision were 0.17 and 0.15 mm, respectively. The longitudinal and transversal measurements obtained by the software and digital calipers were highly correlated (r = 0.97 and 0.93). Neither stone size nor stone type was correlated with measurement error. This novel method and software reliably measured stone fragment size during URS. The software ultimately has the potential to make URS safer and more efficient.
Comparison of two methods for cardiac output measurement in critically ill patients.
Saraceni, E; Rossi, S; Persona, P; Dan, M; Rizzi, S; Meroni, M; Ori, C
2011-05-01
The aim of recent haemodynamic monitoring has been to obtain continuous and reliable measures of cardiac output (CO) and indices of preload responsiveness. Many of these methods are based on arterial pressure waveform analysis. The aim of our study was to assess the accuracy of CO measurements obtained by FloTrac/Vigileo, software version 1.07 and the new version 1.10 (Edwards Lifesciences LLC, Irvine, CA, USA), compared with CO measurements obtained by bolus thermodilution via pulmonary artery catheterization (PAC) in the intensive care setting. In 21 critically ill patients (enrolled in two University Hospitals) requiring invasive haemodynamic monitoring, PAC and FloTrac/Vigileo transducers connected to the arterial pressure line were placed. Simultaneous measurements of CO by the two methods (FloTrac/Vigileo and thermodilution) were obtained three times a day for 3 consecutive days, when possible. The level of concordance between the two methods was assessed by the procedure suggested by Bland and Altman. One hundred and forty-one pairs of measurements (provided by thermodilution and by both the 1.07 and 1.10 FloTrac/Vigileo versions) were obtained in 21 patients (seven of them trauma patients) with a mean (SD) age of 59 (16) yr. The Pearson product moment coefficient was 0.62 (P<0.001). The bias was -0.18 litre min(-1). The limits of agreement were 4.54 and -4.90 litre min(-1), respectively. Our data show a poor level of concordance between measures provided by the two methods. We found an underestimation of CO values measured with the 1.07 software version of FloTrac for supranormal values of CO. The new software (1.10) has been improved in order to correct this bias; however, its reliability is still poor. On the basis of our data, we therefore conclude that neither software version of FloTrac/Vigileo provided reliable estimation of CO in our intensive care unit setting.
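The Bland-Altman procedure used in this and the following studies can be sketched in a few lines (a generic illustration with made-up paired readings, not the study's data):

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    # Bias = mean difference between paired measurements; the 95% limits of
    # agreement are bias +/- 1.96 * SD of the differences.
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired CO readings (litre/min) from two monitors
bias, lower, upper = bland_altman_limits([5.0, 6.0, 7.0, 8.0], [5.0, 5.0, 7.0, 9.0])
```

The study's verdict follows from exactly this computation: a small bias (-0.18 litre/min) is not enough when the limits of agreement (-4.90 to 4.54 litre/min) span a clinically unacceptable range.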
Reliability of laboratory measurement of human food intake.
Laessle, R; Geiermann, L
2012-02-01
The universal eating monitor (UEM) of Kissileff for laboratory measurement of food intake was modified and used with newly developed software to compute cumulative intake data. To explore the measurement precision of the UEM, an investigation of the test-retest reliability of food intake parameters was conducted. The intake characteristics of 125 males and females were measured repeatedly in the laboratory with a measurement interval of 1 week. Pudding of preferred flavour served as the test meal. Test-retest reliability of intake characteristics ranged from .49 (change of eating rate) to .89 (initial eating rate). All test-retest correlations were highly significant. Sex, BMI, and eating habits according to TFEQ factors had no significant effects on the reliability of intake characteristics. The test-retest reliability of the laboratory intake measures is comparable to that of personality questionnaires, for which values above .80 are expected. Reliability coefficients are valid independent of sex, BMI, or trait characteristics of eating behaviour. Copyright © 2011 Elsevier Ltd. All rights reserved.
A methodology for producing reliable software, volume 1
NASA Technical Reports Server (NTRS)
Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.
1976-01-01
An investigation into the areas having an impact on producing reliable software including automated verification tools, software modeling, testing techniques, structured programming, and management techniques is presented. This final report contains the results of this investigation, analysis of each technique, and the definition of a methodology for producing reliable software.
Measuring the Pain Area: An Intra- and Inter-Rater Reliability Study Using Image Analysis Software.
Dos Reis, Felipe Jose Jandre; de Barros E Silva, Veronica; de Lucena, Raphaela Nunes; Mendes Cardoso, Bruno Alexandre; Nogueira, Leandro Calazans
2016-01-01
Pain drawings have frequently been used for clinical information and research. The aim of this study was to investigate intra- and inter-rater reliability of area measurements performed on pain drawings. Our secondary objective was to verify the reliability when using computers with different screen sizes, both with and without mouse hardware. Pain drawings were completed by patients with chronic neck pain or neck-shoulder-arm pain. Four independent examiners participated in the study. Examiners A and B used the same computer with a 16-inch screen and wired mouse hardware. Examiner C used a notebook with a 16-inch screen and no mouse hardware, and Examiner D used a computer with an 11.6-inch screen and a wireless mouse. Image measurements were obtained using GIMP and NIH ImageJ computer programs. The length of each image was measured using GIMP software in order to set the scale in ImageJ. Thus, each marked area was encircled and the total surface area (cm²) was calculated for each pain drawing measurement. A total of 117 areas were identified and 52 pain drawings were analyzed. The intrarater reliability between all examiners was high (ICC = 0.989). The inter-rater reliability was also high. No significant differences were observed when using different screen sizes or when using or not using the mouse hardware. This suggests that the precision of these measurements is acceptable for the use of this method as a measurement tool in clinical practice and research. © 2014 World Institute of Pain.
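The ICC figures reported in these reliability studies can be illustrated with a generic ICC(2,1) computation, the two-way random-effects, absolute-agreement, single-measurement form (a sketch of the standard formula; the studies do not all specify which variant they used):

```python
import numpy as np

def icc2_1(ratings):
    # ICC(2,1): two-way random effects, absolute agreement, single measurement.
    # ratings: n subjects x k raters.
    X = np.asarray(ratings, dtype=float)
    n, k = X.shape
    grand = X.mean()
    msr = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    sst = ((X - grand) ** 2).sum()
    mse = (sst - (n - 1) * msr - (k - 1) * msc) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

perfect = icc2_1([[1, 1], [2, 2], [3, 3]])  # identical raters
offset = icc2_1([[1, 2], [2, 3], [3, 4]])   # rater 2 systematically +1
```

The second example shows why the absolute-agreement form matters for measurement tools: a consistent systematic offset between raters drags the ICC below 1 even though the rankings agree perfectly.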
Reliability improvement methods for sapphire fiber temperature sensors
NASA Astrophysics Data System (ADS)
Schietinger, C.; Adams, B.
1991-08-01
Mechanical, optical, electrical, and software design improvements can be brought to bear to enhance the reliability of fiber-optic sapphire-fiber temperature measurement tools in harsh environments. The optical fiber thermometry (OFT) equipment discussed is used in numerous process industries and generally involves a sapphire sensor, an optical transmission cable, and a microprocessor-based signal analyzer. OFT technology incorporating sensors for corrosive environments, hybrid sensors, and two-wavelength measurements is discussed.
Quantitative Measures for Software Independent Verification and Validation
NASA Technical Reports Server (NTRS)
Lee, Alice
1996-01-01
As software is maintained or reused, it undergoes an evolution which tends to increase the overall complexity of the code. To understand the effects of this, we brought in statistics experts and leading researchers in software complexity, reliability, and their interrelationships. These experts' project has resulted in our ability to statistically correlate specific code complexity attributes, in orthogonal domains, to errors found over time in the HAL/S flight software which flies in the Space Shuttle. Although only a prototype-tools experiment, the result of this research appears to be extendable to all other NASA software, given appropriate data similar to that logged for the Shuttle onboard software. Our research has demonstrated that a more complete domain coverage can be mathematically demonstrated with the approach we have applied, thereby ensuring full insight into the cause-and-effect relationship between the complexity of a software system and the fault density of that system. By applying the operational profile, we can characterize the dynamic effects of software path complexity under this same approach. We now have the ability to measure specific attributes which have been statistically demonstrated to correlate to increased error probability, and to know which actions to take, for each complexity domain. Shuttle software verifiers can now monitor changes in software complexity, assess the added or decreased risk of software faults in modified code, and determine necessary corrections. The reports, tool documentation, user's guides, and new approach that have resulted from this research effort represent advances in the state of the art of software quality and reliability assurance. Details describing how to apply this technique to other NASA code are contained in this document.
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac
2017-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGMs); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management to improve the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions. PMID:29278255
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
In the article, the process of verification (calibration) of the secondary equipment of oil metering units is considered. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a hardware and software system that provides automated verification and calibration. The hardware part of this complex switches the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the specified algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors, and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in automatic verification mode (with an open communication protocol) or in semi-automatic verification mode (without one). A distinctive feature of the approach is a universal signal switch operating under software control, which can be configured for various verification (calibration) methods and thus covers the entire range of controllers of metering unit secondary equipment. Automatic verification with the hardware and software system shortens verification time by a factor of 5 to 10 and increases measurement reliability by excluding the influence of the human factor.
Burkhard, John Patrik Matthias; Dietrich, Ariella Denise; Jacobsen, Christine; Roos, Malgorzota; Lübbers, Heinz-Theo; Obwegeser, Joachim Anton
2014-10-01
This study aimed to compare the reliability of three different imaging software programs for measuring the PAS and, concurrently, to investigate the morphological changes in oropharyngeal structures in mandibular prognathic patients before and after orthognathic surgery using 2D and 3D analysis techniques. The study consists of 11 randomly chosen patients (8 females and 3 males) who underwent maxillomandibular treatment for correction of Class III anteroposterior mandibular prognathism at the University Hospital in Zurich. A set of standardized LCR and CBCT scans were obtained from each subject preoperatively (T0), 3 months after surgery (T1), and 3 months to 2 years postoperatively (T2). Morphological changes in the posterior airway space (PAS) were evaluated longitudinally by two different observers with three different imaging software programs (OsiriX(®) 64-bit, Switzerland; Mimics(®), Belgium; BrainLab(®), Germany) and manually by analyzing cephalometric X-rays. A significant increase in the upper airway dimensions before and after surgery occurred in all measured cases. All other cephalometric distances showed no statistically significant alterations. Measuring the volume of the PAS showed no significant changes in all cases. All three software programs showed similar outputs in both cephalometric analysis and the 3D measuring technique. Three-dimensional assessment of the posterior airway appears to be considerably more reliable and precise for evaluating postoperative changes than conventional radiography, and its reliability is also higher than that of the corresponding manual method. In cases of Class III mandibular prognathism treated with bilateral split osteotomy of the mandible and simultaneous maxillary advancement, the negative effects of a PAS volume decrease may be reduced, which might prevent a developing OSAS. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Barth, Martin; Weiß, Christel; Brenke, Christopher; Schmieder, Kirsten
2017-04-01
Software-based planning of a spinal implant carries the promise of precision and superior results. The purpose of the study was to analyze the measurement reliability, prognostic value, and scientific use of a surgical planning software in patients receiving anterior cervical discectomy and fusion (ACDF). Lateral neutral, flexion, and extension radiographs of patients receiving tailored cages as suggested by the planning software were available for analysis. Differences in vertebral wedging angles and segmental height of all cervical segments were determined at different timepoints using intraclass correlation coefficients (ICC). Cervical lordosis (C2/C7), segmental heights, and global and segmental range of motion (ROM) were determined at different timepoints. Clinical and radiological variables were correlated 12 months after surgery. 282 radiographs of 35 patients with a mean age of 53.1 ± 12.0 years were analyzed. Measurement of segmental height was highly accurate, with an ICC near 1, but angle measurements showed low ICC values. Likewise, the ICCs of the predicted values were low. Postoperatively, there was a significant decrease in segmental height (p < 0.0001) and loss of C2/C7 ROM (p = 0.036). ROM of unfused segments also significantly decreased (p = 0.016). A high NDI was associated with low subsidence rates. The surgical planning software showed high accuracy in the measurement of height differences and lower accuracy in angle measurements. Both the predicted height and angle values were essentially arbitrary. Global ROM, as well as ROM of the fused and intact segments, is restricted after ACDF.
Securing Ground Data System Applications for Space Operations
NASA Technical Reports Server (NTRS)
Pajevski, Michael J.; Tso, Kam S.; Johnson, Bryan
2014-01-01
The increasing prevalence and sophistication of cyber attacks has prompted the Multimission Ground Systems and Services (MGSS) Program Office at Jet Propulsion Laboratory (JPL) to initiate the Common Access Manager (CAM) effort to protect software applications used in Ground Data Systems (GDSs) at JPL and other NASA Centers. The CAM software provides centralized services and software components used by GDS subsystems to meet access control requirements and ensure data integrity, confidentiality, and availability. In this paper we describe the CAM software; examples of its integration with spacecraft commanding software applications and an information management service; and measurements of its performance and reliability.
The Infeasibility of Experimental Quantification of Life-Critical Software Reliability
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance (that separately programmed versions fail independently) is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.
The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance (that separately programmed versions fail independently) is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
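The scale of the testing problem can be made concrete with a standard zero-failure argument (an illustrative calculation consistent with, but not taken from, the paper): observing no failures in t hours of testing rules out a constant failure rate above λ at confidence c only when exp(-λt) ≤ 1 - c, i.e. t ≥ -ln(1 - c)/λ.

```python
import math

def failure_free_hours_needed(rate_bound, confidence):
    # Zero-failure bound: hours of failure-free testing needed to claim the
    # failure rate is below rate_bound at the given confidence level,
    # from exp(-rate_bound * t) <= 1 - confidence.
    return -math.log(1.0 - confidence) / rate_bound

# Ultra-reliability target: 10^-9 failures/hour, at only 90% confidence
hours = failure_free_hours_needed(1e-9, 0.90)
years = hours / (24.0 * 365.0)
```

The result is on the order of 10^9 test hours, i.e. hundreds of thousands of calendar years on a single system, which is the quantitative core of the infeasibility argument.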
Czarnota, Judith; Hey, Jeremias; Fuhrmann, Robert
2016-01-01
The purpose of this work was to determine the reliability and validity of measurements performed on digital models with a desktop scanner and analysis software in comparison with measurements performed manually on conventional plaster casts. A total of 20 pairs of plaster casts reflecting the intraoral conditions of 20 fully dentate individuals were digitized using a three-dimensional scanner (D700; 3Shape). A series of defined parameters were measured both on the resultant digital models with analysis software (Ortho Analyzer; 3Shape) and on the original plaster casts with a digital caliper (Digimatic CD-15DCX; Mitutoyo). Both measurement series were repeated twice and analyzed for intrarater reliability based on intraclass correlation coefficients (ICCs). The results from the digital models were evaluated for their validity against the casts by calculating mean-value differences and associated 95% limits of agreement (Bland-Altman method). Statistically significant differences were identified via a paired t test. Significant differences were obtained for 16 of 24 tooth-width measurements, for 2 of 5 sites of contact-point displacement in the mandibular anterior segment, for overbite, for maxillary intermolar distance, for Little's irregularity index, and for the summation indices of maxillary and mandibular incisor width. Overall, however, both the mean differences between the results obtained on the digital models versus on the plaster casts and the dispersion ranges associated with these differences suggest that the deviations incurred by the digital measuring technique are not clinically significant. Digital models are adequately reproducible and valid to be employed for routine measurements in orthodontic practice.
Orfanos, Stavros; Akther, Syeda Ferhana; Abdul-Basit, Muhammad; McCabe, Rosemarie; Priebe, Stefan
2017-02-10
Research has shown that interactions in group therapies for people with schizophrenia are associated with a reduction in negative symptoms. However, it is unclear which specific interactions in groups are linked with these improvements. The aims of this exploratory study were to i) develop and test the reliability of using video-annotation software to measure interactions in group therapies in schizophrenia and ii) explore the relationship between interactions in group therapies for schizophrenia and clinically relevant changes in negative symptoms. Video-annotation software was used to annotate interactions from participants selected across nine video-recorded out-patient therapy groups (N = 81). Using the Individual Group Member Interpersonal Process Scale, interactions were coded from participants who demonstrated either a clinically significant improvement (N = 9) or no change (N = 8) in negative symptoms at the end of therapy. Interactions were measured from the first and last sessions of attendance (>25 h of therapy). Inter-rater reliability between two independent raters was measured. Binary logistic regression analysis was used to explore the association between the frequency of interactive behaviors and changes in negative symptoms, assessed using the Positive and Negative Syndrome Scale. Of the 1275 statements that were annotated using ELAN, 1191 (93%) had sufficient audio and visual quality to be coded using the Individual Group Member Interpersonal Process Scale. Rater agreement was high across all interaction categories (>95% average agreement). A higher frequency of self-initiated statements measured in the first session was associated with improvements in negative symptoms. The frequency of questions and giving advice measured in the first session of attendance was also associated with improvements in negative symptoms, although this was only a trend.
Video-annotation software can be used to reliably identify interactive behaviors in groups for schizophrenia. The results suggest that proactive communicative gestures, as assessed by the video-analysis, predict outcomes. Future research should use this novel method in larger and clinically different samples to explore which aspects of therapy facilitate such proactive communication early on in therapy.
Tevatron beam position monitor upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolbers, Stephen; Banerjee, B.; Barker, B.
2005-05-01
The Tevatron Beam Position Monitor (BPM) readout electronics and software have been upgraded to improve measurement precision, functionality and reliability. The original system, designed and built in the early 1980's, became inadequate for current and future operations of the Tevatron. The upgraded system consists of 960 channels of new electronics to process analog signals from 240 BPMs, new front-end software, new online and controls software, and modified applications to take advantage of the improved measurements and support the new functionality. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton position measurements. Measurements using the new system are presented that demonstrate its improved resolution and overall performance.
Development and Evaluation of a Measure of Library Automation.
ERIC Educational Resources Information Center
Pungitore, Verna L.
1986-01-01
Construct validity and reliability estimates indicate that a study designed to measure utilization of automation in public and academic libraries was successful in tentatively identifying and measuring three subdimensions of level of automation: quality of hardware, method of software development, and number of automation specialists. Questionnaire…
Technical Concept Document. Central Archive for Reusable Defense Software (CARDS)
1994-02-28
February 1994 INFORMAL TECHNICAL REPORT For The SOFTWARE TECHNOLOGY FOR ADAPTABLE, RELIABLE SYSTEMS (STARS): Technical Concept Document, Central Archive for Reusable Defense Software (CARDS). This document, developed under the Software Technology for Adaptable, Reliable Systems program, was produced in accordance with the DFARS Special Works Clause.
Reliability of Wearable Inertial Measurement Units to Measure Physical Activity in Team Handball.
Luteberget, Live S; Holme, Benjamin R; Spencer, Matt
2018-04-01
To assess the reliability and sensitivity of commercially available inertial measurement units to measure physical activity in team handball. Twenty-two handball players were instrumented with 2 inertial measurement units (OptimEye S5; Catapult Sports, Melbourne, Australia) taped together. They participated in either a laboratory assessment (n = 10) consisting of 7 team handball-specific tasks or field assessment (n = 12) conducted in 12 training sessions. Variables, including PlayerLoad™ and inertial movement analysis (IMA) magnitude and counts, were extracted from the manufacturers' software. IMA counts were divided into intensity bands of low (1.5-2.5 m·s-1), medium (2.5-3.5 m·s-1), high (>3.5 m·s-1), medium/high (>2.5 m·s-1), and total (>1.5 m·s-1). Reliability between devices and sensitivity was established using coefficient of variation (CV) and smallest worthwhile difference (SWD). Laboratory assessment: IMA magnitude showed a good reliability (CV = 3.1%) in well-controlled tasks. CV increased (4.4-6.7%) in more-complex tasks. Field assessment: Total IMA counts (CV = 1.8% and SWD = 2.5%), PlayerLoad (CV = 0.9% and SWD = 2.1%), and their associated variables (CV = 0.4-1.7%) showed a good reliability, well below the SWD. However, the CV of IMA increased when categorized into intensity bands (2.9-5.6%). The reliability of IMA counts was good when data were displayed as total, high, or medium/high counts. A good reliability for PlayerLoad and associated variables was evident. The CV of the previously mentioned variables was well below the SWD, suggesting that OptimEye's inertial measurement unit and its software are sensitive for use in team handball.
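The CV-versus-SWD comparison used above can be sketched with standard definitions: a between-device typical-error CV from paired units, and an SWD of 0.2 between-subject SDs (Cohen's small effect). The data and the exact CV convention here are assumptions, not the study's:

```python
import math
import statistics as st

def between_device_cv(dev1, dev2):
    """Typical-error CV% between two units worn together:
    SD of paired differences / sqrt(2), as a % of the grand mean."""
    diffs = [a - b for a, b in zip(dev1, dev2)]
    typical_error = st.stdev(diffs) / math.sqrt(2)
    return 100 * typical_error / st.mean(dev1 + dev2)

def swd_percent(values):
    """Smallest worthwhile difference: 0.2 x between-subject SD,
    expressed as a % of the mean."""
    return 100 * 0.2 * st.stdev(values) / st.mean(values)

# Hypothetical PlayerLoad totals from the two taped-together units
unit_a = [412.0, 388.5, 455.2, 401.7, 430.3]
unit_b = [409.8, 391.0, 452.6, 404.1, 428.9]
cv = between_device_cv(unit_a, unit_b)
swd = swd_percent([(a + b) / 2 for a, b in zip(unit_a, unit_b)])
```

A measure is considered sensitive when its CV falls below the SWD, which is the criterion the abstract applies to PlayerLoad and total IMA counts.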
Software Quality Assurance Metrics
NASA Technical Reports Server (NTRS)
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements with a degree of excellence and refinement in a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated; the set includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product: the process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce it, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in the statistical verification of very highly reliable software, such as that used by digital avionics in commercial aircraft.
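Under the classical Jelinski-Moranda form of this model, each of N residual faults contributes an equal rate φ, so the i-th inter-failure time is exponential with rate (N − i + 1)φ. A small sketch of the expected waits (the parameter values are illustrative):

```python
def jm_expected_interfailure(n_faults: int, phi: float):
    """Jelinski-Moranda: the i-th inter-failure time is exponential
    with rate (n_faults - i + 1) * phi, so its expectation is the
    reciprocal; returns the sequence of expected waits."""
    return [1.0 / ((n_faults - i + 1) * phi)
            for i in range(1, n_faults + 1)]

# Five residual faults, each contributing 0.01 failures per hour:
times = jm_expected_interfailure(5, 0.01)
# expected waits lengthen as faults are removed: 20, 25, 33.3, 50, 100 h
```

The lengthening waits show why verifying very high reliability is hard: the last few faults take the longest to surface, exactly when testing budgets run out.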
NASA Technical Reports Server (NTRS)
Dunham, J. R. (Editor); Knight, J. C. (Editor)
1982-01-01
The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short-term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short- and long-term research.
Developing Confidence Limits For Reliability Of Software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1991-01-01
Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
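The Moranda geometric de-eutrophication model named above assumes the failure rate drops by a constant factor k after each repair. A minimal sketch of the rate sequence (D and k here are illustrative values, not from the technique's derivation):

```python
def geometric_deeutrophication_rates(d: float, k: float, n: int):
    """Moranda geometric de-eutrophication: the failure rate after
    the i-th repair is d * k**i with 0 < k < 1, so each fix shrinks
    the rate by the constant factor k."""
    return [d * k ** i for i in range(n)]

# Initial rate 0.5 failures per unit time, each fix retains 80% of it:
rates = geometric_deeutrophication_rates(d=0.5, k=0.8, n=6)
```

The geometric structure is what makes the pivotal construction of exact confidence bounds tractable: successive inter-failure times are exponentials whose rates differ by a known ratio.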
An experiment in software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected were used to confirm the results of two Boeing studies, NASA-CR-165836 (Software Reliability: Repetitive Run Experimentation and Modeling) and NASA-CR-172378 (Software Reliability: Additional Investigations into Modeling With Replicated Experiments). That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that a program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
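The log-linear pattern the study confirms means log(rate) is roughly linear in the fault index, i.e. per-fault rates span orders of magnitude rather than being equal. A sketch of fitting that pattern by least squares, using hypothetical rates, not the experiment's data:

```python
import math

def fit_loglinear(rates):
    """Least-squares fit of log(rate_i) = a + b * i, the 'log-linear'
    pattern reported for per-fault error rates; returns (a, b)."""
    n = len(rates)
    xs = list(range(1, n + 1))
    ys = [math.log(r) for r in rates]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

# Hypothetical per-fault error rates spanning several orders of magnitude
rates = [1e-2, 3e-3, 1e-3, 3e-4, 1e-4]
a, b = fit_loglinear(rates)   # b < 0: later-indexed faults are rarer
```

A strongly negative slope b is what rejects the equal-rates hypothesis: under equal rates the fitted slope would be near zero.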
Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach.
Friedländer, Felix; Bohmann, Ferdinand; Brunkhorst, Max; Chae, Ju-Hee; Devraj, Kavi; Köhler, Yvette; Kraft, Peter; Kuhn, Hannah; Lucaciu, Alexandra; Luger, Sebastian; Pfeilschifter, Waltraud; Sadler, Rebecca; Liesz, Arthur; Scholtyschik, Karolina; Stolz, Leonie; Vutukuri, Rajkumar; Brunkhorst, Robert
2017-08-01
Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One reason is presumed to be the low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of the ischemic damage as commonly measured by free-hand planimetry of TTC-stained brain sections, is subject to an unrecognized low inter-rater and test-retest reliability, with strong implications for statistical power and bias. As an alternative approach, we suggest a simple, open-source, software-assisted method that takes advantage of automatic thresholding techniques. The validity of, and the improvement in reliability from, an automated method of tMCAO infarct volumetry are demonstrated. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry might help to improve reproducibility and therefore the robustness of bench-to-bedside translation.
Survey of Software Assurance Techniques for Highly Reliable Systems
NASA Technical Reports Server (NTRS)
Nelson, Stacy
2004-01-01
This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.
Intra- and interobserver reliability of quantitative ultrasound measurement of the plantar fascia.
Rathleff, Michael Skovdal; Moelgaard, Carsten; Lykkegaard Olesen, Jens
2011-01-01
To determine intra- and interobserver reliability and measurement precision of sonographic assessment of plantar fascia thickness when using one, the mean of two, or the mean of three measurements. Two experienced observers scanned 20 healthy subjects twice with 60 minutes between test and retest. A GE LOGIQe ultrasound scanner was used in the study. The built-in software in the scanner was used to measure the thickness of the plantar fascia (PF). Reliability was calculated using intraclass correlation coefficient (ICC) and limits of agreement (LOA). Intraobserver reliability (ICC) using one measurement was 0.50 for one observer and 0.52 for the other, and using the mean of three measurements intraobserver reliability increased up to 0.77 and 0.67, respectively. Interobserver reliability (ICC) when using one measurement was 0.62 and increased to 0.82 when using the average of three measurements. LOA showed that when using the average of three measurements, LOA decreased to 0.6 mm, corresponding to 17.5% of the mean thickness of the PF. The results showed that reliability increases when using the mean of three measurements compared with one. Limits of agreement based on intratester reliability shows that changes in thickness that are larger than 0.6 mm can be considered actual changes in thickness and not a result of measurement error. Copyright © 2011 Wiley Periodicals, Inc.
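The gain from averaging repeated measurements reported above follows the Spearman-Brown prophecy formula; this is a standard formula applied to the abstract's numbers, not code from the study:

```python
def spearman_brown(icc_single: float, k: int) -> float:
    """Predicted reliability of the mean of k measurements, given the
    single-measurement ICC (Spearman-Brown prophecy formula)."""
    return k * icc_single / (1 + (k - 1) * icc_single)

# With a single-measure intraobserver ICC of 0.50, averaging three
# measurements is predicted to yield 0.75, close to the observed 0.77:
predicted = spearman_brown(0.50, 3)
```

The same formula applied to the interobserver ICC of 0.62 predicts about 0.83 for the mean of three, again near the observed 0.82, so the reported improvements are consistent with classical test theory.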
Measuring the impact of computer resource quality on the software development process and product
NASA Technical Reports Server (NTRS)
Mcgarry, Frank; Valett, Jon; Hall, Dana
1985-01-01
The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.
Liuzzo, Derek M.; Peters, Denise M.; Middleton, Addie; Lanier, Wes; Chain, Rebecca; Barksdale, Brittany; Fritz, Stacy L.
2015-01-01
Background Clinicians and researchers have used bathroom scales, balance performance monitors with feedback, postural scale analysis, and force platforms to evaluate weight bearing asymmetry (WBA). Now video game consoles offer a novel alternative for assessing this construct. By using specialized software, the Nintendo Wii Fit balance board can provide reliable measurements of WBA in healthy, young adults. However, reliability of measurements obtained using only the factory settings to assess WBA in older adults and individuals with stroke has not been established. Purpose To determine whether measurements of WBA obtained using the Nintendo Wii Fit balance board and default settings are reliable in older adults and individuals with stroke. Methods Weight bearing asymmetry was assessed using the Nintendo Wii Fit balance board in 2 groups of participants—individuals older than 65 years (n = 41) and individuals with stroke (n = 41). Participants were given a standardized set of instructions and were not provided auditory or visual feedback. Two trials were performed. Intraclass correlation coefficients (ICC), standard error of measure (SEM), and minimal detectable change (MDC) scores were determined for each group. Results The ICC for the older adults sample was 0.59 (0.35–0.76) with SEM95= 6.2% and MDC95= 8.8%. The ICC for the sample including individuals with stroke was 0.60 (0.47–0.70) with SEM95= 9.6% and MDC95= 13.6%. Discussion Although measurements of WBA obtained using the Nintendo Wii Fit balance board, and its default factory settings, demonstrate moderate reliability in older adults and individuals with stroke, the relatively high associated SEM and MDC values substantially reduce the clinical utility of the Nintendo Wii Fit balance board as an assessment tool for WBA. Conclusions Weight bearing asymmetry cannot be measured reliably in older adults and individuals with stroke using the Nintendo Wii Fit balance board without the use of specialized software. 
PMID:26288237
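The SEM and MDC values weighed above come from standard formulas: SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A sketch with the reported ICC of 0.59 and a hypothetical between-subject SD (the study reports SEM and MDC directly, not the SD used here):

```python
import math

def sem_from_icc(sd: float, icc: float) -> float:
    """Standard error of measurement from sample SD and ICC."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem: float) -> float:
    """Minimal detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2) * sem

# ICC = 0.59 (older adults) with a hypothetical SD of 4.5 (% WBA):
sem = sem_from_icc(4.5, 0.59)
change = mdc95(sem)
```

The formulas make the clinical problem visible: a moderate ICC leaves the SEM a large fraction of the SD, and the sqrt(2) and 1.96 factors inflate it further, so only large changes in WBA clear the MDC threshold.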
NASA Technical Reports Server (NTRS)
Orr, James K.; Peltier, Daryl
2010-01-01
This slide presentation reviews the avionics software system on board the Space Shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of Space Shuttle flights versus time, PASS's development history, and other data pointing to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.
Bisi-Balogun, Adebisi; Cassel, Michael; Mayer, Frank
2016-04-13
This study aimed to determine the relative and absolute reliability of ultrasound (US) measurements of the thickness and echogenicity of the plantar fascia (PF) at different measurement stations along its length using a standardized protocol. Twelve healthy subjects (24 feet) were enrolled. The PF was imaged in the longitudinal plane. Subjects were assessed twice to evaluate intra-rater reliability. A quantitative evaluation of the thickness and echogenicity of the plantar fascia was performed using ImageJ, a digital image analysis and viewer software. Sonographic evaluation of the thickness and echogenicity of the PF showed high relative reliability, with an intraclass correlation coefficient (ICC) of ≥0.88 at all measurement stations. However, the measurement stations with the highest ICCs for PF thickness and echogenicity did not have the highest absolute reliability. Compared to other measurement stations, measuring the PF thickness at 3 cm distal, and the echogenicity at a region of interest 1 cm to 2 cm distal, from its insertion at the medial calcaneal tubercle showed the highest absolute reliability with the least systematic bias and random error. Also, reliability was higher using a mean of three measurements compared to one measurement. To reduce discrepancies in the interpretation of thickness and echogenicity measurements of the PF, the absolute reliability of the different measurement stations should be considered in clinical practice and research rather than the relative reliability indicated by the ICC.
PMID:27089369
An experimental investigation of fault tolerant software structures in an avionics application
NASA Technical Reports Server (NTRS)
Caglayan, Alper K.; Eckhardt, Dave E., Jr.
1989-01-01
The objective of this experimental investigation is to compare the functional performance and software reliability of competing fault tolerant software structures utilizing software diversity. In this experiment, three versions of the redundancy management software for a skewed sensor array have been developed using three diverse failure detection and isolation algorithms and incorporated into various N-version, recovery block and hybrid software structures. The empirical results show that, for maximum functional performance improvement in the selected application domain, the results of diverse algorithms should be voted before being processed by multiple versions without enforced diversity. Results also suggest that when the reliability gain with an N-version structure is modest, recovery block structures are more feasible, since higher reliability can be obtained using an acceptance check of modest reliability.
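The baseline for any N-version comparison like the one above is majority voting under the independence assumption. For a 2-out-of-3 (triple modular redundancy) voter the reliability has a closed form; this sketch is the textbook model, not the experiment's redundancy management software:

```python
def tmr_reliability(r: float) -> float:
    """Probability that a 2-out-of-3 majority voter yields a correct
    result when each version succeeds independently with probability r
    (the independence assumption such experiments examine)."""
    return 3 * r**2 * (1 - r) + r**3   # exactly two correct, or all three

# Independence predicts voting beats a single version whenever r > 0.5:
voted = tmr_reliability(0.9)
```

Correlated failures between versions, the phenomenon these experiments repeatedly observed, erode this predicted gain, which is why the independence assumption matters so much.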
3D photography is as accurate as digital planimetry tracing in determining burn wound area.
Stockton, K A; McMillan, C M; Storey, K J; David, M C; Kimble, R M
2015-02-01
In the paediatric population, careful attention needs to be paid to the techniques utilised for wound assessment to minimise discomfort and stress to the child. To investigate whether 3D photography is a valid measure of burn wound area in children compared to the current clinical gold standard method of digital planimetry using Visitrak™. Twenty-five children presenting to the Stuart Pegg Paediatric Burn Centre for burn dressing change following acute burn injury were included in the study. Burn wound area measurement was undertaken using both digital planimetry (Visitrak™ system) and 3D camera analysis. Inter-rater reliability of the 3D camera software was determined by three investigators independently assessing the burn wound area. A comparison of wound area was assessed using intraclass correlation coefficients (ICC), which demonstrated excellent agreement: 0.994 (CI 0.986, 0.997). Inter-rater reliability measured using ICC, 0.989 (95% CI 0.979, 0.995), was also excellent. Time taken to map the wound was significantly quicker using the camera at bedside compared to Visitrak™: 14.68 (7.00) s versus 36.84 (23.51) s (p<0.001). In contrast, analysing wound area was significantly quicker using the Visitrak™ tablet compared to Dermapix® software for the 3D images: 31.36 (19.67) s versus 179.48 (56.86) s (p<0.001). This study demonstrates that images taken with the 3D LifeViz™ camera and assessed with Dermapix® software are a reliable method for wound area assessment in the acute paediatric burn setting. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
A research review of quality assessment for software
NASA Technical Reports Server (NTRS)
1991-01-01
Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
NASA Technical Reports Server (NTRS)
Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David
1990-01-01
This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.
A Review of Software Maintenance Technology.
1980-02-01
...Maintenance Experience: (1) Multiple Implementation. Charles Holmes (Source 2) described two attempts at McDonnell...a proprietary software monitor package distributed by Boole and Babbage, Inc., Sunnyvale, California. It has been implemented on IBM computers and is language independent
Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement.
Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi
2015-03-01
Various methods have been introduced for the evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and to assess the reliability of this method. Eighteen patients were evaluated in this study. Three intraoral digital images from the buccal view were captured from each patient at half-hour intervals. All photos were imported into AutoCAD 2011 software and calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. The photographs were found to have a high reliability coefficient (P > 0.05). The introduced method is an accurate, efficient, and reliable method for the evaluation of tooth movement.
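The intraclass correlation coefficient used in studies like this one can be illustrated with a short sketch. This is a generic ICC(2,1) computation (two-way random effects, absolute agreement, single measurement), not the study's actual analysis code:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    x is an (n_subjects, k_sessions) matrix of repeated measurements.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-session means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # sessions
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly repeatable measurements give an ICC of 1.0; a systematic per-session bias pulls the coefficient below 1.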
NASA Technical Reports Server (NTRS)
Wilson, Larry W.
1989-01-01
The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data-creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they correctly simulate and were asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
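A minimal sketch of the Jelinski-Moranda model investigated in the appended paper: after i-1 repairs the failure rate is (N - i + 1)φ, so each interfailure time is exponential; for a fixed total-fault count N the MLE of φ has a closed form, leaving a one-dimensional search over N. The function names and grid bound are illustrative:

```python
import math
import random

def jm_profile_loglik(total_faults, times):
    """Profile log-likelihood of the Jelinski-Moranda model for a fixed N.

    Returns (phi_hat, loglik): phi_hat is the closed-form MLE of the
    per-fault failure rate given N = total_faults.
    """
    n = len(times)
    weighted = sum((total_faults - i) * t for i, t in enumerate(times))
    phi = n / weighted
    loglik = sum(
        math.log((total_faults - i) * phi) - (total_faults - i) * phi * times[i]
        for i in range(n)
    )
    return phi, loglik

def fit_jm(times, n_max=500):
    """Grid-search MLE over the unknown total fault count N >= n."""
    n = len(times)
    best_n = max(range(n, n_max + 1), key=lambda N: jm_profile_loglik(N, times)[1])
    return best_n, jm_profile_loglik(best_n, times)[0]
```

As the paper notes, estimates from a single simulated debugging history can be far from the true parameters, which is exactly the randomness problem the authors attribute the poor predictions to.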
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch-axis control law for a PA28 aircraft with over 14 million pitch commands, with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the error-burst analyses. The pitch-axis problem provides data for use in constructing a model for predicting the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.
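The n-version error-detection scheme compares redundant outputs and flags the minority. A minimal, hypothetical sketch; the tolerance-based floating-point comparison is an assumption, not the experiment's actual comparator:

```python
def n_version_vote(outputs, tol=1e-6):
    """Majority vote over N redundant version outputs.

    Returns (consensus_value, dissenting_indices); consensus_value is None
    when no strict majority of versions agree within tol.
    """
    n = len(outputs)
    for candidate in outputs:
        agree = [j for j, v in enumerate(outputs) if abs(v - candidate) <= tol]
        if len(agree) > n // 2:
            dissent = [j for j in range(n) if j not in agree]
            return candidate, dissent
    return None, list(range(n))
```

A dissenting index is the signature of a fault in one version, which is how the four faults in the study would surface during comparison testing.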
Confirmatory Factor Analysis Alternative: Free, Accessible CBID Software.
Bott, Marjorie; Karanevich, Alex G; Garrard, Lili; Price, Larry R; Mudaranthakam, Dinesh Pal; Gajewski, Byron
2018-02-01
New software that performs Classical and Bayesian Instrument Development (CBID) is reported that seamlessly integrates expert (content validity) and participant (construct validity) data to produce complete reliability estimates with smaller sample requirements. The free CBID software can be accessed through a website and used by clinical investigators in new instrument development. Demonstrations of three approaches using the CBID software are presented: (a) traditional confirmatory factor analysis (CFA), (b) Bayesian CFA using a flat uninformative prior, and (c) Bayesian CFA using content expert data (an informative prior). Outcomes of usability testing demonstrate the need to make the user-friendly, free CBID software available to interdisciplinary researchers. CBID has the potential to be a new and expeditious method for instrument development, adding to our current measurement toolbox. This allows for the development of new instruments for measuring determinants of health in smaller diverse populations or populations of rare diseases.
Trends in software reliability for digital flight control
NASA Technical Reports Server (NTRS)
Hecht, H.; Hecht, M.
1983-01-01
Software error data from major recent digital flight control system development programs are presented. The report summarizes the data, compares them with similar data from previous surveys, and identifies trends and disciplines to improve software reliability.
Cost Estimation of Software Development and the Implications for the Program Manager
1992-06-01
Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and...Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
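The equilibrium-distribution models proposed in the paper are not reproduced here, but the flavor of NHPP-based SRM fitting can be sketched with the classical Goel-Okumoto model, whose mean value function is m(t) = a(1 - e^(-bt)). Given b, the MLE of a has a closed form, so a crude grid search over b suffices; the grid and function names are illustrative:

```python
import math

def go_profile_loglik(b, times, horizon):
    """Profile log-likelihood of the Goel-Okumoto NHPP model.

    times are failure epochs observed on [0, horizon]; given b, the MLE of
    the expected total fault count is a = n / (1 - exp(-b * horizon)).
    """
    n = len(times)
    a = n / (1.0 - math.exp(-b * horizon))
    loglik = n * math.log(a * b) - b * sum(times) - a * (1.0 - math.exp(-b * horizon))
    return a, loglik

def fit_go(times, horizon, b_grid=None):
    """Crude grid-search MLE for (a, b)."""
    if b_grid is None:
        b_grid = [0.001 * k for k in range(1, 2001)]
    b_hat = max(b_grid, key=lambda b: go_profile_loglik(b, times, horizon)[1])
    return go_profile_loglik(b_hat, times, horizon)[0], b_hat
```

By construction of the profile, the fitted mean value function reproduces the observed fault count at the end of the observation window.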
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
Distribution System Reliability Analysis for Smart Grid Applications
NASA Astrophysics Data System (ADS)
Aljohani, Tawfiq Masad
Reliability of power systems is a key aspect of modern power system planning, design, and operation. The ascendance of the smart grid concept has raised hopes of developing an intelligent, self-healing network able to overcome the interruption problems that face utilities and cost them tens of millions of dollars in repairs and losses. To address these reliability concerns, power utilities and interested parties have spent extensive time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications on the reliability of power distribution networks. The test system used in this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of installing Distributed Generators (DGs) on the utility's distribution system and to measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution analysis package developed by General Reliability Co.
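The reliability indices named above are simple ratios over outage records. A sketch of how SAIFI and SAIDI (plus the derived CAIDI) are computed, assuming a flat (customers_interrupted, duration_hours) record format; EUE additionally needs load data, so it is omitted:

```python
def reliability_indices(outages, total_customers):
    """Compute SAIFI, SAIDI, and CAIDI from a list of outage records.

    Each outage is (customers_interrupted, duration_hours).
    SAIFI = total customer interruptions / customers served
    SAIDI = total customer interruption hours / customers served
    CAIDI = SAIDI / SAIFI (average restoration time per interruption)
    """
    interruptions = sum(c for c, _ in outages)
    customer_hours = sum(c * d for c, d in outages)
    saifi = interruptions / total_customers
    saidi = customer_hours / total_customers
    caidi = saidi / saifi if saifi else 0.0
    return saifi, saidi, caidi
```

Placing switching devices or DGs changes which customers each fault interrupts and for how long, which is what moves these indices in a study like the one described.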
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
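The correlated sampling of module execution times can be sketched by drawing lognormal times whose underlying normals are sampled jointly, a Gaussian-copula-style stand-in for (not a reproduction of) the paper's Petri-net machinery; the parameter values are invented:

```python
import numpy as np

def correlated_exec_times(mean_log, cov_log, n_samples, seed=0):
    """Draw correlated lognormal module execution times.

    Dependence between versions is induced by sampling the underlying
    normals jointly, mirroring correlated sampling of execution times.
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(mean_log, cov_log, size=n_samples)
    return np.exp(z)

mean_log = [0.0, 0.0]                    # log-scale means for two versions
cov_log = [[0.04, 0.03], [0.03, 0.04]]   # positive correlation (rho = 0.75)
samples = correlated_exec_times(mean_log, cov_log, 5000)
```

Feeding such correlated times into the synchronization structure is what lets a simulation capture the failure dependence among N versions that independence assumptions miss.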
Correlation and agreement of a digital and conventional method to measure arch parameters.
Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika
2018-01-01
The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared with conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of the defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014, 3D Systems, Rock Hill, SC, USA) on the digital models and with a digital calliper (Tuten, Germany) on the plaster study models. Both measurements were repeated twice to validate intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement between the digital and plaster model measurements. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, positive agreement between digital and manual measurements of arch width (90-96%) and of arch length and space analysis (95-99%) was also demonstrated using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software can precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.
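The Bland-Altman analysis referred to above reduces to a bias and 95% limits of agreement on the paired differences. A minimal sketch with made-up numbers, not the study's data:

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods.

    Returns (bias, lower_loa, upper_loa), where the limits are
    bias +/- 1.96 * SD of the paired differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Agreement is judged by whether the limits of agreement are narrow enough to be clinically acceptable, not by a significance test.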
Woynaroski, Tiffany; Oller, D. Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul
2017-01-01
Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. PMID:27459107
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of the training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and demonstrate its effectiveness using publicly available benchmark data sets.
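The paper's max-margin Bayesian treatment is not reproduced here; as a baseline for the count-data regression problem it addresses, a plain log-linear Poisson regression for defect counts, fit by batch gradient ascent on the log-likelihood (the data, step size, and epoch count are illustrative):

```python
import math

def fit_poisson_regression(xs, ys, lr=0.01, epochs=5000):
    """Log-linear Poisson regression: y ~ Poisson(exp(w*x + b)),
    fit by batch gradient ascent on the log-likelihood."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(w * x + b)   # predicted mean defect count
            gw += (y - mu) * x         # d loglik / dw
            gb += y - mu               # d loglik / db
        w += lr * gw / n
        b += lr * gb / n
    return w, b
```

The Poisson log-likelihood is concave in (w, b), so gradient ascent with a small enough step converges to the unique maximum.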
MTL distributed magnet measurement system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nogiec, J.M.; Craker, P.A.; Garbarini, J.P.
1993-04-01
The Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory will be required to precisely and reliably measure the properties of magnets in a production environment. The extensive testing of the superconducting magnets comprises several types of measurements whose main purpose is to evaluate basic parameters characterizing the magnetic, mechanical, and cryogenic properties of the magnets. The measurement process will produce a significant amount of data, which will be subjected to complex analysis. Such massive measurements require a careful design of both the hardware and software of the computer systems, with a reliable, maximally automated system in mind. In order to fulfill this requirement, a dedicated Distributed Magnet Measurement System (DMMS) is being developed.
3D photography is a reliable method of measuring infantile haemangioma volume over time.
Robertson, Sarah A; Kimble, Roy M; Storey, Kristen J; Gee Kee, Emma L; Stockton, Kellie A
2016-09-01
Infantile haemangiomas are common lesions of infancy. With the development of novel treatments used to accelerate their regression, there is a need for a method of assessing these lesions over time. Volume is an ideal assessment measure because of its quantifiable nature. This study investigated whether 3D photography is a valid tool for measuring the volume of infantile haemangiomas over time. Thirteen children with infantile haemangiomas presenting to the Vascular Anomalies Clinic, Royal Children's Hospital/Lady Cilento Children's Hospital, and treated with propranolol were included in the study. Lesion volume was assessed using 3D photography at presentation and at one-month and three-month follow-up. Intrarater reliability was determined by retracing all images several months after the initial mapping. Interrater reliability of the 3D camera software was determined by two investigators, blinded to each other's results, independently assessing infantile haemangioma volume. Lesion volume decreased significantly between presentation and three-month follow-up (p<0.001). Volume intra- and interrater reliability were excellent, with ICCs of 0.991 (95% CI 0.982, 0.995) and 0.978 (95% CI 0.955, 0.989), respectively. This study demonstrates that images taken with the 3D LifeViz™ camera, with lesion volume calculated in Dermapix® software, provide a reliable method for assessing infantile haemangioma volume over time. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gurov, V. V.
2017-01-01
Software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of distinctive features from the point of view of reliability. The main ones are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery by replacing a failed program with a similar running program during classes. The article considers the peculiarities of evaluating the reliability of programs, in contrast to assessments of hardware reliability. The essential requirements applicable to the reliability of software used for conducting practical and laboratory studies in the form of computer-based teaching programs are described. A mathematical tool based on Markov chains is presented, which makes it possible to determine the degree of debugging of a training program for use in the educational process by applying the graph of software module interactions.
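The Markov-chain tool described above can be sketched as an absorbing chain over the module-interaction graph: transient states are the program's modules, absorbing states are correct completion and failure. The transition numbers below are invented for illustration:

```python
import numpy as np

# Transient states: three interacting modules.  Q holds module-to-module
# transition probabilities; R holds each module's probabilities of
# finishing correctly or failing (each full row of [Q | R] sums to 1).
Q = np.array([[0.0, 0.6, 0.2],
              [0.1, 0.0, 0.5],
              [0.0, 0.3, 0.0]])
R = np.array([[0.18, 0.02],    # from module 1: P(success), P(failure)
              [0.38, 0.02],
              [0.65, 0.05]])

# Fundamental matrix N = (I - Q)^-1; B[i, j] is the probability that a
# session starting in module i is eventually absorbed in outcome j.
N = np.linalg.inv(np.eye(3) - Q)
B = N @ R
p_faultless = B[0, 0]   # probability a session starting in module 1 ends correctly
```

Raising the per-module success entries as debugging progresses raises p_faultless, which is one way to quantify the "degree of debugging" of the training program.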
Glavas, Panagiotis; Mac-Thiong, Jean-Marc; Parent, Stefan; de Guise, Jacques A.
2008-01-01
Although lumbosacral kyphosis (LSK) is recognized as an important aspect in the management of spondylolisthesis, there is no consensus on its most reliable and optimal measure. Using custom computer software, four raters evaluated 60 standing lateral radiographs of the lumbosacral spine during two sessions at a 1-week interval. The sample consisted of 20 normal, 20 low-grade, and 20 high-grade spondylolisthetic subjects. Six parameters were included for analysis: Boxall's slip angle, Dubousset's lumbosacral angle (LSA), the Spinal Deformity Study Group's (SDSG) LSA, the dysplastic SDSG LSA, sagittal rotation (SR), and the kyphotic Cobb angle (k-Cobb). Intra- and inter-rater reliability for all parameters was assessed using intra-class correlation coefficients (ICCs). Correlations between parameters and slip percentage were evaluated with Pearson coefficients. The intra-rater ICCs for all parameters ranged between 0.81 and 0.97, and the inter-rater ICCs were between 0.74 and 0.98. All parameters except sagittal rotation showed a medium to large correlation with slip percentage. Dubousset's LSA and the k-Cobb showed the largest correlations (r = −0.78 and r = −0.50, respectively). SR was associated with the weakest correlation (r = −0.10). All other parameters had medium correlations with percent slip (r = 0.31–0.43). All measurement techniques provided excellent inter- and intra-rater reliability. Dubousset's LSA showed the strongest correlation with slip grade. This parameter can be used in the clinical setting with PACS software capabilities to assess LSK. A computer-assisted technique is recommended in order to increase the reliability of the measurement of LSK in spondylolisthesis. PMID:19015898
Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás
2012-11-01
Three-dimensional (3D) deformations of the spine are predominantly characterized by two-dimensional (2D) angulation measurements in coronal and sagittal planes, using anteroposterior and lateral X-ray images. For coronal curves, a method originally described by Cobb and for sagittal curves a modified Cobb method are most widely used in practice, and these methods have been shown to exhibit good-to-excellent reliability and reproducibility, carried out either manually or by computer-based tools. Recently, an ultralow radiation dose-integrated radioimaging solution was introduced with special software for realistic 3D visualization and parametric characterization of the spinal column. Comparison of accuracy, correlation of measurement values, intraobserver and interrater reliability of methods by conventional manual 2D and sterEOS 3D measurements in a routine clinical setting. Retrospective nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol of eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years) including 10 healthy athletes with normal spine and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). Overall range of coronal curves was between 2.4° and 117.5°. Analysis of accuracy and reliability of measurements were carried out on a group of all patients and in subgroups based on coronal plane deviation: 0° to 10° (Group 1, n=36), 10° to 25° (Group 2, n=25), 25° to 50° (Group 3, n=69), 50° to 75° (Group 4, n=49), and more than 75° (Group 5, n=22). Coronal and sagittal curvature measurements were determined by three experienced examiners, using either traditional 2D methods or automatic measurements based on sterEOS 3D reconstructions. 
Manual measurements were performed three times, and sterEOS 3D reconstructions and automatic measurements were performed two times by each examiner. Means comparison t test, Pearson bivariate correlation analysis, reliability analysis by intraclass correlation coefficients for intraobserver reproducibility and interrater reliability were performed using SPSS v16.0 software (IBM Corp., Armonk, NY, USA). No funds were received in support of this work. No benefits in any form have been or will be received from a commercial party related directly or indirectly to the subject of this article. In comparison with manual 2D methods, only small and nonsignificant differences were detectable in sterEOS 3D-based curvature data. Intraobserver reliability was excellent for both methods, and interrater reproducibility was consistently higher for sterEOS 3D methods that was found to be unaffected by the magnitude of coronal curves or sagittal plane deviations. This is the first clinical report on EOS 2D/3D system (EOS Imaging, Paris, France) and its sterEOS 3D software, documenting an excellent capability for accurate, reliable, and reproducible spinal curvature measurements. Copyright © 2012 Elsevier Inc. All rights reserved.
Software reliability perspectives
NASA Technical Reports Server (NTRS)
Wilson, Larry; Shen, Wenhui
1987-01-01
Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failure can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.
A Model for Assessing the Liability of Seemingly Correct Software
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.
1991-01-01
Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
ERIC Educational Resources Information Center
Mier, Constance M.
2011-01-01
The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R > 0.97). Test-retest (separate days) reliability for…
Li, Guanghui; Wei, Jianhua; Wang, Xi; Wu, Guofeng; Ma, Dandan; Wang, Bo; Liu, Yanpu; Feng, Xinghua
2013-08-01
Cleft lip in the presence or absence of a cleft palate is a major public health problem. However, few studies have been published concerning the soft-tissue morphology of cleft lip infants. Currently, obtaining reliable three-dimensional (3D) surface models of infants remains a challenge. The aim of this study was to investigate a new way of capturing 3D images of cleft lip infants using a structured light scanning system. In addition, the accuracy and precision of the acquired facial 3D data were validated and compared with direct measurements. Ten unilateral cleft lip patients were enrolled in the study. Briefly, 3D facial images of the patients were acquired using a 3D scanner device before and after the surgery. Fourteen items were measured by direct anthropometry and 3D image software. The accuracy and precision of the 3D system were assessed by comparative analysis. The anthropometric data obtained using the 3D method were in agreement with the direct anthropometry measurements. All data calculated by the software were 'highly reliable' or 'reliable', as defined in the literature. The localisation of four landmarks was not consistent in repeated experiments of inter-observer reliability in preoperative images (P<0.05), while the intra-observer reliability in both pre- and postoperative images was good (P>0.05). The structured light scanning system is proven to be a non-invasive, accurate and precise method in cleft lip anthropometry. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Mantovani, Giulia; Pifferi, Massimo; Vozzi, Giovanni
2010-06-01
Patients with primary ciliary dyskinesia (PCD) have structural and/or functional alterations of cilia that imply deficits in mucociliary clearance and various respiratory pathologies. A useful indicator for this difficult diagnosis is the ciliary beat frequency (CBF), which is significantly lower in pathological cases than in physiological ones. CBF computation is not rapid; therefore, the aim of this study is to propose an automated method to evaluate it directly from videos of ciliated cells. The cells are taken from the inferior nasal turbinates, and videos of the ciliary movements are recorded and processed by the developed software. The software consists of feature extraction from the videos (written in C++) and computation of the frequency (written in Matlab). The system was tested on both nasal cavity samples and software models, and the results were promising: in a few seconds it computes a frequency that is reliable when compared with that measured by visual methods. Notably, the reliability of the computation increases with the quality of the acquisition system and especially with the sampling frequency. It is concluded that the developed software could be a useful means for PCD diagnosis.
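Frequency extraction of the kind this software performs can be sketched as a peak pick on the magnitude spectrum of a per-region intensity trace. This is a generic illustration, not the authors' C++/Matlab pipeline:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the dominant oscillation frequency (Hz) of a 1-D intensity
    trace from the peak of its magnitude spectrum, ignoring DC."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                     # drop the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[int(np.argmax(spectrum))]
```

The frequency resolution is fs / N, which is why the abstract's point about sampling frequency (and, with it, recording length) matters for a reliable CBF estimate.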
A study of software standards used in the avionics industry
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1994-01-01
Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.
Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device
NASA Astrophysics Data System (ADS)
Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin
A measurement and control system for a precision pesticide-deployment simulating device was designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and perform precise, simultaneous measurement of every factor affecting deployment effectiveness. The hardware and software follow a modular structural design: the system is divided into distinct hardware and software function modules, and the corresponding modules are developed independently. The module interfaces are uniformly defined, which simplifies module connection, enhances the system's universality, development efficiency, and reliability, and makes the program easy to extend and maintain. Several of the hardware and software modules can be readily adapted to other measurement and control systems. The paper introduces the design of the special numerical control system, the main module of the information acquisition system, and the speed acquisition module in order to illustrate the module design process.
Infusing Reliability Techniques into Software Safety Analysis
NASA Technical Reports Server (NTRS)
Shi, Ying
2015-01-01
Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
Singendonk, M M J; Smits, M J; Heijting, I E; van Wijk, M P; Nurko, S; Rosen, R; Weijenborg, P W; Abu-Assi, R; Hoekman, D R; Kuizenga-Wessel, S; Seiboth, G; Benninga, M A; Omari, T I; Kritas, S
2015-02-01
The Chicago Classification (CC) facilitates interpretation of high-resolution manometry (HRM) recordings. The applicability of this adult-based algorithm to the pediatric population is unknown. We therefore assessed intra- and interrater reliability of software-based CC diagnosis in a pediatric cohort. Thirty pediatric solid-state HRM recordings (13 M; mean age 12.1 ± 5.1 years), each assessing 10 liquid swallows per patient, were analyzed twice by 11 raters (six experts, five non-experts). Software-placed anatomical landmarks required manual adjustment or removal. Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), contractile front velocity (CFV), distal latency (DL) and break size (BS), and an overall CC diagnosis were software-generated. In addition, raters provided their subjective CC diagnosis. Reliability was calculated with Cohen's and Fleiss' kappa (κ) and the intraclass correlation coefficient (ICC). Intra- and interrater reliability of software-generated CC diagnosis after manual adjustment of landmarks was substantial (mean κ = 0.69 and 0.77, respectively) and moderate-substantial for subjective CC diagnosis (mean κ = 0.70 and 0.58, respectively). Reliability of both software-generated and subjective diagnosis of normal motility was high (κ = 0.81 and κ = 0.79). Intra- and interrater reliability were excellent for IRP4s, DCI, and BS. Experts had higher interrater reliability than non-experts for DL (ICC = 0.65 vs ICC = 0.36) and for the software-generated diagnosis of diffuse esophageal spasm (DES, κ = 0.64 vs κ = 0.30). Among experts, reliability for the subjective diagnosis of achalasia and esophagogastric junction outflow obstruction was moderate-substantial (κ = 0.45-0.82). Inter- and intrarater reliability of software-based CC diagnosis of pediatric HRM recordings was high overall. However, experience influenced the diagnosis of some motility disorders, particularly DES and achalasia. © 2014 John Wiley & Sons Ltd.
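For reference, the chance-corrected agreement statistic reported throughout this abstract can be computed directly. A minimal sketch of Cohen's kappa for two raters (illustrative only, not the study's analysis code; the example labels are hypothetical):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                        # observed agreement
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c)  # agreement expected
                for c in categories)                 # by chance alone
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical diagnoses (coded 0/1/2) from two raters over eight recordings
k = cohens_kappa([0, 0, 1, 1, 2, 2, 0, 1], [0, 0, 1, 1, 2, 0, 0, 2])
```

By the conventional Landis-Koch scale, values of 0.61-0.80 are read as "substantial" agreement, which is the sense in which the abstract uses the term.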
Software dependability in the Tandem GUARDIAN system
NASA Technical Reports Server (NTRS)
Lee, Inhwan; Iyer, Ravishankar K.
1995-01-01
Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
Bedekar, Nilima; Suryawanshi, Mayuri; Rairikar, Savita; Sancheti, Parag; Shyam, Ashok
2014-01-01
Evaluation of range of motion (ROM) is an integral part of assessment of the musculoskeletal system. It is required in health, fitness, and pathological conditions, and is also used as an objective outcome measure. Several methods are described for measuring spinal flexion range of motion, each with its own advantages and disadvantages. Hence, a new device using the dual-inclinometer method was introduced in this study to measure lumbar spine flexion ROM, with the objective of determining the intra- and inter-rater reliability of a mobile device goniometer. An iPod mobile device with goniometer software was used. The part being measured, i.e., the back of the subject, was suitably exposed, and the subject stood with feet shoulder-width apart. The spinous processes of the second sacral vertebra (S2) and T12 were located and used as reference points, and readings were taken. Three readings were taken each for inter-rater and intra-rater reliability, with sufficient rest between flexion movements. Intra-rater reliability using the ICC was r = 0.920 and inter-rater reliability r = 0.812 at a 95% CI; validity was r = 0.95. The mobile device goniometer has high intra-rater reliability and moderate inter-rater reliability. This device can be used to assess spinal flexion ROM, representing uniplanar movement.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...
Leveraging Code Comments to Improve Software Reliability
ERIC Educational Resources Information Center
Tan, Lin
2009-01-01
Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…
Hardware and software reliability estimation using simulations
NASA Technical Reports Server (NTRS)
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
Software-assisted small bowel motility analysis using free-breathing MRI: feasibility study.
Bickelhaupt, Sebastian; Froehlich, Johannes M; Cattin, Roger; Raible, Stephan; Bouquet, Hanspeter; Bill, Urs; Patak, Michael A
2014-01-01
To validate a software prototype allowing for small bowel motility analysis in free breathing by comparing it to manual measurements. In all, 25 patients (15 male, 10 female; mean age 39 years) were included in this Institutional Review Board-approved, retrospective study. Magnetic resonance imaging (MRI) was performed on a 1.5T system after standardized preparation, acquiring motility sequences in free breathing over 69-84 seconds. Small bowel motility was analyzed both manually and with the software. Functional parameters, measurement time, and reproducibility were compared using the coefficient of variation and the paired Student's t-test. Correlation was analyzed using Pearson's correlation coefficient and linear regression. The 25 segments were analyzed twice, both by hand and using the software with automatic breathing correction. All assessed parameters correlated significantly between the methods (P < 0.01), but the scatter of repeated measurements was significantly (P < 0.01) lower with the software (3.90%, standard deviation [SD] ± 5.69) than with manual examination (9.77%, SD ± 11.08). The time needed was significantly less (P < 0.001) with the software (4.52 minutes, SD ± 1.58) than with manual measurement (17.48 minutes, SD ± 1.75). The software thus provides reliable and faster small bowel motility measurements in free-breathing MRI compared to manual analyses. The new technique allows for analysis of prolonged sequences acquired in free breathing, improving the informative value of the examinations by amplifying the evaluable data. Copyright © 2013 Wiley Periodicals, Inc.
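The scatter figures quoted above (3.90% vs. 9.77%) are coefficients of variation of repeated measurements. As a sketch, the CV is simply the sample standard deviation expressed as a percentage of the mean (the measurement values below are hypothetical):

```python
import statistics

def coefficient_of_variation(values):
    """CV (%): sample standard deviation relative to the mean, used to
    express the scatter of repeated measurements of the same quantity."""
    return statistics.stdev(values) / statistics.fmean(values) * 100

# Hypothetical repeated measurements of one bowel segment's contraction rate
cv = coefficient_of_variation([4.1, 3.9, 4.0, 4.2])
```

A lower CV across repeated analyses of the same segment is exactly what "lower scatter" means in the abstract: the software's readings cluster more tightly than the manual ones.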
Reliability and accuracy of Crystaleye spectrophotometric system.
Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping
2010-01-01
To develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer, four shade guides (VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC) were measured with the Crystaleye spectrophotometer in a standardised model, ten times for each of 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab, both as percentages and as colour differences (ΔE). Differences were analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was more apt to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability in different shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
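The ΔE units above are CIELAB colour differences. A minimal sketch of the CIE76 formula, the simplest ΔE variant (the abstract does not state which variant the supporting software uses, and the L*a*b* readings below are hypothetical):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    (L*, a*, b*) triplets in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical L*a*b* readings of the same shade tab measured twice
de = delta_e_cie76((65.0, 2.1, 18.4), (65.2, 2.0, 18.3))
```

On this scale, the reported reliability range of 0.13-0.24 ΔE corresponds to repeated readings that are nearly identical, while the accuracy range of 1.03-1.89 ΔE reflects larger deviations from the reference shade.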
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1992-01-01
Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.
Smith, R P; Dias, J J; Ullah, A; Bhowal, B
2009-05-01
Corrective surgery for Dupuytren's disease represents a significant proportion of a hand surgeon's workload. The decision to go ahead with surgery and the success of surgery requires measuring the degree of contracture of the diseased finger(s). This is performed in clinic with a goniometer, pre- and postoperatively. Monitoring the recurrence of the contracture can inform on surgical outcome, research and audit. We compared visual and computer software-aided estimation of Dupuytren's contractures to clinical goniometric measurements in 60 patients with Dupuytren's disease. Patients' hands were digitally photographed. There were 76 contracted finger joints--70 proximal interphalangeal joints and six distal interphalangeal joints. The degrees of contracture of these images were visually assessed by six orthopaedic staff of differing seniority and re-assessed with computer software. Across assessors, the Pearson correlation between the goniometric measurements and the visual estimations was 0.83 and this significantly improved to 0.88 with computer software. Reliability with intra-class correlations achieved 0.78 and 0.92 for the visual and computer-aided estimations, respectively, and with test-retest analysis, 0.92 for visual estimation and 0.95 for computer-aided measurements. Visual estimations of Dupuytren's contractures correlate well with actual clinical goniometric measurements and improve further if measured with computer software. Digital images permit monitoring of contracture after surgery and may facilitate research into disease progression and auditing of surgical technique.
Real-time software failure characterization
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Finelli, George B.
1990-01-01
A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on modeling real-time aerospace vehicle software reliability. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.
Jones, Nia W; Raine-Fenning, Nick J; Mousa, Hatem A; Bradley, Eileen; Bugg, George J
2011-03-01
Three-dimensional (3-D) power Doppler angiography (3-D-PDA) allows visualisation of Doppler signals within the placenta and their quantification is possible by the generation of vascular indices by the 4-D View software programme. This study aimed to investigate intra- and interobserver reproducibility of 3-D-PDA analysis of stored datasets at varying gestations with the ultimate goal being to develop a tool for predicting placental dysfunction. Women with an uncomplicated, viable singleton pregnancy were scanned at 12, 16 or 20 weeks gestational age groups. 3-D-PDA datasets acquired of the whole placenta were analysed using the VOCAL software processing tool. Each volume was analysed by three observers twice in the A plane. Intra- and interobserver reliability was assessed by intraclass correlation coefficients (ICCs) and Bland Altman plots. At each gestational age group, 20 low risk women were scanned resulting in 60 datasets in total. The ICC demonstrated a high level of measurement reliability at each gestation with intraobserver values >0.90 and interobserver values of >0.6 for the vascular indices. Bland Altman plots also showed high levels of agreement. Systematic bias was seen at 20 weeks in the vascular indices obtained by different observers. This study demonstrates that 3-D-PDA data can be measured reliably by different observers from stored datasets up to 18 weeks gestation. Measurements become less reliable as gestation advances with bias between observers evident at 20 weeks. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and parameter selection deserves particular attention: the estimated reliability of a system may increase or decrease depending on the parameters chosen, so there is a need to identify the factors that most heavily affect system reliability. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS); cost, time, and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are most suitable for estimating system reliability. Soft computing is used for small- and large-scale problems where accurate results are difficult to obtain due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to problems in medicine: clinical medicine makes significant use of fuzzy logic and neural network methodologies, while basic medical science most frequently uses neural network and genetic algorithm approaches, and medical scientists have shown considerable interest in applying soft computing methodologies in genetics, physiology, radiology, cardiology, and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC).
This paper presents the workings of these soft computing techniques and assesses their use for predicting reliability; the parameters considered while estimating and predicting reliability are also discussed. The study can be applied to estimating and predicting the reliability of various instruments in medical systems, software engineering, computer engineering, and mechanical engineering. These concepts can be applied to both software and hardware to predict reliability using CBSE.
Preliminary design of the redundant software experiment
NASA Technical Reports Server (NTRS)
Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John
1985-01-01
The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault-tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and on the cost of the fault-tolerant configurations, can be used to design a companion experiment to determine the cost effectiveness of the fault-tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole, because they provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.
Reliability Engineering for Service Oriented Architectures
2013-02-01
Common Object Request Broker Architecture Ecosystem In software, an ecosystem is a set of applications and/or services that gradually build up over time...Enterprise Service Bus Foreign In an SOA context: Any SOA, service or software which the owners of the calling software do not have control of, either...SOA Service Oriented Architecture SRE Software Reliability Engineering System Mode Many systems exhibit different modes of operation. E.g. the cockpit
Predicting Software Assurance Using Quality and Reliability Measures
2014-12-01
errors are not found in unit testing. The rework effort to correct requirement and design problems in later phases can be as high as 300 to 1,000...Literature 31 Appendix B: Quality Cannot Be Tested In 35 Bibliography 38 CMU/SEI-2014-TN-026...Removal Densities During Development 10 Figure 8: Quality and Security-Focused Workflow 14 Figure 9: Testing Reliability Results for the Largest Project
Development of confidence limits by pivotal functions for estimating software reliability
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
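For context, the Moranda geometric de-eutrophication model referenced here assumes that after each repair the (exponential) failure intensity drops geometrically: before the i-th failure the intensity is D·k^(i-1) with 0 < k < 1. A minimal numeric sketch using the model's usual parameter symbols (illustrative, not the paper's derivation):

```python
def failure_intensity(D, k, i):
    """Moranda geometric model: failure intensity before the i-th
    failure (i >= 1); each fix shrinks the intensity by the factor k."""
    return D * k ** (i - 1)

def expected_interfailure_times(D, k, n):
    """Expected interfailure times T_1..T_n: each T_i is exponentially
    distributed with rate D*k**(i-1), so its mean grows geometrically
    as debugging proceeds."""
    return [1.0 / failure_intensity(D, k, i) for i in range(1, n + 1)]

times = expected_interfailure_times(D=2.0, k=0.5, n=4)
```

The assumed exponential interfailure times are exactly the model feature the abstract probes: when few failures are observed, asymptotic limits built on these distributions become unreliable, motivating the pivotal-function approach.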
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.
Assistant for Specifying Quality Software (ASQS) Mission Area Analysis
1990-12-01
somewhat arbitrary, it was a reasonable and fast approach for partitioning the mission and software domains. The MAD builds on work done by Boeing Aerospace...Reliability ++ Reliability +++ Response 2: NO Discussion: A NO response implies intermittent burns -- most likely to perform attitude control functions...Propulsion Reliability +++ Reliability ++ 4.8.3 Query BT.3 Query: For intermittent thruster firing requirements, will the average burn time be less than
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (failure probabilities of 10^-4 or lower). The coming years shall address methodologies to realistically estimate the impact of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a given confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
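To see why reducing the required amount of testing matters, the scale of black-box testing needed at ultra-high levels can be sketched with the standard demonstration-testing bound: after n independent failure-free runs, a per-run failure probability of at most p is supported with confidence C once (1 - p)^n <= 1 - C. This is a textbook bound, not the paper's framework:

```python
import math

def failure_free_runs_required(p, confidence):
    """Smallest n satisfying (1 - p)**n <= 1 - confidence: the number of
    failure-free test runs needed to claim per-run failure probability
    at most p with the given confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Demonstrating a 1e-4 failure probability at 99% confidence
n = failure_free_runs_required(p=1e-4, confidence=0.99)
```

Roughly 46,000 failure-free runs are needed for the 10^-4 level by testing alone, which is why folding evidence from formal verification and other V&V techniques into the assessment is attractive.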
Gray, Aaron D; Willis, Brad W; Skubic, Marjorie; Huo, Zhiyu; Razu, Swithin; Sherman, Seth L; Guess, Trent M; Jahandar, Amirhossein; Gulbrandsen, Trevor R; Miller, Scott; Siesener, Nathan J
Noncontact anterior cruciate ligament (ACL) injury in adolescent female athletes is an increasing problem. The knee-ankle separation ratio (KASR), calculated at initial contact (IC) and peak flexion (PF) during the drop vertical jump (DVJ), is a measure of dynamic knee valgus. The Microsoft Kinect V2 has shown promise as a reliable and valid marker-less motion capture device. It was hypothesized that the Kinect V2 would demonstrate good to excellent correlation between KASR results at IC and PF during the DVJ, as compared with a "gold standard" Vicon motion analysis system. Descriptive laboratory study. Level 2. Thirty-eight healthy volunteer subjects (20 male, 18 female) performed 5 DVJ trials, simultaneously measured by a Vicon MX-T40S system, 2 AMTI force platforms, and a Kinect V2 with customized software. A total of 190 jumps were completed. The KASR was calculated at IC and PF during the DVJ. The intraclass correlation coefficient (ICC) assessed the degree of KASR agreement between the Kinect and Vicon systems. The ICCs of the Kinect V2 and Vicon KASR at IC and PF were 0.84 and 0.95, respectively, showing excellent agreement between the 2 measures. The Kinect V2 successfully identified the KASR at PF and IC frames in 182 of 190 trials, demonstrating 95.8% reliability. The Kinect V2 demonstrated excellent ICC of the KASR at IC and PF during the DVJ when compared with the Vicon system. A customized Kinect V2 software program demonstrated good reliability in identifying the KASR at IC and PF during the DVJ. Reliable, valid, inexpensive, and efficient screening tools may improve the accessibility of motion analysis assessment of adolescent female athletes.
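The KASR itself is a simple frontal-plane ratio: knee separation divided by ankle separation. A hedged sketch of the computation from the mediolateral (x) coordinates of the four joints is below; the coordinate values are hypothetical, and the exact joint definitions used by the study's customized software are not given in the abstract:

```python
def kasr(knee_left_x, knee_right_x, ankle_left_x, ankle_right_x):
    """Knee-ankle separation ratio: frontal-plane knee separation divided
    by ankle separation. Values well below 1 indicate the knees collapsing
    inward relative to the ankles (dynamic knee valgus)."""
    return abs(knee_left_x - knee_right_x) / abs(ankle_left_x - ankle_right_x)

# Hypothetical joint x-coordinates (metres) captured at peak flexion
ratio = kasr(0.12, -0.12, 0.15, -0.15)
```

Because the ratio depends only on relative joint positions in one plane, it is a natural fit for marker-less depth sensors such as the Kinect V2.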
ERIC Educational Resources Information Center
Zhang, Zhiyong; Yuan, Ke-Hai
2016-01-01
Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…
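For readers outside measurement theory, coefficient alpha is computed from the item variances and the variance of the total score. A minimal numpy sketch (illustrative only; the score matrix below is hypothetical and this is not the estimation method the article studies):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical responses of 4 subjects to a 3-item scale
alpha = cronbach_alpha([[3, 4, 3], [5, 5, 4], [2, 2, 2], [4, 3, 4]])
```

Alpha assumes tau-equivalent items; when that assumption fails, McDonald's omega, mentioned above, is the usual alternative.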
Computation of the Molenaar Sijtsma Statistic
NASA Astrophysics Data System (ADS)
Andries van der Ark, L.
The Molenaar Sijtsma statistic is an estimate of the reliability of a test score. In some special cases, computation of the Molenaar Sijtsma statistic requires provisional measures. These provisional measures have not been fully described in the literature, and we show that they have not been implemented in the software. We describe the required provisional measures to allow computation of the Molenaar Sijtsma statistic for all data sets.
Validity of the Microsoft Kinect for measurement of neck angle: comparison with electrogoniometry.
Allahyari, Teimour; Sahraneshin Samani, Ali; Khalkhali, Hamid-Reza
2017-12-01
Considering the importance of evaluating working postures, many techniques and tools have been developed to identify and eliminate awkward postures and prevent musculoskeletal disorders (MSDs). The introduction of the Microsoft Kinect sensor, which is a low-cost, easy to set up and markerless motion capture system, offers promising possibilities for postural studies. Considering the Kinect's special ability in head-pose and facial-expression tracking and complexity of cervical spine movements, this study aimed to assess concurrent validity of the Microsoft Kinect against an electrogoniometer for neck angle measurements. A special software program was developed to calculate the neck angle based on Kinect skeleton tracking data. Neck angles were measured simultaneously by electrogoniometer and the developed software program in 10 volunteers. The results were recorded in degrees and the time required for each method was also measured. The Kinect's ability to identify body joints was reliable and precise. There was moderate to excellent agreement between the Kinect-based method and the electrogoniometer (paired-sample t test, p ≥ 0.25; intraclass correlation for test-retest reliability, ≥0.75). Kinect-based measurement was much faster and required less equipment, but accurate measurement with Microsoft Kinect was only possible if the participant was in its field of view.
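The neck angle derived from skeleton tracking data is, in essence, the angle at the neck joint between two joint-to-joint vectors. A minimal sketch with hypothetical joint coordinates (the study's software is not public, and the joint names here are illustrative):

```python
import numpy as np

def joint_angle(head, neck, spine):
    """Angle in degrees at the neck joint, formed by the
    neck->head and neck->spine vectors (hypothetical Kinect joints)."""
    v1 = np.asarray(head, dtype=float) - np.asarray(neck, dtype=float)
    v2 = np.asarray(spine, dtype=float) - np.asarray(neck, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against floating-point overshoot outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))
```

An upright posture (head directly above the spine axis) gives an angle near 180°; forward head flexion reduces it.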
Validation of highly reliable, real-time knowledge-based systems
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1988-01-01
Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.
Software system design for the non-null digital Moiré interferometer
NASA Astrophysics Data System (ADS)
Chen, Meng; Hao, Qun; Hu, Yao; Wang, Shaopu; Li, Tengfei; Li, Lin
2016-11-01
Aspheric optical components are an indispensable part of modern optics systems. With the development of aspheric optical element fabrication techniques, high-precision figure error testing of aspheric surfaces has become an urgent issue. We proposed a digital Moiré interferometer technique (DMIT) based on the partial compensation principle for aspheric and freeform surface measurement. Different from a traditional interferometer, DMIT consists of a real and a virtual interferometer. The virtual interferometer is simulated with Zemax software to perform phase-shifting and alignment. We can get the results by a series of calculations with the real interferogram and virtual interferograms generated by computer. DMIT requires a specific, reliable software system to ensure normal operation. Image acquisition and data processing are two important parts of this system, and realizing the connection between the real and virtual interferometers is also a challenge. In this paper, we present a software system design for DMIT with a friendly user interface and robust data processing features, enabling us to acquire the figure error of the measured asphere. We chose Visual C++ as the software development platform and control the ideal interferometer by hybrid programming with Zemax. After image acquisition and data transmission, the system calls image processing algorithms written in Matlab to calculate the figure error of the measured asphere. We tested the software system experimentally. In the experiment, we realized the measurement of an aspheric surface and proved the feasibility of the software system.
DeSmet, A; Bastiaensens, S; Van Cleemput, K; Poels, K; Vandebosch, H; Deboutte, G; Herrewijn, L; Malliet, S; Pabian, S; Van Broeckhoven, F; De Troyer, O; Deglorie, G; Van Hoecke, S; Samyn, K; De Bourdeaudhuij, I
2018-06-01
This paper describes the items, scale validity and scale reliability of a self-report questionnaire that measures bystander behavior in cyberbullying incidents among adolescents, and its behavioral determinants. Determinants included behavioral intention, behavioral attitudes, moral disengagement attitudes, outcome expectations, self-efficacy, subjective norm and social skills. Questions also assessed (cyber-)bullying involvement. Validity and reliability information is based on a sample of 238 adolescents (M age=13.52 years, SD=0.57). Construct validity was assessed using Confirmatory Factor Analysis (CFA) or Exploratory Factor Analysis (EFA) in Mplus7 software. Reliability (Cronbach Alpha, α) was assessed in SPSS, version 22. Data and questionnaire are included in this article. Further information can be found in DeSmet et al. (2018) [1].
Callan, Richard S; Cooper, Jeril R; Young, Nancy B; Mollica, Anthony G; Furness, Alan R; Looney, Stephen W
2015-06-01
The problems associated with intra- and interexaminer reliability when assessing preclinical performance continue to hinder dental educators' ability to provide accurate and meaningful feedback to students. Many studies have been conducted to evaluate the validity of utilizing various technologies to assist educators in achieving that goal. The purpose of this study was to compare two different versions of E4D Compare software to determine if either could be expected to deliver consistent and reliable comparative results, independent of the individual utilizing the technology. Five faculty members obtained E4D digital images of students' attempts (sample model) at ideal gold crown preparations for tooth #30 performed on typodont teeth. These images were compared to an ideal (master model) preparation utilizing two versions of E4D Compare software. The percent correlations between and within these faculty members were recorded and averaged. The intraclass correlation coefficient was used to measure both inter- and intrarater agreement among the examiners. The study found that using the older version of E4D Compare did not result in acceptable intra- or interrater agreement among the examiners. However, the newer version of E4D Compare, when combined with the Nevo scanner, resulted in a remarkable degree of agreement both between and within the examiners. These results suggest that consistent and reliable results can be expected when utilizing this technology under the protocol described in this study.
Using image J to document healing in ulcers of the foot in diabetes.
Jeffcoate, William J; Musgrove, Alison J; Lincoln, Nadina B
2017-12-01
The aim of the study was to assess the reliability of measuring the cross-sectional area of diabetic foot ulcers using Image J software. The inter- and intra-rater reliability of ulcer area measures were assessed using digital images of acetate tracings of ulcers of the foot affecting 31 participants in an off-loading randomised trial. Observations were made independently by five specialist podiatrists, one of whom was experienced in the use of Image J software and educated the other four in a single session. The mean (±SD) of the mean cross-sectional areas of the 31 ulcers determined independently by the five observers was 1386·7 (±22·7) mm². The correlation between all pairs of observers was >0·99 (P < 0·001). There was no significant difference overall between the five observers (ANOVA, F = 1·538; P = 0·165) and no difference between any two (paired-samples test, t = −0·787 to 1·396; P ≥ 0·088). The correlation between the areas determined by two observers on two occasions separated by not less than 1 week was very high (0·997 and 0·999; P < 0·001 and <0·001, respectively). The inter- and intra-rater reliability of the Image J software is very high, with no evidence of a difference either between or within observers. This technique should be considered for both research and clinical use in order to document changes in ulcer area. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Assessment of physical server reliability in multi cloud computing system
NASA Astrophysics Data System (ADS)
Kalyani, B. J. D.; Rao, Kolasani Ramchand H.
2018-04-01
Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, it creates space for competitive prices that minimize the burden on enterprises' spending budgets. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer, with the required algorithms, and explore the steps in the assessment of server reliability.
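The abstract does not state how the per-layer reliabilities are combined; a common simplifying assumption is that the layers act in series (the system works only if every layer works), in which case the combined reliability is the product of the layer reliabilities. A minimal sketch under that assumption:

```python
def series_reliability(layer_reliabilities):
    """Reliability of a series system: the product of the
    individual layer reliabilities (each in [0, 1]).

    Assumes independent failures across layers, which is a
    simplification, not the paper's stated model.
    """
    r = 1.0
    for rel in layer_reliabilities:
        if not 0.0 <= rel <= 1.0:
            raise ValueError("reliability must be in [0, 1]")
        r *= rel
    return r
```

For example, application, virtualization, and server layers at 0.99, 0.98, and 0.95 give a combined reliability of about 0.922: the weakest layer dominates.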
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
Woynaroski, Tiffany; Oller, D Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul
2017-03-01
Theory and research suggest that vocal development predicts "useful speech" in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently "in development" and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. Autism Res 2017, 10: 508-519. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
A Human Reliability Based Usability Evaluation Method for Safety-Critical Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillippe Palanque; Regina Bernhaupt; Ronald Boring
2006-04-01
Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non-reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.
Cryogenic instrumentation for ITER magnets
NASA Astrophysics Data System (ADS)
Poncet, J.-M.; Manzagol, J.; Attard, A.; André, J.; Bizel-Bizellot, L.; Bonnay, P.; Ercolani, E.; Luchier, N.; Girard, A.; Clayton, N.; Devred, A.; Huygen, S.; Journeaux, J.-Y.
2017-02-01
Accurate measurements of the helium flow rate and of the temperature of the ITER magnets are of fundamental importance to make sure that the magnets operate under well controlled and reliable conditions, and to allow suitable helium flow distribution in the magnets through the helium piping. Therefore, the temperature and flow rate measurements shall be reliable and accurate. In this paper, we present the thermometric chains as well as the venturi flow meters installed in the ITER magnets and their helium piping. The presented thermometric block design is based on the design developed by CERN for the LHC, which has been further optimized via thermal simulations carried out by CEA. The electronic part of the thermometric chain was entirely developed by the CEA and is presented in detail: it is based on lock-in measurement and small-signal amplification, and also provides a web interface and software to an industrial PLC. This measuring device provides a reliable, accurate, electromagnetically immune, and fast (up to 100 Hz bandwidth) system for resistive temperature sensors between a few ohms and 100 kΩ. The flowmeters (venturi type) which make up part of the helium mass flow measurement chain have been completely designed, and manufacturing is ongoing. The behaviour of the helium gas has been studied in detail using ANSYS CFX software in order to obtain the same differential pressure for all types of flowmeters. Measurement uncertainties have been estimated and the influence of input parameters has been studied. Mechanical calculations have been performed to guarantee the mechanical strength of the venturis required for pressure equipment operating in a nuclear environment. In order to complete the helium mass flow measurement chain, different technologies of absolute and differential pressure sensors have been tested in an applied magnetic field to identify equipment compatible with the ITER environment.
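A venturi meter infers mass flow from the differential pressure between the pipe and the throat. The incompressible, ISO 5167-style relation can be sketched as follows; the discharge coefficient and dimensions below are illustrative placeholders, not ITER values:

```python
import math

def venturi_mass_flow(dp, rho, d_throat, d_pipe, c_discharge=0.98):
    """Mass flow [kg/s] through a venturi from differential pressure.

    dp        : differential pressure between pipe and throat [Pa]
    rho       : fluid density [kg/m^3]
    d_throat  : throat diameter [m]
    d_pipe    : upstream pipe diameter [m]
    c_discharge : discharge coefficient (0.98 is a generic placeholder)

    q_m = C * A_throat * sqrt(2 * rho * dp / (1 - beta^4)),
    with beta = d_throat / d_pipe (incompressible form, expansibility = 1).
    """
    beta = d_throat / d_pipe
    area = math.pi / 4.0 * d_throat ** 2
    return c_discharge * area * math.sqrt(2.0 * rho * dp / (1.0 - beta ** 4))
```

Note the square-root dependence: quadrupling the differential pressure only doubles the indicated flow, which is why uncertainty analysis of the pressure measurement matters so much.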
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
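One concrete way to plan sample size for a narrow reliability confidence interval is to invert a Feldt-type F interval for coefficient alpha and increase n until the expected width falls below a target. This is a simplified sketch of the accuracy-in-parameter-estimation idea, not the authors' two methods:

```python
from scipy.stats import f

def alpha_feldt_ci(alpha_hat, n, k, conf=0.95):
    """Feldt-type F confidence interval for Cronbach's alpha,
    given n subjects and k items (a textbook approximation)."""
    df1, df2 = n - 1, (n - 1) * (k - 1)
    g = 1.0 - conf
    lower = 1.0 - (1.0 - alpha_hat) / f.ppf(g / 2.0, df1, df2)
    upper = 1.0 - (1.0 - alpha_hat) / f.ppf(1.0 - g / 2.0, df1, df2)
    return lower, upper

def n_for_ci_width(alpha_hat, k, target_width, conf=0.95, n_max=100000):
    """Smallest n whose Feldt interval is narrower than target_width."""
    for n in range(10, n_max):
        lo, hi = alpha_feldt_ci(alpha_hat, n, k, conf)
        if hi - lo < target_width:
            return n
    return None
```

For instance, planning for a width of 0.1 around an anticipated alpha of 0.8 with 10 items requires on the order of a hundred subjects, and the required n grows quickly as the target width shrinks.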
Software reliability: Application of a reliability model to requirements error analysis
NASA Technical Reports Server (NTRS)
Logan, J.
1980-01-01
The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.
The Development of Point Doppler Velocimeter Data Acquisition and Processing Software
NASA Technical Reports Server (NTRS)
Cavone, Angelo A.
2008-01-01
In order to develop efficient and quiet aircraft and validate Computational Fluid Dynamics predictions, aerodynamic researchers require flow parameter measurements to characterize flow fields about wind tunnel models and jet flows. A one-component Point Doppler Velocimeter (pDv), a non-intrusive, laser-based instrument, was constructed using a design/develop/test/validate/deploy approach. A primary component of the instrument is the software required for system control/management and data collection/reduction. This software, along with evaluation algorithms, advanced pDv from a laboratory curiosity to a production-level instrument. Simultaneous pDv and pitot probe velocity measurements obtained at the centerline of a flow exiting a two-inch jet matched within 0.4%. Flow turbulence spectra obtained with pDv and a hot-wire detected, with equal dynamic range, the primary and secondary harmonics produced by the fan driving the flow. Novel hardware and software methods were developed, tested, and incorporated into the system to eliminate and/or minimize error sources and improve system reliability.
Fascione, Jeanna M; Crews, Ryan T; Wrobel, James S
2012-01-01
Identifying the variability of footprint measurement collection techniques and the reliability of footprint measurements would assist with appropriate clinical foot posture appraisal. We sought to identify relationships between these measures in a healthy population. On 30 healthy participants, midgait dynamic footprint measurements were collected using an ink mat, paper pedography, and electronic pedography. The footprints were then digitized, and the following footprint indices were calculated with photo digital planimetry software: footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index. Differences between techniques were identified with repeated-measures analysis of variance with post hoc test of Scheffe. In addition, to assess practical similarities between the different methods, intraclass correlation coefficients (ICCs) were calculated. To assess intrarater reliability, footprint indices were calculated twice on 10 randomly selected ink mat footprint measurements, and the ICC was calculated. Dynamic footprint measurements collected with an ink mat significantly differed from those collected with paper pedography (ICC, 0.85-0.96) and electronic pedography (ICC, 0.29-0.79), regardless of the practical similarities noted with ICC values (P = .00). Intrarater reliability for dynamic ink mat footprint measurements was high for the footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index (ICC, 0.74-0.99). Footprint measurements collected with various techniques demonstrate differences. Interchangeable use of exact values without adjustment is not advised. Intrarater reliability of a single method (ink mat) was found to be high.
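Of the indices above, the arch index (middle-third contact area divided by total contact area, toes excluded) is straightforward to compute once a footprint has been digitized to a binary mask. A hypothetical sketch, assuming the mask rows run heel to toe with the toes already cropped:

```python
import numpy as np

def arch_index(mask):
    """Cavanagh-Rodgers-style arch index from a binary footprint mask.

    mask: 2-D boolean/0-1 array, rows ordered heel -> toe, toes cropped.
    Returns (contact area of the middle third) / (total contact area).
    A higher value indicates a flatter (lower-arched) foot.
    """
    m = np.asarray(mask, dtype=bool)
    total = m.sum()
    if total == 0:
        raise ValueError("empty footprint mask")
    third = m.shape[0] // 3
    middle = m[third:2 * third].sum()
    return middle / total
```

A fully flat print (contact everywhere) yields an index of about 1/3, while a high-arched print with little midfoot contact yields a value near 0.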
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
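Inference in a Bayesian Belief Network reduces to sums of products over conditional probability tables. A toy two-node sketch (team skill → product quality) with made-up CPTs, not the paper's actual network or probabilities:

```python
# Hypothetical prior over team skill and CPT for quality given skill.
P_SKILL = {"high": 0.6, "low": 0.4}
P_QUALITY_GIVEN_SKILL = {
    "high": {"good": 0.9, "poor": 0.1},
    "low":  {"good": 0.5, "poor": 0.5},
}

def p_quality(q):
    """Marginal P(quality=q), summing out the skill node."""
    return sum(P_SKILL[s] * P_QUALITY_GIVEN_SKILL[s][q] for s in P_SKILL)

def p_skill_given_quality(s, q):
    """Posterior P(skill=s | quality=q) by Bayes' rule."""
    return P_SKILL[s] * P_QUALITY_GIVEN_SKILL[s][q] / p_quality(q)
```

With these numbers, observing a good-quality product raises the belief that the team is highly skilled from 0.6 to about 0.73; a real network chains many such nodes (process maturity, problem complexity) the same way.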
Omari, Taher I.; Savilampi, Johanna; Kokkinn, Karmen; Schar, Mistyka; Lamvik, Kristin; Doeltgen, Sebastian; Cock, Charles
2016-01-01
Purpose. We evaluated the intra- and interrater agreement and test-retest reliability of analyst derivation of swallow function variables based on repeated high resolution manometry with impedance measurements. Methods. Five subjects swallowed 10 × 10 mL saline on two occasions one week apart producing a database of 100 swallows. Swallows were repeat-analysed by six observers using software. Swallow variables were indicative of contractility, intrabolus pressure, and flow timing. Results. The average intraclass correlation coefficients (ICC) for intra- and interrater comparisons of all variable means showed substantial to excellent agreement (intrarater ICC 0.85–1.00; mean interrater ICC 0.77–1.00). Test-retest results were less reliable. ICC for test-retest comparisons ranged from slight to excellent depending on the class of variable. Contractility variables differed most in terms of test-retest reliability. Amongst contractility variables, UES basal pressure showed excellent test-retest agreement (mean ICC 0.94), measures of UES postrelaxation contractile pressure showed moderate to substantial test-retest agreement (mean Interrater ICC 0.47–0.67), and test-retest agreement of pharyngeal contractile pressure ranged from slight to substantial (mean Interrater ICC 0.15–0.61). Conclusions. Test-retest reliability of HRIM measures depends on the class of variable. Measures of bolus distension pressure and flow timing appear to be more test-retest reliable than measures of contractility. PMID:27190520
Accuracy and reliability of stitched cone-beam computed tomography images.
Egbert, Nicholas; Cagna, David R; Ahuja, Swati; Wicks, Russell A
2015-03-01
This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.
Standards for Environmental Measurement Using GIS: Toward a Protocol for Protocols.
Forsyth, Ann; Schmitz, Kathryn H; Oakes, Michael; Zimmerman, Jason; Koepp, Joel
2006-02-01
Interdisciplinary research regarding how the built environment influences physical activity has recently increased. Many research projects conducted jointly by public health and environmental design professionals are using geographic information systems (GIS) to objectively measure the built environment. Numerous methodological issues remain, however, and environmental measurements have not been well documented with accepted, common definitions of valid, reliable variables. This paper proposes how to create and document standardized definitions for measures of environmental variables using GIS with the ultimate goal of developing reliable, valid measures. Inherent problems with software and data that hamper environmental measurement can be offset by protocols combining clear conceptual bases with detailed measurement instructions. Examples demonstrate how protocols can more clearly translate concepts into specific measurement. This paper provides a model for developing protocols to allow high quality comparative research on relationships between the environment and physical activity and other outcomes of public health interest.
Troester, Jordan C; Jasmin, Jason G; Duffield, Rob
2018-06-01
The present study examined the inter-trial (within test) and inter-test (between test) reliability of single-leg balance and single-leg landing measures performed on a force plate in professional rugby union players using commercially available software (SpartaMARS, Menlo Park, USA). Twenty-four players undertook test-retest measures on two occasions (7 days apart) on the first training day of two respective pre-season weeks following 48 h rest and similar weekly training loads. Two 20 s single-leg balance trials were performed on a force plate with eyes closed. Three single-leg landing trials were performed by jumping off two feet and landing on one foot in the middle of a force plate 1 m from the starting position. Single-leg balance results demonstrated acceptable inter-trial reliability (ICC = 0.60-0.81, CV = 11-13%) for sway velocity, anterior-posterior sway velocity, and mediolateral sway velocity variables. Acceptable inter-test reliability (ICC = 0.61-0.89, CV = 7-13%) was evident for all variables except mediolateral sway velocity on the dominant leg (ICC = 0.41, CV = 15%). Single-leg landing results only demonstrated acceptable inter-trial reliability for force-based measures of relative peak landing force and impulse (ICC = 0.54-0.72, CV = 9-15%). Inter-test results indicate improved reliability through the averaging of three trials, with force-based measures again demonstrating acceptable reliability (ICC = 0.58-0.71, CV = 7-14%). Of the variables investigated here, total sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing performance, respectively. These measures should be considered for monitoring potential changes in postural control in professional rugby union.
NASA's Approach to Software Assurance
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2015-01-01
NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security, and verification and validation of software is brought together in one discipline: software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. As an umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.
Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying
2014-03-01
The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.
SenseMyHeart: A cloud service and API for wearable heart monitors.
Pinto Silva, P M; Silva Cunha, J P
2015-01-01
In the era of ubiquitous computing, the growing adoption of wearable systems and body sensor networks is paving the way for new research and software for cardiovascular intensity, energy expenditure, and stress and fatigue detection through cardiovascular monitoring. Several systems have received clinical certification and provide huge amounts of reliable heart-related data on a continuous basis. PhysioNet provides equally reliable open-source software tools for ECG processing and analysis that can be combined with these devices. However, this software remains difficult to use in a mobile environment and for researchers unfamiliar with Linux-based systems. In the present paper we present an approach that aims at tackling these limitations by developing a cloud service that provides an API for a PhysioNet-based pipeline for ECG processing and Heart Rate Variability measurement. We describe the proposed solution, along with its advantages and tradeoffs. We also present some client tools (Windows and Android) and several projects where the developed cloud service has been used successfully as a standard for Heart Rate and Heart Rate Variability studies in different scenarios.
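The time-domain HRV measures such a pipeline typically reports (SDNN, RMSSD, pNN50) can be sketched in a few lines. This is a generic illustration with invented RR intervals, not the SenseMyHeart API or PhysioNet code:

```python
import math

def hrv_time_domain(rr_ms):
    """Compute basic time-domain HRV measures from RR intervals (milliseconds)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of all RR intervals (sample std. dev.)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive RR differences
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    # pNN50: fraction of successive differences exceeding 50 ms
    pnn50 = sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}

# hypothetical RR series (ms) extracted from an ECG by a QRS detector
rr = [812, 790, 805, 861, 780, 795, 830, 844]
metrics = hrv_time_domain(rr)
```

In a real deployment the RR series would come from a clinically validated QRS detector; the statistics themselves are standard.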
Control System Upgrade for a Mass Property Measurement Facility
NASA Technical Reports Server (NTRS)
Chambers, William; Hinkle, R. Kenneth (Technical Monitor)
2002-01-01
The Mass Property Measurement Facility (MPMF) at the Goddard Space Flight Center has undergone modifications to ensure the safety of flight payloads and the measurement facility. The MPMF has been technically updated to improve reliability and increase the accuracy of the measurements. Modifications include the replacement of outdated electronics with a computer-based software control system, the addition of a secondary gas supply in case of a catastrophic failure of the primary gas supply, and a motor-controlled emergency stopping feature instead of a hard stop.
NASA Astrophysics Data System (ADS)
Dulo, D. A.
Safety-critical software systems permeate spacecraft, and in a long-term venture like a starship they would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them, resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure on long journeys away from home. A single software failure could have catastrophic results for the spacecraft and the crew onboard. This paper offers a new approach to developing safe, reliable software systems by focusing not on the traditional safety/reliability engineering paradigms but rather on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time, as a safety valve should failure occur, to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long-term software safety, reliability, and resilience would be present for a successful long-term starship mission.
FabricS: A user-friendly, complete and robust software for particle shape-fabric analysis
NASA Astrophysics Data System (ADS)
Moreno Chávez, G.; Castillo Rivera, F.; Sarocchi, D.; Borselli, L.; Rodríguez-Sedano, L. A.
2018-06-01
Shape-fabric is a textural parameter related to the spatial arrangement of elongated particles in geological samples. Its usefulness spans a range from sedimentary petrology to igneous and metamorphic petrology. Independently of the process being studied, when a material flows, the elongated particles are oriented with the major axis in the direction of flow. In sedimentary petrology this information has been used for studies of paleo-flow direction of turbidites, the origin of quartz sediments, and locating ignimbrite vents, among others. In addition to flow direction and its polarity, the method enables flow rheology to be inferred. The use of shape-fabric has been limited by the difficulty of automatically measuring particles and analyzing them with reliable circular statistics programs, which dampened interest in the method for a long time. Shape-fabric measurement has increased in popularity since the 1980s thanks to the development of new image analysis techniques and circular statistics software. However, the programs currently available are unreliable or outdated, are incompatible with newer operating systems, or require programming skills. The goal of our work is to develop a user-friendly program, in the MATLAB environment, with a graphical user interface that can process images, includes editing functions and thresholds (elongation and size) for selecting a particle population, and analyzes it with reliable circular statistics algorithms. Moreover, the method also has to produce rose diagrams, orientation vectors, and a complete series of statistical parameters. All these requirements are met by our new software. In this paper, we briefly explain the methodology, from collection of oriented samples in the field to the minimum number of particles needed to obtain reliable fabric data. We obtained the data using specific statistical tests and taking into account the degree of iso-orientation of the samples and the required degree of reliability.
The program has been verified by means of several simulations performed using appropriately designed features and by analyzing real samples.
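The core circular-statistics step described above, averaging axial orientation data, can be sketched with the standard angle-doubling method: because a particle axis at 10° is the same axis as one at 190°, angles are doubled before averaging and the mean is halved afterwards. A generic illustration with invented azimuths, not FabricS code:

```python
import math

def axial_mean_orientation(angles_deg):
    """Mean orientation and mean resultant length for axial (0-180 degree) data.
    Angles are doubled so opposite directions of the same axis coincide,
    vector-averaged, then the mean angle is halved back."""
    n = len(angles_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in angles_deg)
    s = sum(math.sin(math.radians(2 * a)) for a in angles_deg)
    r = math.hypot(c, s) / n                      # resultant length, 0..1
    mean2 = math.degrees(math.atan2(s, c)) % 360.0
    return mean2 / 2.0, r                         # mean orientation in 0..180

# hypothetical long-axis azimuths of elongated particles (degrees)
orientations = [10, 15, 170, 5, 175, 12]
mean_dir, strength = axial_mean_orientation(orientations)
```

A resultant length near 1 indicates strongly iso-oriented particles; near 0, no preferred orientation.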
NASA Technical Reports Server (NTRS)
Fisher, Marcus S.; Northey, Jeffrey; Stanton, William
2014-01-01
The purpose of this presentation is to outline how the NASA Independent Verification and Validation (IVV) Program helps to build reliability into the Space Mission Software Systems (SMSSs) that its customers develop.
ScriptingRT: A Software Library for Collecting Response Latencies in Online Studies of Cognition
Schubert, Thomas W.; Murteira, Carla; Collins, Elizabeth C.; Lopes, Diniz
2013-01-01
ScriptingRT is a new open source tool to collect response latencies in online studies of human cognition. ScriptingRT studies run as Flash applets in enabled browsers. ScriptingRT provides the building blocks of response latency studies, which are then combined with generic Apache Flex programming. Six studies evaluate the performance of ScriptingRT empirically. Studies 1–3 use specialized hardware to measure variance of response time measurement and stimulus presentation timing. Studies 4–6 implement a Stroop paradigm and run it both online and in the laboratory, comparing ScriptingRT to other response latency software. Altogether, the studies show that Flash programs developed in ScriptingRT show a small lag and an increased variance in response latencies. However, this did not significantly influence measured effects: The Stroop effect was reliably replicated in all studies, and the found effects did not depend on the software used. We conclude that ScriptingRT can be used to test response latency effects online. PMID:23805326
Electronic and software systems of an automated portable static mass spectrometer
NASA Astrophysics Data System (ADS)
Chichagov, Yu. V.; Bogdanov, A. A.; Lebedev, D. S.; Kogan, V. T.; Tubol'tsev, Yu. V.; Kozlenok, A. V.; Moroshkin, V. S.; Berezina, A. V.
2017-01-01
The electronic systems of a small high-sensitivity static mass spectrometer and software and hardware tools, which allow one to determine trace concentrations of gases and volatile compounds in air and water samples in real time, have been characterized. These systems and tools have been used to set up the device, control the process of measurement, synchronize this process with accompanying measurements, maintain reliable operation of the device, process the obtained results automatically, and visualize and store them. The developed software and hardware tools allow one to conduct continuous measurements for up to 100 h and provide an opportunity for personnel with no special training to perform maintenance on the device. The test results showed that mobile mass spectrometers for geophysical and medical research, which were fitted with these systems, had a determination limit for target compounds as low as several ppb(m) and a mass resolving power (depending on the current task) as high as 250.
Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott
2015-01-01
The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
BurnCase 3D software validation study: Burn size measurement accuracy and inter-rater reliability.
Parvizi, Daryousch; Giretzlehner, Michael; Wurzer, Paul; Klein, Limor Dinur; Shoham, Yaron; Bohanon, Fredrick J; Haller, Herbert L; Tuca, Alexandru; Branski, Ludwik K; Lumenta, David B; Herndon, David N; Kamolz, Lars-P
2016-03-01
The aim of this study was to compare the accuracy of burn size estimation using the computer-assisted software BurnCase 3D (RISC Software GmbH, Hagenberg, Austria) with that using a 2D scan, considered to be the actual burn size. Thirty artificial burn areas were pre-planned and prepared on three mannequins (one child, one female, and one male). Five trained physicians (raters) were asked to assess the size of all wound areas using BurnCase 3D software. The results were then compared with the real wound areas, as determined by 2D planimetry imaging. To examine inter-rater reliability, we performed an intraclass correlation analysis with a 95% confidence interval. The mean wound area estimations of the five raters using BurnCase 3D were in total 20.7±0.9% for the child, 27.2±1.5% for the female and 16.5±0.1% for the male mannequin. Our analysis showed relative overestimations of 0.4%, 2.8% and 1.5% for the child, female and male mannequins, respectively, compared to the 2D scan. The intraclass correlation between the single raters for mean percentage of the artificial burn areas was 98.6%. There was also a high intraclass correlation between the single raters and the 2D scan. BurnCase 3D is a valid and reliable tool for the determination of total body surface area burned in standard models. Further clinical studies including different pediatric and overweight adult mannequins are warranted. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Ashnagar, Zinat; Hadian, Mohammad Reza; Olyaei, Gholamreza; Talebian Moghadam, Saeed; Rezasoltani, Asghar; Saeedi, Hassan; Yekaninejad, Mir Saeed; Mahmoodi, Rahimeh
2017-07-01
The aim of this study was to investigate the intratester reliability of a digital photographic method for quantifying static lower extremity alignment in individuals with flatfeet and normal foot types. Thirteen females with flexible flatfeet and nine females with normal foot types were recruited from university communities. Reflective markers were attached over the participants' body landmarks. Frontal and sagittal plane photographs were taken while the participants were in a standardized standing position. The markers were removed and after 30 min the same procedure was repeated. Pelvic angle, quadriceps angle, tibiofemoral angle, genu recurvatum, femur length and tibia length were measured from the photographs using the ImageJ software. All measured variables demonstrated good to excellent intratester reliability using digital photography in both the flatfeet (ICC: 0.79-0.93) and normal foot type (ICC: 0.84-0.97) groups. The findings of the current study indicate that digital photography is a highly reliable method of measurement for assessing lower extremity alignment in both flatfeet and normal foot type groups. Copyright © 2016. Published by Elsevier Ltd.
Esposito, Stefano Andrea; Huybrechts, Bart; Slagmolen, Pieter; Cotti, Elisabetta; Coucke, Wim; Pauwels, Ruben; Lambrechts, Paul; Jacobs, Reinhilde
2013-09-01
The routine use of high-resolution images derived from 3-dimensional cone-beam computed tomography (CBCT) datasets enables the linear measurement of lesions in the maxillary and mandibular bones on 3 planes of space. Measurements on different planes would make it possible to obtain real volumetric assessments. In this study, we tested, in vitro, the accuracy and reliability of new dedicated software developed for volumetric lesion assessment in clinical endodontics. Twenty-seven bone defects were created around the apices of 8 teeth in 1 young bovine mandible to simulate periapical lesions of different sizes and shapes. The volume of each defect was determined by taking an impression of the defect using a silicone material. The samples were scanned using an Accuitomo 170 CBCT (J. Morita Mfg Co, Kyoto, Japan), and the data were uploaded into a newly developed dedicated software tool. Two endodontists acted as independent and calibrated observers. They analyzed each bone defect for volume. The difference between the direct volumetric measurements and the measurements obtained with the CBCT images was statistically assessed using a lack-of-fit test. A correlation study was undertaken using the Pearson product-moment correlation coefficient. Intra- and interobserver agreement was also evaluated. The results showed a good fit and strong correlation between both volume measurements (ρ > 0.9) with excellent inter- and intraobserver agreement. Using this software, CBCT proved to be a reliable method in vitro for the estimation of endodontic lesion volumes in bovine jaws. Therefore, it may constitute a new, validated technique for the accurate evaluation and follow-up of apical periodontitis. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective
NASA Technical Reports Server (NTRS)
Reinholtz, Kirk
2008-01-01
This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy; however, more capabilities are now required, primarily in the software, while providing the same or a higher degree of reliability. The complexity of designing such an autonomous system is great, particularly while attempting to address the needs for increased capabilities and high reliability without increased time or money. The efforts to develop programming models for the new hardware and the integration of software architecture are highlighted.
Reliability of quantitative EEG (qEEG) measures and LORETA current source density at 30 days.
Cannon, Rex L; Baldwin, Debora R; Shaw, Tiffany L; Diloreto, Dominic J; Phillips, Sherman M; Scruggs, Annie M; Riehl, Timothy C
2012-06-14
There is growing interest in using quantitative EEG and LORETA current source density in clinical and research settings. Importantly, if these indices are to be employed in clinical settings, then the reliability of these measures is of great concern. Neuroguide (Applied Neurosciences) is sophisticated software developed for the analysis of power and connectivity measures of the EEG as well as LORETA current source density. To date there are relatively few data evaluating topographical EEG reliability contrasts for all 19 channels, and no studies have evaluated reliability for LORETA calculations. We obtained 4-min eyes-closed and eyes-opened EEG recordings at 30-day intervals. The EEG was analyzed in Neuroguide, and FFT power, coherence and phase were computed for traditional frequency bands (delta, theta, alpha and beta); LORETA current source density was calculated in 1 Hz increments and summed for total power in eight regions of interest (ROI). In order to obtain a robust measure of reliability we utilized a random effects model with an absolute agreement definition. The results show very good reproducibility for total absolute power and coherence. Phase shows lower reliability coefficients. LORETA current source density shows very good reliability, with an average of 0.81 for ECB and 0.82 for EOB. Similarly, the eight regions of interest show good to very good agreement across time. Implications for future directions and use of qEEG and LORETA in clinical populations are discussed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
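The "random effects model with an absolute agreement definition" used above corresponds to the ICC(2,1) form, which can be computed from the two-way ANOVA mean squares. A minimal sketch with invented session scores, not the Neuroguide implementation:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: list of subjects (rows), each a list of k repeated measurements."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # mean squares for subjects (rows), sessions (columns), and residual error
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical qEEG band-power values: five subjects, two sessions 30 days apart
scores = [[10.2, 10.5], [12.1, 11.8], [9.4, 9.9], [14.0, 13.6], [11.2, 11.5]]
reliability = icc_2_1(scores)
```

The absolute-agreement form penalizes systematic session-to-session shifts (the MSC term), which is why it is preferred for test-retest reliability.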
General test plan redundant sensor strapdown IMU evaluation program
NASA Technical Reports Server (NTRS)
Hartwell, T.; Irwin, H. A.; Miyatake, Y.; Wedekind, D. E.
1971-01-01
The general test plan for a redundant sensor strapdown inertial measuring unit evaluation program is presented. The inertial unit contains six gyros and three orthogonal accelerometers. The software incorporates failure detection and correction logic and a land vehicle navigation program. The principal objective of the test is a demonstration of the practicability, reliability, and performance of the inertial measuring unit with failure detection and correction in operational environments.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
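The architecture-based idea described above, with each component a state in a discrete-time Markov chain and transfer probabilities discounted by component reliabilities, can be sketched in Cheung's user-oriented style. The three-component tool and all numbers below are hypothetical, not taken from the Forensic Toolkit Imager case study:

```python
def architecture_reliability(P, R, start=0, end=None, iters=500):
    """Cheung-style architecture-based reliability sketch.
    P[i][j]: probability control transfers from component i to component j.
    R[i]: reliability of component i. Returns the probability of reaching
    the terminal component from `start` with every visited component succeeding."""
    n = len(R)
    end = n - 1 if end is None else end
    # Q[i][j] = R[i] * P[i][j]: component i executes correctly AND hands off to j
    Q = [[R[i] * P[i][j] for j in range(n)] for i in range(n)]
    # accumulate row `start` of S = I + Q + Q^2 + ... ~ (I - Q)^-1 iteratively
    v = [1.0 if j == start else 0.0 for j in range(n)]
    s = v[:]
    for _ in range(iters):
        v = [sum(v[i] * Q[i][j] for i in range(n)) for j in range(n)]
        s = [s[j] + v[j] for j in range(n)]
    return s[end] * R[end]

# hypothetical 3-component forensic tool: acquire -> parse -> report
P = [[0.0, 1.0, 0.0],   # acquisition always hands off to parsing
     [0.3, 0.0, 0.7],   # parsing loops back to acquisition 30% of the time
     [0.0, 0.0, 0.0]]   # reporting is terminal
R = [0.99, 0.97, 0.995]
system_rel = architecture_reliability(P, R)
```

Designers can vary the topology (P) and component reliabilities (R) to compare alternative designs, which is exactly the use the abstract proposes.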
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who write the code make mistakes, and generating a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, a comparison with in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.
Simulating flow around scaled model of a hypersonic vehicle in wind tunnel
NASA Astrophysics Data System (ADS)
Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.
2016-11-01
A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs have been performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used for calculations of the aerodynamic characteristics for any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.
Software Reliability Issues Concerning Large and Safety Critical Software Systems
NASA Technical Reports Server (NTRS)
Kamel, Khaled; Brown, Barbara
1996-01-01
This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.
Troester, Jordan C.; Jasmin, Jason G.; Duffield, Rob
2018-01-01
The present study examined the inter-trial (within-test) and inter-test (between-test) reliability of single-leg balance and single-leg landing measures performed on a force plate in professional rugby union players using commercially available software (SpartaMARS, Menlo Park, USA). Twenty-four players undertook test-retest measures on two occasions (7 days apart) on the first training day of two respective pre-season weeks, following 48 h rest and similar weekly training loads. Two 20-s single-leg balance trials were performed on a force plate with eyes closed. Three single-leg landing trials were performed by jumping off two feet and landing on one foot in the middle of a force plate 1 m from the starting position. Single-leg balance results demonstrated acceptable inter-trial reliability (ICC = 0.60-0.81, CV = 11-13%) for sway velocity, anterior-posterior sway velocity, and mediolateral sway velocity variables. Acceptable inter-test reliability (ICC = 0.61-0.89, CV = 7-13%) was evident for all variables except mediolateral sway velocity on the dominant leg (ICC = 0.41, CV = 15%). Single-leg landing results only demonstrated acceptable inter-trial reliability for force-based measures of relative peak landing force and impulse (ICC = 0.54-0.72, CV = 9-15%). Inter-test results indicate improved reliability through the averaging of three trials, with force-based measures again demonstrating acceptable reliability (ICC = 0.58-0.71, CV = 7-14%). Of the variables investigated here, total sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing performance, respectively. These measures should be considered for monitoring potential changes in postural control in professional rugby union. Key points: Single-leg balance demonstrated acceptable inter-trial and inter-test reliability.
Single-leg landing demonstrated good inter-trial and inter-test reliability for measures of relative peak landing force and relative impulse, but not time to stabilization. Of the variables investigated, sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing, respectively, and should be considered for monitoring changes in postural control. PMID:29769817
2013-01-01
Summary of background data: Recent smartphones, such as the iPhone, are often equipped with an accelerometer and magnetometer which, through software applications, can perform various inclinometric functions. Although these applications are intended for recreational use, they have the potential to measure and quantify range of motion. The purpose of this study was to estimate the intra- and inter-rater reliability as well as the criterion validity of the clinometer and compass applications of the iPhone in the assessment of cervical range of motion in healthy participants. Methods: The sample consisted of 28 healthy participants. Two examiners measured the cervical range of motion of each participant twice using the iPhone (for the estimation of intra- and inter-rater reliability) and once with the CROM device (for the estimation of criterion validity). Estimates of reliability and validity were then established using the intraclass correlation coefficient (ICC). Results: We observed a moderate intra-rater reliability for each movement (ICC = 0.65-0.85) but a poor inter-rater reliability (ICC < 0.60). For the criterion validity, the ICCs are moderate (>0.50) to good (>0.65) for movements of flexion, extension, lateral flexions and right rotation, but poor (<0.50) for left rotation. Conclusion: We found good intra-rater reliability and lower inter-rater reliability. When compared to the gold standard, these applications showed moderate to good validity. However, before using the iPhone as an outcome measure in clinical settings, studies should be done on patients presenting with cervical problems. PMID:23829201
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamp, Jason E.; Eddy, John P.; Jensen, Richard P.
Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD.
Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: Mike Hightower, who has been the key driving force for Energy Surety Microgrids; Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations; Merrill Smith, U.S. Department of Energy SPIDERS Program Manager; Ross Roley and Rich Trundy from U.S. Pacific Command; Bill Waugaman and Bill Beary from U.S. Northern Command; Tarek Abdallah, Melanie Johnson, and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory; and colleagues from Sandia National Laboratories (SNL) for their reviews, suggestions, and participation in the work.
Development of a Standard Set of Software Indicators for Aeronautical Systems Center.
1992-09-01
29:12). The composite models listed include COCOMO and the Software Productivity, Quality, and Reliability Model (SPQR) (29:12). The SPQR model was...determine the values of the 68 input parameters. Source provides no specifics. Indicator Name: SPQR (SW Productivity, Qual, Reliability). Indicator Class
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage, using software reliability growth models. Two different methods of using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the relative error of the predicted number of failures over future finite time intervals with respect to the number of failures eventually observed during those intervals. Six individual models and eight model-selection variants are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
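The prediction scheme described above can be sketched with one simple growth model. The following is a minimal illustration using the Goel-Okumoto model fitted by a crude grid search; the weekly failure counts are invented, and the paper's models and fitting methods differ:

```python
import math

# Hypothetical cumulative failure counts at the end of each week of an
# initial usage period (illustrative data, not from the paper).
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
cum_failures = [12, 21, 28, 33, 37, 40, 42, 44]

def go_mean(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b):
    """Sum of squared errors between model and observed counts."""
    return sum((go_mean(t, a, b) - y) ** 2 for t, y in zip(weeks, cum_failures))

# Crude grid search for (a, b); real tools use maximum likelihood.
a_hat, b_hat = min(
    ((a, b) for a in range(40, 80) for b in [i / 100 for i in range(5, 60)]),
    key=lambda p: sse(*p),
)

# Predicted failures in the future interval (8, 16]; the relative-error
# criterion compares this with what is eventually observed there.
predicted = go_mean(16, a_hat, b_hat) - go_mean(8, a_hat, b_hat)
```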
Česaitienė, Gabrielė; Česaitis, Kęstutis; Junevičius, Jonas; Venskutonis, Tadas
2017-07-04
BACKGROUND The aim of this study was to compare the reliability of panoramic radiography (PR) and cone beam computed tomography (CBCT) in the evaluation of the distance of the roots of lateral teeth to the inferior alveolar nerve canal (IANC). MATERIAL AND METHODS 100 PR and 100 CBCT images that met the selection criteria were selected from the database. In PR images, the distances were measured using an electronic caliper with 0.01 mm accuracy and white light x-ray film reviewer. Actual values of the measurements were calculated taking into consideration the magnification used in PR images (130%). Measurements on CBCT images were performed using i-CAT Vision software. Statistical data analysis was performed using R software and applying Welch's t-test and the Wilcoxon test. RESULTS There was no statistically significant difference in the mean distance from the root of the second premolar and the mesial and distal roots of the first molar to the IANC between PR and CBCT images. The difference in the mean distance from the mesial and distal roots of the second and the third molars to the IANC measured in PR and CBCT images was statistically significant. CONCLUSIONS PR may be uninformative or misleading when measuring the distance from the mesial and distal roots of the second and the third molars to the IANC.
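The magnification correction applied to the PR measurements is simple to state in code. A minimal sketch, assuming the fixed 130% magnification reported in the study (the 6.5 mm reading is an invented example):

```python
# Correcting a panoramic-radiograph (PR) caliper reading for the fixed
# 130% magnification reported in the study. Values are illustrative.
MAGNIFICATION = 1.30  # PR image is 130% of anatomic size

def actual_distance(measured_mm: float) -> float:
    """Convert a distance measured on the PR image to anatomic size."""
    return measured_mm / MAGNIFICATION

# A 6.5 mm on-image reading corresponds to 5.0 mm of anatomy.
root_to_ianc = actual_distance(6.5)
```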
Farah, Ra'fat I
2016-01-01
The objectives of this in vitro study were: 1) to test the agreement among color coordinate differences and total color difference (ΔL*, ΔC*, Δh°, and ΔE) measurements obtained by digital image analysis (DIA) and spectrophotometer, and 2) to test the reliability of each method for obtaining color differences. A digital camera was used to record standardized images of each of the 15 shade tabs from the IPS e.max shade guide placed edge-to-edge in a phantom head with a reference shade tab. The images were analyzed using image-editing software (Adobe Photoshop) to obtain the color differences between the middle area of each test shade tab and the corresponding area of the reference tab. The color differences for the same shade tab areas were also measured using a spectrophotometer. To assess the reliability, measurements for the 15 shade tabs were repeated twice using the two methods. The Intraclass Correlation Coefficient (ICC) and the Dahlberg index were used to calculate agreement and reliability. The total agreement of the two methods for measuring ΔL*, ΔC*, Δh°, and ΔE, according to the ICC, exceeded 0.82. The Dahlberg indices for ΔL* and ΔE were 2.18 and 2.98, respectively. For the reliability calculation, the ICCs for the DIA and the spectrophotometer ΔE were 0.91 and 0.94, respectively. High agreement was obtained between the DIA and spectrophotometer results for the ΔL*, ΔC*, Δh°, and ΔE measurements. Further, the reliability of the measurements for the spectrophotometer was slightly higher than the reliability of all measurements in the DIA.
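The color coordinate differences compared in the study can be computed directly from CIELAB readings. A sketch using the CIE76 ΔE formula, with invented shade-tab values (the study's instruments and data are not reproduced here):

```python
import math

def lab_to_lch(L, a, b):
    """Convert CIELAB to L*, C* (chroma), and h (hue angle in degrees)."""
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360
    return L, C, h

def color_differences(lab_ref, lab_test):
    """Return (dL, dC, dh, dE) between two CIELAB colors (CIE76 dE)."""
    L1, C1, h1 = lab_to_lch(*lab_ref)
    L2, C2, h2 = lab_to_lch(*lab_test)
    dL, dC, dh = L2 - L1, C2 - C1, h2 - h1
    dE = math.sqrt(sum((t - r) ** 2 for r, t in zip(lab_ref, lab_test)))
    return dL, dC, dh, dE

# Illustrative shade-tab readings (not from the study).
reference = (70.0, 1.5, 16.0)
test_tab = (72.0, 2.5, 18.0)
dL, dC, dh, dE = color_differences(reference, test_tab)
```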
NASA Technical Reports Server (NTRS)
Orr, James K.
2010-01-01
This presentation focuses on the Space Shuttle Primary Avionics Software System (PASS) and the people who developed and maintained this system. One theme is to provide quantitative data on software quality and reliability over a 30-year period; the most consistent data relate to code-break discrepancies. Requirements were supplied from external sources, and requirement inspections and measurements were not implemented until later, beginning in 1985. The second theme is the people and organization of PASS: many individuals have supported the PASS project over the entire period while transitioning from company to company and contract to contract, and major events and transitions have impacted morale (both positively and negatively) across the life of the project.
Effectiveness of back-to-back testing
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.
1987-01-01
Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with the relatively high correlation the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that the multiversion software development is a feasible and cost effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
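The basic back-to-back testing procedure, independent of any failure-dependence model, can be sketched as follows. The three "versions" are hypothetical stand-ins for functionally equivalent components, one with a seeded fault:

```python
# Minimal sketch of back-to-back testing: run functionally equivalent
# versions on the same inputs and flag any output disagreement for
# inspection. The version implementations are hypothetical stand-ins.

def version_a(x):  # reference implementation
    return x * x

def version_b(x):  # independently developed equivalent
    return x ** 2

def version_c(x):  # seeded fault: wrong behavior for negative inputs
    return x * abs(x)

def back_to_back(versions, inputs):
    """Return inputs on which the versions disagree (candidate faults)."""
    discrepancies = []
    for x in inputs:
        outputs = {v(x) for v in versions}
        if len(outputs) > 1:
            discrepancies.append(x)
    return discrepancies

found = back_to_back([version_a, version_b, version_c], range(-3, 4))
```

Each discrepancy is then analyzed to decide which component failed; correlated faults, as the abstract notes, are exactly the ones this comparison cannot see.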
Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.
1981-06-01
Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok
The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I and C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
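A core quantity in statistical software testing is the number of failure-free, operationally representative runs needed to support a reliability claim. A minimal sketch of the standard zero-failure bound; the target probability and confidence level are illustrative, not values from the LOCS assessment:

```python
import math

# Zero-failure statistical testing: how many failure-free, operationally
# representative test runs are needed to claim, with confidence conf,
# that the per-demand failure probability is below p_target?
def required_tests(p_target: float, conf: float) -> int:
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - p_target))

# e.g. demonstrating p < 1e-3 at 95% confidence
n_runs = required_tests(1e-3, 0.95)
```

The bound follows from requiring (1 - p_target)^n <= 1 - conf, i.e. that n successes would be implausible if the true failure probability were at the target.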
Non-contact ulcer area calculation system for neuropathic foot ulcer.
Shah, Parth; Mahajan, Siddaram; Nageswaran, Sharmila; Paul, Sathish Kumar; Ebenzer, Mannam
2017-08-11
Around 125,785 new cases of leprosy were detected in India in 2013-14, according to the WHO report on leprosy of September 2015, accounting for approximately 62% of total new cases. Anaesthetic foot caused by leprosy leads to uneven loading of the foot, which leads to ulceration in approximately 20% of cases. Much effort has gone into identifying newer techniques to efficiently monitor the progress of ulcer healing. Current techniques for measuring the size of ulcers have not been found to be very accurate, but are still followed by clinicians across the globe. Quantification of the prognosis of the condition is required to understand the efficacy of current treatment methods and to plan further treatment. This study aims at developing a non-contact technique to precisely measure the size of ulcers in patients affected by leprosy. Using MATLAB software, a GUI was designed to process the acquired ulcer image by segmenting it and calculating the pixel area of the image. The image was further converted to a standard measurement using a reference object. The developed technique was tested on 16 ulcer images acquired from 10 leprosy patients with plantar ulcers. Statistical analysis was done using MedCalc analysis software to find the reliability of the system. The analysis showed a very high correlation coefficient (r = 0.9882) between ulcer area measurements done using the traditional technique and the newly developed technique. The reliability of the newly developed technique was significant at the 99.9% level. The designed non-contact ulcer area calculation system using MATLAB is found to be reliable in calculating the size of ulcers. The technique would give clinicians a reliable tool to monitor the progress of ulcer healing and help modify the treatment protocol if needed. Copyright © 2017 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
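The reference-object scaling at the heart of the technique can be sketched without MATLAB. A toy version on a synthetic, pre-segmented grid (the real system segments a photograph through its GUI):

```python
# Sketch of reference-object scaling for non-contact area measurement:
# count the segmented ulcer pixels and convert to cm^2 using a reference
# object of known size in the same image. The "image" is a synthetic
# 2-D grid (0 = background, 1 = ulcer, 2 = reference object).

def pixel_counts(image):
    ulcer = sum(row.count(1) for row in image)
    ref = sum(row.count(2) for row in image)
    return ulcer, ref

def ulcer_area_cm2(image, ref_area_cm2):
    """Scale the ulcer pixel count by the known reference-object area."""
    ulcer_px, ref_px = pixel_counts(image)
    return ulcer_px * ref_area_cm2 / ref_px

# A 1 cm^2 reference square occupies 4 pixels; the ulcer occupies 10.
image = [
    [0, 1, 1, 0, 2, 2],
    [1, 1, 1, 1, 2, 2],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
]
area = ulcer_area_cm2(image, ref_area_cm2=1.0)
```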
Accuracy and reliability of stitched cone-beam computed tomography images
Egbert, Nicholas; Cagna, David R.; Wicks, Russell A.
2015-01-01
Purpose This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Materials and Methods Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. Results The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm, with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. Conclusion The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets. PMID:25793182
Santiago, Gabriel F; Susarla, Srinivas M; Al Rakan, Mohammed; Coon, Devin; Rada, Erin M; Sarhane, Karim A; Shores, Jamie T; Bonawitz, Steven C; Cooney, Damon; Sacks, Justin; Murphy, Ryan J; Fishman, Elliot K; Brandacher, Gerald; Lee, W P Andrew; Liacouras, Peter; Grant, Gerald; Armand, Mehran; Gordon, Chad R
2014-05-01
Le Fort-based, maxillofacial allotransplantation is a reconstructive alternative gaining clinical acceptance. However, the vast majority of single-jaw transplant recipients demonstrate less-than-ideal skeletal and dental relationships, with suboptimal aesthetic harmony. The purpose of this study was to investigate reproducible cephalometric landmarks in a large-animal model, where refinement of computer-assisted planning, intraoperative navigational guidance, translational bone osteotomies, and comparative surgical techniques could be performed. Cephalometric landmarks that could be translated into the human craniomaxillofacial skeleton, and that would remain reliable following maxillofacial osteotomies with midfacial alloflap inset, were sought on six miniature swine. Le Fort I- and Le Fort III-based alloflaps were harvested in swine with osteotomies, and all alloflaps were either autoreplanted or transplanted. Cephalometric analyses were performed on lateral cephalograms preoperatively and postoperatively. Critical cephalometric data sets were identified with the assistance of surgical planning and virtual prediction software and evaluated for reliability and translational predictability. Several pertinent landmarks and human analogues were identified, including pronasale, zygion, parietale, gonion, gnathion, lower incisor base, and alveolare. Parietale-pronasale-alveolare and parietale-pronasale-lower incisor base were found to be reliable correlates of sellion-nasion-A point angle and sellion-nasion-B point angle measurements in humans, respectively. There is a set of reliable cephalometric landmarks and measurement angles pertinent for use within a translational large-animal model. These craniomaxillofacial landmarks will enable development of novel navigational software technology, improve cutting guide designs, and facilitate exploration of new avenues for investigation and collaboration.
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
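The basic correction the article describes divides the observed slope by the reliability ratio estimated from repeated measurements. A minimal sketch with invented main-study and reliability-study data, using the correlation of duplicates as the reliability estimate (one of several possible estimators, and simpler than the article's model-based methods):

```python
# Sketch of correcting regression dilution bias: the observed slope is
# attenuated toward zero by the reliability ratio lambda, estimated here
# from duplicate measurements of the risk factor in a reliability study.
# All data below are invented for illustration.

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def corr(xs, ys):
    """Pearson correlation, used here as the reliability-ratio estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Main study: error-prone risk factor x against the outcome y.
x_main = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y_main = [1.2, 1.9, 3.3, 3.8, 5.1, 5.7]

# Reliability study: the risk factor measured twice in other individuals.
repeat1 = [1.1, 2.2, 2.9, 4.1, 5.0]
repeat2 = [1.3, 1.9, 3.2, 3.8, 5.2]

observed = slope(x_main, y_main)      # attenuated slope
lam = corr(repeat1, repeat2)          # estimated reliability ratio
corrected = observed / lam            # de-attenuated slope
```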
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
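The pipeline leak-rate example is a natural fit for the conjugate gamma-Poisson model. A minimal sketch with an assumed vague prior and invented event counts; the course's actual analysis may differ:

```python
# Conjugate gamma-Poisson sketch of a leak-rate estimate: with a
# Gamma(alpha, beta) prior on the rate (events per year) and k events
# observed over t years, the posterior is Gamma(alpha + k, beta + t).

def posterior(alpha, beta, k, t):
    return alpha + k, beta + t

def gamma_mean(alpha, beta):
    return alpha / beta

# Vague prior Gamma(1, 1); 2 leaks observed in 10 pipeline-years.
a_post, b_post = posterior(1.0, 1.0, k=2, t=10.0)
rate_estimate = gamma_mean(a_post, b_post)  # posterior mean, events/year
```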
Space Shuttle Software Development and Certification
NASA Technical Reports Server (NTRS)
Orr, James K.; Henderson, Johnnie A
2000-01-01
Man-rated software, "software which is in control of systems and environments upon which human life is critically dependent," must be highly reliable. The Space Shuttle Primary Avionics Software System is an excellent example of such a software system. Lessons learned from more than 20 years of effort have identified basic elements that must be present to achieve this high degree of reliability. The elements include rigorous application of appropriate software development processes, use of trusted tools to support those processes, quantitative process management, and defect elimination and prevention. This presentation highlights methods used within the Space Shuttle project and raises questions that must be addressed to provide similar success in a cost-effective manner on future long-term projects where key application development tools are COTS rather than internally developed custom application development tools.
Automatic Speech Recognition: Reliability and Pedagogical Implications for Teaching Pronunciation
ERIC Educational Resources Information Center
Kim, In-Seok
2006-01-01
This study examines the reliability of automatic speech recognition (ASR) software used to teach English pronunciation, focusing on one particular piece of software, "FluSpeak," as a typical example. Thirty-six Korean English as a Foreign Language (EFL) college students participated in an experiment in which they listened to 15 sentences…
A Survey of Software Reliability Modeling and Estimation
1983-09-01
considered include: the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte Carlo study of the behavior of the least squares...ceedings Number 261, 1979, pp. 34-1, 34-11. Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980
NASA Astrophysics Data System (ADS)
Yusmaita, E.; Nasra, Edi
2018-04-01
This research aims to produce an instrument for measuring chemical literacy in basic chemistry courses on the topic of solubility. The construction of this measuring instrument is adapted to the characteristics of PISA (Programme for International Student Assessment) problems and the syllabus of Basic Chemistry in the KKNI (Indonesian National Qualification Framework). PISA is a cross-country study conducted periodically to monitor the outcomes of learners' achievement in each participating country. So far, studies conducted by PISA include reading literacy, mathematical literacy, and scientific literacy. Referring to the scientific competence of the PISA study on science literacy, an assessment was designed to measure the chemical literacy of chemistry department students at UNP. The research model used is MER (Model of Educational Reconstruction). The validity and reliability of the discourse questions were measured using the ANATES software. Based on these values, a valid and reliable set of chemical literacy questions was obtained: seven limited-response question items on the topic of solubility fall in the valid category, the test reliability value is 0.86, and the items have good difficulty and discrimination indices.
Software service history report
DOT National Transportation Integrated Search
2002-01-01
The safe and reliable operation of software within civil aviation systems and equipment has historically been assured through the application of rigorous design assurance applied during the software development process. Increasingly, manufacturers ar...
Van der Elst, Wim; Molenberghs, Geert; Hilgers, Ralf-Dieter; Verbeke, Geert; Heussen, Nicole
2016-11-01
There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, for example, using the classical Pearson correlation or the intra-class correlation in clustered data. However, contemporary data often have a complex structure that goes well beyond the restrictive assumptions that are needed with the more conventional methods to estimate reliability. In the current paper, we propose a general and flexible modeling approach that allows for the derivation of reliability estimates, standard errors, and confidence intervals, appropriately taking hierarchies and covariates in the data into account. Our methodology is developed for continuous outcomes together with covariates of an arbitrary type. The methodology is illustrated in a case study, and a Web Appendix is provided which details the computations using the R package CorrMixed and the SAS software. Copyright © 2016 John Wiley & Sons, Ltd.
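The simplest of the reliability estimators the paper generalizes is the one-way random-effects intra-class correlation. A sketch from variance components, with invented repeated measurements (the paper's mixed-model machinery in CorrMixed goes well beyond this):

```python
# One-way random-effects intra-class correlation (ICC) from repeated
# measurements. ratings[i] holds the k repeated measurements on subject
# i (illustrative data).
ratings = [
    [9.0, 9.5],
    [7.0, 6.5],
    [5.0, 5.5],
    [3.0, 3.5],
]

n = len(ratings)          # subjects
k = len(ratings[0])       # measurements per subject
grand = sum(sum(r) for r in ratings) / (n * k)
means = [sum(r) / k for r in ratings]

# Between-subject and within-subject mean squares, then ICC(1).
msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
msw = sum((x - m) ** 2 for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
```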
Reproducibility of sonographic measurement of thickness and echogenicity of the plantar fascia.
Cheng, Ju-Wen; Tsai, Wen-Chung; Yu, Tung-Yang; Huang, Kuo-Yao
2012-01-01
To evaluate the intra- and interrater reliability of ultrasonographic measurements of the thickness and echogenicity of the plantar fascia. Eleven patients (20 feet), who complained of inferior heel pain, and 26 volunteers (52 feet) were enrolled. Two sonographers independently imaged the plantar fascia in both longitudinal and transverse planes. Volunteers were assessed twice to evaluate intrarater reliability. Quantitative evaluation of the echogenicity of the plantar fascia was performed by measuring the mean gray level of the region of interest using Digital Imaging and Communications in Medicine viewer software. Sonographic evaluation of the thickness of the plantar fascia showed high reliability. Sonographic evaluations of the presence or absence of hypoechoic change in the plantar fascia showed surprisingly low agreement. The reliability of gray-scale evaluations appears to be much better than subjective judgments in the evaluation of echogenicity. Transverse scanning did not show any advantage in sonographic evaluation of the plantar fascia. The reliability of sonographic examination of the thickness of the plantar fascia is high. Mean gray-level analysis of quantitative sonography can be used for the evaluation of echogenicity, which could reduce discrepancies in the interpretation of echogenicity by different sonographers. Longitudinal instead of transverse scanning is recommended for imaging the plantar fascia. Copyright © 2011 Wiley Periodicals, Inc.
Analysis of the reliability and reproducibility of goniometry compared to hand photogrammetry
de Carvalho, Rosana Martins Ferreira; Mazzer, Nilton; Barbieri, Claudio Henrique
2012-01-01
Objective: To evaluate the intra- and inter-examiner reliability and reproducibility of goniometry in relation to photogrammetry of the hand, comparing the angles of thumb abduction, PIP joint flexion of the II finger, and MCP joint flexion of the V finger. Methods: The study included 30 volunteers, who were divided into three groups: one group of 10 physiotherapy students, one group of 10 physiotherapists, and a third group of 10 hand therapists. Each examiner performed the measurements on the same hand mold, using the goniometer followed by two photogrammetry software programs, CorelDraw® and ALCimagem®. Results: The results revealed that the groups and the methods proposed presented inter-examiner reliability generally rated as excellent (ICC 0.998; 95% CI 0.995-0.999). In the intra-examiner evaluation, an excellent level of reliability was found among the three groups. In the comparison between groups for each angle and each method, no significant differences were found between the groups for most of the measurements. Conclusion: Goniometry and photogrammetry are reliable and reproducible methods for evaluating measurements of the hand. However, due to the lack of similar references, detailed studies are needed to define the normal parameters between the methods in the joints of the hand. Level of Evidence II, Diagnostic Study. PMID:24453594
Reliability, Safety and Error Recovery for Advanced Control Software
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2003-01-01
For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
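The majority-voting error masking described for SIFT can be sketched in a few lines; the replicated results below are hypothetical:

```python
from collections import Counter

# Sketch of SIFT-style error masking: each computation runs on several
# processors and a majority vote over the replicated results masks a
# faulty unit. Processor outputs here are hypothetical stand-ins.

def majority_vote(results):
    """Return the strict-majority value, or None if there is none."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

# Three replicated results; one processor has produced a wrong value.
masked = majority_vote([42, 42, 17])
```

In the real system the dissenting processor would then be removed from service by reassigning its computations, as the abstract describes.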
Comparison of Eyemetrics and Orbscan automated method to determine horizontal corneal diameter
Venkataraman, Arvind; Mardi, Sapna K; Pillai, Sarita
2010-01-01
Purpose: To compare horizontal corneal diameter measurements using the Orbscan Eyemetrics function and Orbscan corneal topographer. Materials and Methods: Seventy-three eyes of 37 patients were included in the study. In all cases, the automated white-to-white (WTW) measurements were obtained using Orbscan by two observers. Using the Eyemetrics function, the WTW was measured manually by the same observers from limbus to limbus using the digital caliper passing through the five point corneal reflections on the Orbscan real image. The data was analyzed using SPSS software for correlation, reliability and inter-rater repeatability. Results: The mean horizontal corneal diameter was 11.74 ± 0.32mm (SD) with the Orbscan and 11.92 ± 0.33mm (SD) with Eyemetrics Software-based measurement. A good positive correlation (Spearman r = 0.720, P = 0.026) was found between these two measurements. The coefficient of inter-rater repeatability was 0.89 for the Orbscan and 0.94 for the Eyemetrics software measurements on the anterior segment images. The Bland and Altman analysis showed large limits of agreement between Orbscan WTW and Eyemetrics WTW measurements. The intra-session repeatability scores for repeat measurements for the Orbscan WTW and Eyemetrics measurements were good. Conclusion: Eyemetrics can be used to measure WTW and the Eyemetrics measured WTW was longer than the WTW measured by Orbscan. PMID:20413925
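The Bland and Altman analysis used to compare the two methods reduces to a bias and limits-of-agreement computation. A sketch with invented paired WTW readings (not the study's data):

```python
import math

# Bland-Altman limits of agreement between two measurement methods,
# here standing in for Orbscan vs. Eyemetrics WTW readings (invented).
orbscan = [11.5, 11.7, 11.8, 11.6, 11.9, 12.0]
eyemetrics = [11.7, 11.9, 11.9, 11.8, 12.1, 12.3]

diffs = [b - a for a, b in zip(orbscan, eyemetrics)]
bias = sum(diffs) / len(diffs)                     # mean difference
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```

Wide limits of agreement, as the study found, mean the two methods cannot be used interchangeably even when their correlation is good.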
Digital echocardiography 2002: now is the time
NASA Technical Reports Server (NTRS)
Thomas, James D.; Greenberg, Neil L.; Garcia, Mario J.
2002-01-01
The ability to acquire echocardiographic images digitally, store and transfer these data using the DICOM standard, and routinely analyze examinations exists today and allows the implementation of a digital echocardiography laboratory. The purpose of this review article is to outline the critical components of a digital echocardiography laboratory, discuss general strategies for implementation, and put forth some of the pitfalls that we have encountered in our own implementation. The major components of the digital laboratory include (1) digital echocardiography machines with network output, (2) a switched high-speed network, (3) a high throughput server with abundant local storage, (4) a reliable low-cost archive, (5) software to manage information, and (6) support mechanisms for software and hardware. Implementation strategies can vary from a complete vendor solution providing all components (hardware, software, support), to a strategy similar to our own where standard computer and networking hardware are used with specialized software for management of image and measurement information.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
NASA Technical Reports Server (NTRS)
Quintana, Rolando
2003-01-01
The goal of this research was to integrate a previously validated and reliable safety model, called the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a safety management information system (PSMIS). In other words, the theory and principles of the CHTFPM were incorporated into a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies, and to facilitate the handling of the enormous quantities of information generated in such studies. The CHTFPM theory approaches safety engineering from a new perspective: a proactive, rather than a reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. The CHTFPM is thus a predictive safety methodology: it foresees or anticipates accidents, system failures and unacceptable risks, so that corrective action can be taken to prevent them. Consequently, the safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2016-01-01
To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes and to enhance their ability to support reliable software assurance of NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1992-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that the testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents an alternative approach for constructing such models that is intended to fulfill specific software engineering needs (i.e. dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is as follows: (1) to measure the software system to be considered; and (2) to build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA/GSFC into two fault density classes: low and high. Also we evaluate the accuracy of the model and the insights it provides into the software process.
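The classification idea in the two Briand et al. abstracts above can be illustrated with a deliberately simplified stand-in: label a new component as high or low fault density from the label of its nearest historical neighbour in metric space. The metrics (LOC, cyclomatic complexity, change count) and all values below are invented for illustration; this is not the actual Optimized Set Reduction algorithm, which extracts predictive patterns rather than using distances.

```python
# Simplified stand-in for fault-density classification: a 1-nearest-neighbour
# vote over software metrics. All components and metric values are hypothetical.

def classify(component, history):
    """Label a component 'high' or 'low' fault density from its nearest
    historical neighbour in (LOC, cyclomatic complexity, change count) space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(history, key=lambda h: dist(component, h[0]))
    return nearest[1]

# Hypothetical measurement history: ((LOC, complexity, changes), label)
history = [
    ((120, 4, 1), "low"),
    ((150, 6, 2), "low"),
    ((900, 35, 12), "high"),
    ((700, 28, 9), "high"),
]

print(classify((800, 30, 10), history))
print(classify((130, 5, 1), history))
```

In practice such a model would be trained on measured components from past projects, exactly the kind of NASA/GSFC data the abstract describes, and its output used to direct testing effort to the likely high-fault-density components.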
Saad, Karen Ruggeri; Colombo, Alexandra Siqueira; Ribeiro, Ana Paula; João, Sílvia Maria Amado
2012-04-01
The purpose of this study was to investigate the reliability of photogrammetry in the measurement of the postural deviations in individuals with idiopathic scoliosis. Twenty participants with scoliosis (17 women and three men), with a mean age of 23.1 ± 9 yrs, were photographed from the posterior and lateral views. The postural aspects were measured with CorelDRAW software. High inter-rater and test-retest reliability indices were found. It was observed that with more severity of scoliosis, greater were the variations between the thoracic kyphosis and lumbar lordosis measures obtained by the same examiner from the left lateral view photographs. A greater body mass index (BMI) was associated with greater variability of the trunk rotation measures obtained by two independent examiners from the right, lateral view (r = 0.656; p = 0.002). The severity of scoliosis was also associated with greater inter-rater variability measures of trunk rotation obtained from the left, lateral view (r = 0.483; p = 0.036). Photogrammetry demonstrated to be a reliable method for the measurement of postural deviations from the posterior and lateral views of individuals with idiopathic scoliosis and could be complementarily employed for the assessment procedures, which could reduce the number of X-rays used for the follow-up assessments of these individuals. Copyright © 2011 Elsevier Ltd. All rights reserved.
Evaluation of the efficiency and reliability of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1994-01-01
There are numerous studies which show that CASE Tools greatly facilitate software development. As a result of these advantages, an increasing amount of software development is done with CASE Tools. As more software engineers become proficient with these tools, their experience and feedback lead to further development with the tools themselves. What has not been widely studied, however, is the reliability and efficiency of the actual code produced by the CASE Tools. This investigation considered these matters. Three segments of code generated by MATRIXx, one of many commercially available CASE Tools, were chosen for analysis: ETOFLIGHT, a portion of the Earth to Orbit Flight software, and ECLSS and PFMC, modules for Environmental Control and Life Support System and Pump Fan Motor Control, respectively.
NASA software specification and evaluation system design, part 2
NASA Technical Reports Server (NTRS)
1976-01-01
A survey and analysis of the existing methods, tools and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for the software specification language and the database verifier are presented.
Effective Software Engineering Leadership for Development Programs
ERIC Educational Resources Information Center
Cagle West, Marsha
2010-01-01
Software is a critical component of systems ranging from simple consumer appliances to complex health, nuclear, and flight control systems. The development of quality, reliable, and effective software solutions requires the incorporation of effective software engineering processes and leadership. Processes, approaches, and methodologies for…
Grooten, Wilhelmus Johannes Andreas; Sandberg, Lisa; Ressman, John; Diamantoglou, Nicolas; Johansson, Elin; Rasmussen-Barr, Eva
2018-01-08
Clinical examinations are subjective and often show low validity and reliability. Objective and highly reliable quantitative assessments are available in laboratory settings using 3D motion analysis, but these systems are too expensive to use for simple clinical examinations. Qinematic™ is an interactive movement analysis system based on the Kinect camera and is an easy-to-use clinical measurement system for assessing posture, balance and side-bending. The aim of the study was to test the test-retest reliability and construct validity of Qinematic™ in a healthy population, and to calculate the minimal clinical differences for the variables of interest. A further aim was to identify the discriminative validity of Qinematic™ in people with low-back pain (LBP). We performed a test-retest reliability study (n = 37) with around 1 week between the occasions, a construct validity study (n = 30) in which Qinematic™ was tested against a 3D motion capture system, and a discriminative validity study, in which a group of people with LBP (n = 20) was compared to healthy controls (n = 17). We tested a large range of psychometric properties of 18 variables in three sections: posture (head and pelvic position, weight distribution), balance (sway area and velocity in single- and double-leg stance), and side-bending. The majority of the variables in the posture and balance sections showed poor/fair reliability (ICC < 0.4) and poor/fair validity (Spearman < 0.4), with significant differences between occasions and between Qinematic™ and the 3D motion capture system. In the clinical study, Qinematic™ did not differ between people with LBP and healthy controls for these variables. For one variable, side-bending to the left, there was excellent reliability (ICC = 0.898), excellent validity (r = 0.943), and Qinematic™ could differentiate between people with LBP and healthy individuals (p = 0.012).
This paper shows that a novel software program (Qinematic™) based on the Kinect camera for measuring balance, posture and side-bending has poor psychometric properties, indicating that the variables on balance and posture should not be used for monitoring individual changes over time or in research. Future research on the dynamic tasks of Qinematic™ is warranted.
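The test-retest results above hinge on the intraclass correlation coefficient. A minimal sketch of one common form, ICC(1,1) from a one-way analysis of variance, follows; the session scores are made-up illustration data, not the study's measurements, and the study's own ICC model may differ.

```python
# Test-retest agreement as a one-way intraclass correlation, ICC(1,1):
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
# between- and within-subject mean squares and k is the sessions per subject.

def icc_1_1(scores):
    n = len(scores)            # subjects
    k = len(scores[0])         # sessions per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two sessions per subject; scores nearly repeat, so agreement is high
scores = [(10.0, 10.2), (12.1, 12.0), (8.9, 9.1), (11.5, 11.4)]
print(round(icc_1_1(scores), 3))
```

Values near 1 indicate that between-subject differences dominate measurement noise, which is why the side-bending ICC of 0.898 reads as excellent while the posture and balance ICCs below 0.4 read as poor.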
Rasch analysis of Stamps's Index of Work Satisfaction in nursing population.
Ahmad, Nora; Oranye, Nelson Ositadimma; Danilov, Alyona
2017-01-01
One of the most commonly used tools for measuring job satisfaction in nursing is Stamps's Index of Work Satisfaction. Several studies have reported on the reliability of Stamps's tool based on traditional statistical models. The aim of this study was to apply the Rasch model to examine the adequacy of Stamps's Index of Work Satisfaction for measuring nurses' job satisfaction cross-culturally, and to determine the validity and reliability of the instrument using the Rasch criteria. A secondary data analysis was conducted on a sample of 556 registered nurses from two countries. The RUMM 2030 software was used to analyse the psychometric properties of the Index of Work Satisfaction. The person mean location of -0.018 approximated the item mean of 0.00, suggesting a good alignment of the measure and the traits being measured. However, at the item level, some items were misfitting to the Rasch model.
NASA Astrophysics Data System (ADS)
Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens
2015-04-01
Recent investments in HPC, cloud and Petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly, software is available via open source repositories, but these usually only enable code to be discovered and downloaded. It is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund its development, to gain credit for the effort, IP, time and dollars spent, and will facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected, components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process should include information about licensing and the hardware environments the code can run on, define appropriate validation (testing) procedures, and list the critical dependencies.
2) The Review component targets verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer-review forums such as Mozilla Science or appropriate journals (e.g. Geoscientific Model Development) to help users know which codes to trust. 3) Referencing will be accomplished by linking the software framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review, and other relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that automatically capture information relating to a particular run of the software, including identification of all input and output artefacts and all elements and transactions within the workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it, and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally and internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.
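The Register step could capture a record along the following lines. The field names and the example entry are purely illustrative assumptions, not the framework's actual schema, and the repository URL is a placeholder.

```python
# Sketch of a registration record for the proposed trusted software framework.
# All field names and values are illustrative, not the framework's real schema.
from dataclasses import dataclass, field

@dataclass
class SoftwareRegistration:
    name: str
    repository_url: str
    license: str                                        # e.g. "Apache-2.0"
    environments: list = field(default_factory=list)    # hardware it runs on
    dependencies: dict = field(default_factory=dict)    # package -> version req
    validation_suite: str = ""                          # how to run benchmarks

entry = SoftwareRegistration(
    name="example-solver",                              # hypothetical code
    repository_url="https://example.org/solver.git",    # placeholder URL
    license="Apache-2.0",
    environments=["x86_64 cluster", "cloud VM"],
    dependencies={"python": ">=3.8", "numpy": ">=1.20"},
    validation_suite="run_benchmarks.sh",
)
print(entry.license, entry.dependencies["numpy"])
```

A record like this carries exactly the metadata the abstract says is usually missing from open source repositories: licensing, target environments, critical dependencies and a defined validation procedure.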
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
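The core estimation step described above, obtaining transition probabilities from an observed sequence of error states, can be sketched as a count-and-normalise procedure. The state names and the observation sequence below are synthetic; the experiment's actual states model the error behaviour of the multiple-version programs.

```python
# Estimate Markov-chain transition probabilities from an observed sequence of
# states by counting transitions and normalising each row.
from collections import Counter, defaultdict

def estimate_transitions(sequence):
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

# Synthetic states: 'ok' = versions agree, 'err1' = one version in error
observed = ["ok", "ok", "err1", "ok", "ok", "ok", "err1", "err1", "ok", "ok"]
P = estimate_transitions(observed)
print(P["ok"], P["err1"])
```

In the validation experiment these estimates, together with confidence intervals on each transition probability, feed the expanded reliability model that accounts for the inertia of the controlled system.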
A Bayesian modification to the Jelinski-Moranda software reliability growth model
NASA Technical Reports Server (NTRS)
Littlewood, B.; Sofer, A.
1983-01-01
The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood (ML) method of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian Jelinski-Moranda (BJM) model is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in the assumptions underlying both models, to represent the debugging process more accurately, is discussed.
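For reference, the JM model assumes the hazard rate after the (i-1)th repair is phi*(N - i + 1), where N is the initial number of faults and phi the per-fault failure rate. A minimal sketch of the ML fit the abstract criticises follows, using the closed-form phi for each candidate N and a grid search over N; the inter-failure times are synthetic, not the paper's datasets.

```python
# Maximum likelihood fit of the Jelinski-Moranda model: inter-failure time i
# (1-based) is exponential with rate phi * (N - i + 1).
import math

def jm_loglik(N, phi, times):
    # enumerate from 0, so N - i equals N - (i+1) + 1 in 1-based terms
    return sum(math.log(phi * (N - i)) - phi * (N - i) * t
               for i, t in enumerate(times))

def jm_fit(times, n_max=100):
    n = len(times)
    best = None
    for N in range(n, n_max + 1):
        # For fixed N, the ML estimate of phi has a closed form
        phi = n / sum((N - i) * t for i, t in enumerate(times))
        ll = jm_loglik(N, phi, times)
        if best is None or ll > best[0]:
            best = (ll, N, phi)
    return best[1], best[2]

times = [0.5, 0.8, 1.1, 1.9, 3.5, 6.0]   # synthetic, growing inter-failure times
N_hat, phi_hat = jm_fit(times)
print(N_hat, phi_hat)
```

The instability of exactly this estimate of N on real data, which can even diverge to the search boundary, is the behaviour that motivates the paper's reparameterization and Bayesian treatment.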
ImageJ: A Free, Easy, and Reliable Method to Measure Leg Ulcers Using Digital Pictures.
Aragón-Sánchez, Javier; Quintana-Marrero, Yurena; Aragón-Hernández, Cristina; Hernández-Herero, María José
2017-12-01
Wound measurement to document the healing course of chronic leg ulcers has an important role in the management of these patients. Digital cameras in smartphones are readily available and easy to use, and taking pictures of wounds is becoming routine in specialized departments. Analyzing digital pictures with appropriate software gives clinicians a quick, clean, and easy-to-use tool for measuring wound area. A set of 25 digital pictures of plain foot and leg ulcers was the basis of this study. Photographs were taken by placing a ruler next to the wound in parallel with the healthy skin, with the iPhone 6S (Apple Inc, Cupertino, CA), which has a 12-megapixel camera, using the flash. The digital photographs were visualized with ImageJ 1.45s freeware (National Institutes of Health, Rockville, MD; http://imagej.net/ImageJ). Wound area measurement was carried out by 4 raters: head of the department, wound care nurse, physician, and medical student. We assessed intra- and interrater reliability using the intraclass correlation coefficient. To determine intraobserver reliability, 2 of the raters repeated the measurement of the set 1 week after the first reading. The interrater model displayed an intraclass correlation coefficient of 0.99 with a 95% confidence interval of 0.999 to 1.000, showing excellent reliability. The intrarater model of both examiners showed excellent reliability. In conclusion, analyzing digital images of leg ulcers with ImageJ estimates wound area with excellent reliability. This method provides a free, rapid, and accurate way to measure wounds and could routinely be used to document wound healing in daily clinical practice.
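The area computation underlying this kind of workflow reduces to calibrating pixels against the ruler in the photograph. A minimal sketch with invented numbers (not the study's data):

```python
# Wound area from a traced pixel region: a scale in pixels per cm is read off
# the ruler in the photograph, and area scales with its square.

def area_cm2(wound_pixels, pixels_per_cm):
    return wound_pixels / (pixels_per_cm ** 2)

scale = 118.0          # pixels per cm, calibrated against the ruler (invented)
traced = 41_300        # pixels inside the traced wound outline (invented)
print(round(area_cm2(traced, scale), 2))  # -> 2.97
```

Placing the ruler in the plane of the wound, as the protocol specifies, is what makes this single scale factor valid across the image.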
Rey-Martinez, Jorge; Pérez-Fernández, Nicolás
2016-12-01
The objective was to develop and validate posturography software and share its source code in open source terms. In a prospective non-randomized validation study, 20 consecutive adults underwent two balance assessment tests: six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open source software (RombergLab) with a low-cost force platform. The intra-class correlation coefficient of the sway area obtained from the center of pressure variations on both devices across the six conditions was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94), reaching the proposed validation goal of 0.9. A Bland and Altman graphic concordance plot was also obtained. With these results we consider the developed software (RombergLab) a validated balance assessment software, whose reliability depends on the technical specifications of the force platform used. The source code used to develop RombergLab was published in open source terms.
Note: A simple image processing based fiducial auto-alignment method for sample registration.
Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne
2015-08-01
A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
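The locate-and-measure step can be sketched as centroid and equivalent-diameter extraction from a thresholded binary image. The tiny image below is synthetic; the actual implementation operates on camera frames through MATLAB/LabVIEW image processing.

```python
# Locate a circular fiducial in a thresholded image: centroid from the mean of
# foreground pixel coordinates, diameter from the equal-area circle.
import math

def find_fiducial(binary):
    pts = [(r, c) for r, row in enumerate(binary)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    cr = sum(r for r, _ in pts) / n
    cc = sum(c for _, c in pts) / n
    diameter = 2 * math.sqrt(n / math.pi)   # circle with the same pixel area
    return (cr, cc), diameter

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
center, diam = find_fiducial(img)
print(center, round(diam, 2))
```

With the fiducial's known physical diameter, the measured pixel diameter gives the distance self-calibration, and the offset of the centroid from the target position drives the iterative alignment.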
Modal Analysis for Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
The MANGO software provides a solution for improving the small signal stability of power systems by adjusting operator-controllable variables using PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The MANGO software aims to solve this problem by automatically generating recommended operating procedures, termed Modal Analysis for Grid Operation (MANGO), to improve the damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small signal stability problems, implementing operating point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.
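The first step of the procedure, recognizing a poorly damped mode, can be illustrated with the standard damping-ratio formula for a mode eigenvalue sigma + j*omega. The eigenvalue and the 5% concern threshold below are illustrative conventions, not MANGO internals.

```python
# Damping ratio of an oscillation mode with eigenvalue sigma + j*omega:
# zeta = -sigma / sqrt(sigma^2 + omega^2). Low zeta flags a stability concern.
import math

def damping_ratio(sigma, omega):
    return -sigma / math.sqrt(sigma ** 2 + omega ** 2)

# An illustrative 0.25 Hz inter-area mode (omega = 2*pi*0.25) with weak damping
zeta = damping_ratio(-0.05, 2 * math.pi * 0.25)
print(round(100 * zeta, 1), "% damping")
```

Modes flagged this way would then be addressed by the second step, shifting the operating point along the direction suggested by modal sensitivity, and the damping ratio recomputed to evaluate the adjustment.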
Study of a unified hardware and software fault-tolerant architecture
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart
1989-01-01
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
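As context for the confidence voter mentioned above, the baseline N-version mechanism is a majority vote over the versions' outputs. A minimal sketch with a tolerance for floating-point disagreement follows; the paper's confidence voter, which resolves coincident errors, is more elaborate than this.

```python
# Baseline N-version majority voter: cluster outputs that agree within a
# tolerance and accept the cluster holding a strict majority.

def majority_vote(outputs, tol=1e-6):
    groups = []                          # clusters of mutually agreeing outputs
    for v in outputs:
        for g in groups:
            if abs(v - g[0]) <= tol:
                g.append(v)
                break
        else:
            groups.append([v])
    best = max(groups, key=len)
    if len(best) * 2 > len(outputs):     # strict majority required
        return sum(best) / len(best)
    raise ValueError("no majority among versions")

# Four hypothetical versions of a control law; one disagrees
print(majority_vote([0.731, 0.731, 0.731, 0.562]))
```

A coincident error, in which two or more versions fail with the same wrong value, defeats this simple scheme; that is the failure mode the FTP-AP confidence voter is designed to detect.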
HALO--a Java framework for precise transcript half-life determination.
Friedel, Caroline C; Kaufmann, Stefanie; Dölken, Lars; Zimmer, Ralf
2010-05-01
Recent improvements in experimental technologies now allow measurements of de novo transcription and/or RNA decay at whole transcriptome level and determination of precise transcript half-lives. Such transcript half-lives provide important insights into the regulation of biological processes and the relative contributions of RNA decay and de novo transcription to differential gene expression. In this article, we present HALO (Half-life Organizer), the first software for the precise determination of transcript half-lives from measurements of RNA de novo transcription or decay determined with microarrays or RNA-seq. In addition, methods for quality control, filtering and normalization are supplied. HALO provides a graphical user interface, command-line tools and a well-documented Java application programming interface (API). Thus, it can be used both by biologists to determine transcript half-lives fast and reliably with the provided user interfaces as well as software developers integrating transcript half-life analysis into other gene expression profiling pipelines. Source code, executables and documentation are available at http://www.bio.ifi.lmu.de/software/halo.
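The core quantity here, a transcript half-life from decay measurements, can be sketched as a log-linear least-squares fit followed by t_half = ln(2)/lambda. The abundance values below are synthetic, and HALO's actual estimators also model de novo transcription, quality control and normalization.

```python
# Transcript half-life from RNA decay: fit ln(A_t) = ln(A_0) - lambda * t by
# least squares on the log abundances, then t_half = ln(2) / lambda.
import math

def half_life(timepoints, abundances):
    logs = [math.log(a) for a in abundances]
    n = len(timepoints)
    tbar = sum(timepoints) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(timepoints, logs))
             / sum((t - tbar) ** 2 for t in timepoints))
    return math.log(2) / -slope

t = [0, 1, 2, 4]                 # hours after transcription blockage (synthetic)
a = [100.0, 70.7, 50.0, 25.0]    # abundances decaying with a ~2 h half-life
print(round(half_life(t, a), 2))
```

Comparing half-lives fitted this way across conditions is what separates the contributions of RNA decay and de novo transcription to differential gene expression.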
Shenouda, Ninette; Proudfoot, Nicole A; Currie, Katharine D; Timmons, Brian W; MacDonald, Maureen J
2018-05-01
Many commercial ultrasound systems are now including automated analysis packages for the determination of carotid intima-media thickness (cIMT); however, details regarding their algorithms and methodology are not published. Few studies have compared their accuracy and reliability with previously established automated software, and those that have were in asymptomatic adults. Therefore, this study compared cIMT measures from a fully automated ultrasound edge-tracking software (EchoPAC PC, Version 110.0.2; GE Medical Systems, Horten, Norway) to an established semi-automated reference software (Artery Measurement System (AMS) II, Version 1.141; Gothenburg, Sweden) in 30 healthy preschool children (ages 3-5 years) and 27 adults with coronary artery disease (CAD; ages 48-81 years). For both groups, Bland-Altman plots revealed good agreement with a negligible mean cIMT difference of -0·03 mm. Software differences were statistically, but not clinically, significant for preschool images (P = 0·001) and were not significant for CAD images (P = 0·09). Intra- and interoperator repeatability was high and comparable between software for preschool images (ICC, 0·90-0·96; CV, 1·3-2·5%), but slightly higher with the automated ultrasound than the semi-automated reference software for CAD images (ICC, 0·98-0·99; CV, 1·4-2·0% versus ICC, 0·84-0·89; CV, 5·6-6·8%). These findings suggest that the automated ultrasound software produces valid cIMT values in healthy preschool children and adults with CAD. Automated ultrasound software may be useful for ensuring consistency among multisite research initiatives or large cohort studies involving repeated cIMT measures, particularly in adults with documented CAD. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
[Determination of radioactivity by smartphones].
Hartmann, H; Freudenberg, R; Andreeff, M; Kotzerke, J
2013-01-01
The interest in the detection of radioactive materials has strongly increased after the accident at the Fukushima nuclear power plant and has led to a shortage of suitable measuring instruments. Smartphones equipped with a commercially available software tool can be used for dose rate measurements following a calibration specific to the camera module. We examined whether such measurements provide reliable data for typical activities and radionuclides in nuclear medicine. For the nuclides 99mTc (10 - 1000 MBq), 131I (3.7 - 1800 MBq, therapy capsule) and 68Ga (50 - 600 MBq), radioactivity with defined geometry was measured at different distances. The smartphones Milestone Droid 1 (Motorola) and HTC Desire (HTC Corporation) were compared with the standard instruments AD6 (automess) and DoseGUARD (AEA Technology). Measurements with the smartphones and the other devices show good agreement: a linear signal increase with rising activity and dose rate. A long-time measurement (131I, 729 MBq, 0.5 m, 60 min) demonstrated a considerably higher variation (by 20%) of the measured smartphone values compared with the AD6. For low dose rates (< 1 µGy/h), the sensitivity decreases, so that measurements of e.g. the natural radiation exposure do not yield valid results. The calibration of the camera responsivity of the smartphone has a large influence on the results because of the small detector surface of the camera semiconductor. With commercial software, the camera module of a smartphone can be used for the measurement of radioactivity. Dose rates resulting from typical nuclear medicine procedures can be measured reliably (e.g., the dismissal dose after radioiodine therapy).
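The calibration the abstract describes amounts to fitting a line from camera signal to dose rate. A sketch with invented calibration points (not the paper's measurements):

```python
# Linear calibration of a camera-based dosimeter: fit dose rate (uGy/h) as a
# linear function of camera counts/s by least squares, then apply the fit.

def fit_line(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope, ybar - slope * xbar

# Invented calibration: counts/s observed at known reference dose rates
counts = [2.0, 10.0, 20.0, 40.0]
dose = [5.0, 25.0, 50.0, 100.0]
gain, offset = fit_line(counts, dose)

print(gain * 16.0 + offset)   # estimated uGy/h for a reading of 16 counts/s
```

Because the gain depends on the individual camera module, such a calibration has to be repeated per device, which is the sensitivity the abstract highlights.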
Assessing Survivability Using Software Fault Injection
2001-04-01
UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADPO10875. TITLE: Assessing Survivability Using Software Fault Injection. Jeffrey Voas, Reliable Software Technologies, 21351 Ridgetop Circle, #400, Dulles, VA 20166.
Farris, Dominic James; Lichtwark, Glen A
2016-05-01
Dynamic measurements of human muscle fascicle length from sequences of B-mode ultrasound images have become increasingly prevalent in biomedical research. Manual digitisation of these images is time consuming and algorithms for automating the process have been developed. Here we present a freely available software implementation of a previously validated algorithm for semi-automated tracking of muscle fascicle length in dynamic ultrasound image recordings, "UltraTrack". UltraTrack implements an affine extension to an optic flow algorithm to track movement of the muscle fascicle end-points throughout dynamically recorded sequences of images. The underlying algorithm has been previously described and its reliability tested, but here we present the software implementation with features for: tracking multiple fascicles in multiple muscles simultaneously; correcting temporal drift in measurements; manually adjusting tracking results; saving and re-loading of tracking results and loading a range of file formats. Two example runs of the software are presented detailing the tracking of fascicles from several lower limb muscles during a squatting and walking activity. We have presented a software implementation of a validated fascicle-tracking algorithm and made the source code and standalone versions freely available for download. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
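The geometry behind the tracking can be sketched in a few lines: fascicle end-points are propagated frame-to-frame by an affine motion estimate, and fascicle length is the distance between them. The affine matrix below is hand-written to stand in for the optic-flow estimate, which UltraTrack derives from the image data.

```python
# Propagate fascicle end-points with an affine transform p' = A p + t, then
# measure fascicle length as the end-point distance. Coordinates are in mm.
import math

def apply_affine(p, A, t):
    x, y = p
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

def fascicle_length(p1, p2):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

prox, dist = (10.0, 40.0), (70.0, 10.0)   # end-points in frame 1 (illustrative)
A = [[0.95, 0.0], [0.0, 1.0]]             # slight horizontal shortening
t = (1.0, 0.0)                            # small translation

prox2, dist2 = apply_affine(prox, A, t), apply_affine(dist, A, t)
print(round(fascicle_length(prox, dist), 1),
      round(fascicle_length(prox2, dist2), 1))
```

Repeating this over a recorded sequence yields the fascicle-length time series; the drift-correction and manual-adjustment features in the software exist because small per-frame errors in the motion estimate accumulate.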
Bian, Lin
2012-01-01
In clinical practice, hearing thresholds are measured at only five to six frequencies at octave intervals. Thus, the audiometric configuration cannot closely reflect the actual status of the auditory structures. In addition, differential diagnosis requires quantitative comparison of behavioral thresholds with physiological measures, such as otoacoustic emissions (OAEs), which are usually measured at higher resolution. The purpose of this research was to develop a method to improve the frequency resolution of the audiogram. A repeated-measures design was used in the study to evaluate the reliability of the threshold measurements. A total of 16 participants with clinically normal hearing and mild hearing loss were recruited from a population of university students. No intervention was involved in the study. A custom-developed system and software were used for threshold acquisition with quality control (QC). With real-ear calibration and monitoring of test signals, the system provided accurate and individualized measures of hearing thresholds, which were determined by an analysis based on signal detection theory (SDT). The reliability of the threshold measure was assessed by correlation and differences between the repeated measures. The audiometric configurations were diverse and unique to each individual ear. The accuracy, within-subject reliability, and between-test repeatability were relatively high. With QC, high-resolution audiograms can be reliably and accurately measured. Hearing thresholds measured as ear canal sound pressures with higher frequency resolution can provide more customized hearing-aid fitting. The test system may be integrated with other physiological measures, such as OAEs, into a comprehensive evaluative tool. American Academy of Audiology.
ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.
Chen, Xinfeng; Li, Haohong
2017-01-01
Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform), an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, ArControl provided a stand-alone, intuitive GUI (graphical user interface) application that did not require users to master scripts. Experiment data were automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl using strict performance measurements and two mouse behavioral experiments. The results showed that ArControl is an adaptive and reliable system suitable for behavioral research.
Derrick, Sharon M; Raxter, Michelle H; Hipp, John A; Goel, Priya; Chan, Elaine F; Love, Jennifer C; Wiersema, Jason M; Akella, N Shastry
2015-01-01
Medical examiners and coroners (ME/C) in the United States hold statutory responsibility to identify deceased individuals who fall under their jurisdiction. The computer-assisted decedent identification (CADI) project was designed to modify software used in diagnosis and treatment of spinal injuries into a mathematically validated tool for ME/C identification of fleshed decedents. CADI software analyzes the shapes of targeted vertebral bodies imaged in an array of standard radiographs and quantifies the likelihood that any two of the radiographs contain matching vertebral bodies. Six validation tests measured the repeatability, reliability, and sensitivity of the method, and the effects of age, sex, and number of radiographs in array composition. CADI returned a 92-100% success rate in identifying the true matching pair of vertebrae within arrays of five to 30 radiographs. Further development of CADI is expected to produce a novel identification method for use in ME/C offices that is reliable, timely, and cost-effective. © 2014 American Academy of Forensic Sciences.
Software IV and V Research Priorities and Applied Program Accomplishments Within NASA
NASA Technical Reports Server (NTRS)
Blazy, Louis J.
2000-01-01
The mission of this research is to be world-class creators and facilitators of innovative, intelligent, high-performance, reliable information technologies that enable NASA missions to (1) increase software safety and quality through error avoidance, early detection and resolution of errors, by utilizing and applying empirically based software engineering best practices; (2) ensure customer software risks are identified and/or that requirements are met and/or exceeded; (3) research, develop, apply, verify, and publish software technologies for competitive advantage and the advancement of science; and (4) facilitate the transfer of science and engineering data, methods, and practices to NASA, educational institutions, state agencies, and commercial organizations. The goals are to become a national Center of Excellence (COE) in software and system independent verification and validation, and to become an international leading force in the field of software engineering for improving the safety, quality, reliability, and cost performance of software systems. This project addresses the following problems: ensuring the safety of NASA missions, ensuring requirements are met, minimizing programmatic and technological risks of software development and operations, improving software quality, reducing costs and time to delivery, and improving the science of software engineering.
Analyzing Responses of Chemical Sensor Arrays
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
NASA is developing a third-generation electronic nose (ENose) capable of continuous monitoring of the International Space Station's cabin atmosphere for specific, harmful airborne contaminants. Previous generations of the ENose have been described in prior NASA Tech Briefs issues. Sensor selection is critical in both (prefabrication) sensor material selection and (post-fabrication) data analysis of the ENose, which detects several analytes that are difficult to detect, or that are at very low concentration ranges. Existing sensor selection approaches usually include limited statistical measures, where selectivity is more important but reliability and sensitivity are not of concern. When reliability and sensitivity can be major limiting factors in detecting target compounds reliably, the existing approach is not able to provide meaningful selection that will actually improve data analysis results. The approach and software reported here consider more statistical measures (factors) than existing approaches for a similar purpose. The result is a more balanced and robust sensor selection from a less than ideal sensor array. The software offers quick, flexible, optimal sensor selection and weighting for a variety of purposes without a time-consuming, iterative search by performing sensor calibrations to a known linear or nonlinear model, evaluating the individual sensor's statistics, scoring the individual sensor's overall performance, finding the best sensor array size to maximize class separation, finding optimal weights for the remaining sensor array, estimating limits of detection for the target compounds, evaluating fingerprint distance between group pairs, and finding the best event-detecting sensors.
Simpson, V; Hughes, M; Wilkinson, J; Herrick, A L; Dinsdale, G
2018-03-01
Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm² and 17.6 ± 19.3 mm², respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. © 2017, American College of Rheumatology.
Software metrics: Software quality metrics for distributed systems. [reliability engineering
NASA Technical Reports Server (NTRS)
Post, J. V.
1981-01-01
Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects
ERIC Educational Resources Information Center
Buffardi, Kevin John
2014-01-01
Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…
NASA Technical Reports Server (NTRS)
Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)
2001-01-01
The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principle goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks); some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC) which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for design, development, implementation, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. 
The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall scheme for engine controller development is further improved and vehicle safety is further ensured. The final product that this paper proposes is an approach to development of an alternative low-cost engine controller that would be capable of performing in unique vision spacecraft vehicles requiring low-cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.
A general software reliability process simulation technique
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1991-01-01
The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
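The simulator described above generates timelines of injected anomalies and the progress of testing and repair. A minimal toy sketch of such a reliability-process simulation (not Tausworthe's actual program; all parameter names and the detection model are illustrative assumptions) might look like:

```python
import random

def simulate_defect_timeline(injected=100, detect_prob=0.05,
                             intervals=60, seed=42):
    """Toy reliability-process simulation: `injected` defects exist at
    the start of testing; in each test interval every remaining defect
    is independently detected (and immediately repaired) with
    probability `detect_prob`. Returns the count of defects still
    latent after each interval, i.e. a simulated status timeline."""
    rng = random.Random(seed)  # fixed seed -> reproducible timeline
    remaining = injected
    timeline = []
    for _ in range(intervals):
        found = sum(1 for _ in range(remaining) if rng.random() < detect_prob)
        remaining -= found
        timeline.append(remaining)
    return timeline
```

Comparing such simulated timelines against a project's actual defect-discovery data is one way to sanity-check assumed input parameters, in the spirit of the abstract.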
Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat
2018-01-01
We applied a new approach to Generalizability theory (G-theory) involving parallel splits and repeated measures to evaluate common uses of the Paulhus Deception Scales based on polytomous and four types of dichotomous scoring. G-theory indices of reliability and validity accounting for specific-factor, transient, and random-response measurement error supported use of polytomous over dichotomous scores as contamination checks; as control, explanatory, and outcome variables; as aspects of construct validation; and as indexes of environmental effects on socially desirable responding. Polytomous scoring also provided results for flagging faking as dependable as those when using dichotomous scoring methods. These findings argue strongly against the nearly exclusive use of dichotomous scoring for the Paulhus Deception Scales in practice and underscore the value of G-theory in demonstrating this. We provide guidelines for applying our G-theory techniques to other objectively scored clinical assessments, for using G-theory to estimate how changes to a measure might improve reliability, and for obtaining software to conduct G-theory analyses free of charge.
SmartGrain: high-throughput phenotyping software for measuring seed shape through image analysis.
Tanabata, Takanari; Shibaya, Taeko; Hori, Kiyosumi; Ebana, Kaworu; Yano, Masahiro
2012-12-01
Seed shape and size are among the most important agronomic traits because they affect yield and market price. To obtain accurate seed size data, a large number of measurements are needed because there is little difference in size among seeds from one plant. To promote genetic analysis and selection for seed shape in plant breeding, efficient, reliable, high-throughput seed phenotyping methods are required. We developed SmartGrain software for high-throughput measurement of seed shape. This software uses a new image analysis method to reduce the time taken in the preparation of seeds and in image capture. Outlines of seeds are automatically recognized from digital images, and several shape parameters, such as seed length, width, area, and perimeter length, are calculated. To validate the software, we performed a quantitative trait locus (QTL) analysis for rice (Oryza sativa) seed shape using backcrossed inbred lines derived from a cross between japonica cultivars Koshihikari and Nipponbare, which showed small differences in seed shape. SmartGrain removed areas of awns and pedicels automatically, and several QTLs were detected for six shape parameters. The allelic effect of a QTL for seed length detected on chromosome 11 was confirmed in advanced backcross progeny; the cv Nipponbare allele increased seed length and, thus, seed weight. High-throughput measurement with SmartGrain reduced sampling error and made it possible to distinguish between lines with small differences in seed shape. SmartGrain could accurately recognize seed not only of rice but also of several other species, including Arabidopsis (Arabidopsis thaliana). The software is free to researchers.
Advances in the production of freeform optical surfaces
NASA Astrophysics Data System (ADS)
Tohme, Yazid E.; Luniya, Suneet S.
2007-05-01
Recent market demands for free-form optics have challenged the industry to find new methods and techniques to manufacture free-form optical surfaces with a high level of accuracy and reliability. Production techniques are becoming a mix of multi-axis single point diamond machining centers or deterministic ultra precision grinding centers coupled with capable measurement systems to accomplish the task. It has been determined that a complex software tool is required to seamlessly integrate all aspects of the manufacturing process chain. Advances in computational power and improved performance of computer controlled precision machinery have driven the use of such software programs to measure, visualize, analyze, produce and re-validate the 3D free-form design thus making the process of manufacturing such complex surfaces a viable task. Consolidation of the entire production cycle in a comprehensive software tool that can interact with all systems in design, production and measurement phase will enable manufacturers to solve these complex challenges providing improved product quality, simplified processes, and enhanced performance. The work being presented describes the latest advancements in developing such software package for the entire fabrication process chain for aspheric and free-form shapes. It applies a rational B-spline based kernel to transform an optical design in the form of parametrical definition (optical equation), standard CAD format, or a cloud of points to a central format that drives the simulation. This software tool creates a closed loop for the fabrication process chain. It integrates surface analysis and compensation, tool path generation, and measurement analysis in one package.
Kim, Jong Bae; Brienza, David M
2006-01-01
A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models by analyzing the accuracy of dimensional measurements in a virtual environment and a comparison of dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.
Rainsford, M; Palmer, M A; Paine, G
2018-04-01
Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.
Hierarchical specification of the SIFT fault tolerant flight control system
NASA Technical Reports Server (NTRS)
Melliar-Smith, P. M.; Schwartz, R. L.
1981-01-01
The specification and mechanical verification of the Software Implemented Fault Tolerance (SIFT) flight control system is described. The methodology employed in the verification effort is discussed, and a description of the hierarchical models of the SIFT system is given. To meet NASA's objective for the reliability of safety-critical flight control systems, the SIFT computer must achieve a reliability well beyond the levels at which reliability can actually be measured. The methodology employed to demonstrate rigorously that the SIFT computer meets its reliability requirements is described. The hierarchy of design specifications, from very abstract descriptions of system function down to the actual implementation, is explained. The most abstract design specifications can be used to verify that the system functions correctly and with the desired reliability, since almost all details of the realization were abstracted out. A succession of lower-level models refines these specifications to the level of the actual implementation, and can be used to demonstrate that the implementation has the properties claimed of the abstract design specifications.
Kanter, Rebecca; Alvey, Jeniece; Fuentes, Deborah
2014-09-01
Consumer nutrition environment measures are important to understanding the food environment, which affects individual dietary intake. A nutrition environment measures survey for supermarkets (NEMS-S) has been designed on paper for use in Guatemala. However, a paper survey is not an inconspicuous data collection method. To design, pilot test, and validate the Guatemala NEMS-S in the form of a mobile phone application (mobile app). CommCare, a free and open-source software application, was used to design the NEMS-S for Guatemala in the form of a mobile app. Two raters tested the mobile app in a single Guatemalan supermarket. Both the interrater and the test-retest reliability of the mobile app were determined using percent agreement and Cohen's kappa score and compared with the interrater and test-retest reliability of the paper version. Interrater reliability was very high between the paper survey and the mobile app (Cohen's kappa > 0.90). Test-retest reliability ranged from kappa 0.78 to 0.91. Between two certified NEMS-S raters, survey completion time using the mobile app was 5 minutes less than that with the paper form (35 vs. 40 minutes). The NEMS-S mobile app provides for more rapid data collection, with equivalent reliability and validity to the NEMS-S paper version, with advantages over a paper-based survey of multiple language capability and concomitant data entry.
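The interrater and test-retest agreement reported above is expressed as Cohen's kappa. The routine below is the standard kappa computation for two raters' categorical judgments (an illustrative sketch; it is not the CommCare app's code):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

Kappa of 1.0 means perfect agreement; values above roughly 0.9, as reported for the paper-versus-app comparison, indicate near-perfect agreement.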
Test-retest reliability of posture measurements in adolescents with idiopathic scoliosis.
Heitz, Pierre-Henri; Aubin-Fournier, Jean-François; Parent, Éric; Fortin, Carole
2018-05-07
Posture changes are a major consequence of idiopathic scoliosis (IS) and can lead to psychosocial and physical impairments in adolescents with IS. It is therefore important to assess posture, but the test-retest reliability of posture measurements remains unknown in this population. The primary objective was to determine the test-retest reliability of 25 head and trunk posture indices using the Clinical Photographic Postural Assessment Tool (CPPAT) in adolescents with IS. The secondary objective was to determine the standard error of measurement and the minimal detectable change. This is a prospective test-retest reliability study carried out at two tertiary university hospital centers. Forty-one adolescents with IS, aged 10 to 16 years old with curves of 10° to 45° and treated non-operatively, were recruited. Two posture assessments were done using the CPPAT five to 10 days apart following a standardized procedure. Photographs were analyzed with the CPPAT software by digitizing reference landmarks placed on the participant by a physiotherapist evaluator. Generalizability theory was used to obtain a coefficient of dependability (ϕ), the standard error of measurement, and the minimal detectable change at the 90% confidence interval (MDC90). This project was supported by the Canadian Pediatric Spine Society (CPSS: $10,000). There are no study-specific conflict-of-interest-associated biases. Fourteen of 25 posture indices had good reliability (ϕ ≥ 0.78), ten of 25 had moderate reliability (ϕ = 0.55 to 0.74), and one had poor reliability (ϕ = 0.45). The most reliable posture indices were waist angle asymmetry (ϕ = 0.93), right waist angle (ϕ = 0.91), and frontal trunk list (ϕ = 0.92). Right sagittal trunk list was the least reliable posture index (ϕ = 0.45). The MDC90 values ranged from 2.6° to 10.3° for angular measurements and from 8.4 to 35.1 mm for linear measurements.
This study demonstrates that most posture indices, especially the trunk posture indices, are reproducible in time among adolescents with IS and provides reference values. Clinicians and researchers can use these reference values in order to assess change in posture over time attributable to treatment effectiveness. Copyright © 2018. Published by Elsevier Inc.
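The standard error of measurement and minimal detectable change reported above are related by standard formulas. The sketch below shows the usual computation under a normal error model (an illustrative sketch, not the study's own software; the example numbers are hypothetical):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement from the between-subject SD and a
    reliability coefficient (e.g. the dependability coefficient phi)."""
    return sd * math.sqrt(1.0 - reliability)

def mdc90(sem_value):
    """Minimal detectable change at the 90% confidence level:
    1.645 is the two-sided 90% normal quantile, and sqrt(2) accounts
    for measurement error in both the test and the retest."""
    return 1.645 * math.sqrt(2.0) * sem_value
```

For example, an index with SD = 10° and ϕ = 0.91 gives SEM = 3.0°, so any test-retest change smaller than about 7° could be measurement noise rather than a treatment effect.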
The cost of software fault tolerance
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1982-01-01
The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.
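The abstract describes a model relating cost, redundancy, and reliability without giving its form. The toy model below is purely illustrative and is not Migneault's model: it treats n independently developed software versions with an optional common-mode failure term (faults shared by all versions, the known weakness of N-version programming), and a cost that grows with integration overhead:

```python
def redundant_reliability(r, n, common_mode=0.0):
    """Probability the redundant system works: at least one of n
    independently developed versions (each reliable with probability r)
    succeeds, discounted by a common-mode failure probability that
    defeats all versions at once. Illustrative model only."""
    independent = 1.0 - (1.0 - r) ** n
    return (1.0 - common_mode) * independent

def cost(n, unit_cost=1.0, integration_overhead=0.2):
    """Illustrative cost: linear in the number of versions plus a
    pairwise integration/voting overhead term."""
    return n * unit_cost + integration_overhead * n * (n - 1) / 2
```

Plotting such a pair of curves shows the qualitative tradeoff the paper examines: reliability gains from added versions saturate (and are capped by common-mode faults) while cost keeps climbing, suggesting an optimal, modest degree of redundancy.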
NASA Technical Reports Server (NTRS)
1989-01-01
The report is a summary of the 5th NASA/Contractors Conference on Quality and Productivity. The theme was 'Quality - A Commitment to the Future'. The summary report highlights the key points: commitment to quality, strategic and long-range planning, quality commitment, risk management, teaming, quality measurement, creating a quality environment, contract incentives, software quality and reliability.
Developing Decision-Making Skills Using Immersive VR
2013-06-14
Institution: Department of Otolaryngology Mailing Address: Level 2, Royal Victorian Eye and Ear Hospital, 32 Gisborne St, East Melbourne...reliability of this measure. We also intend to integrate other data models into the feedback system, such as pattern-based models [8], and... 'Pattern-Based Real-Time Feedback for a Temporal Bone Simulator', Proc. of the 19th ACM Symposium on Virtual Reality Software and Technology, 2013
Evaluating a technical university's placement test using the Rasch measurement model
NASA Astrophysics Data System (ADS)
Salleh, Tuan Salwani; Bakri, Norhayati; Zin, Zalhan Mohd
2016-10-01
This study discusses the process of validating a mathematics placement test at a technical university. The main objective is to produce a valid and reliable test to measure students' prerequisite knowledge for learning engineering technology mathematics. A valid and reliable test is crucial, as the results are used in a critical decision: assigning students to different groups of Technical Mathematics 1. The placement test, which consists of 50 mathematics questions, was administered to 82 new diploma in engineering technology students at a technical university. This study employed the Rasch measurement model to analyze the data through the Winsteps software. The results revealed that ten test questions had difficulty below the ability of even the less able students. Nevertheless, all ten questions satisfied the standard infit and outfit values. Thus, all the questions can be reused in future placement tests at the technical university.
A Compatible Hardware/Software Reliability Prediction Model.
1981-07-22
machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting...software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several...capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed
ERIC Educational Resources Information Center
Fitzgibbons, Megan; Meert, Deborah
2010-01-01
The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…
Changes in Pelvic Incidence, Pelvic Tilt, and Sacral Slope in Situations of Pelvic Rotation.
Jin, Hai-Ming; Xu, Dao-Liang; Xuan, Jun; Chen, Jiao-Xiang; Chen, Kai; Goswami, Amit; Chen, Yu; Kong, Qiu-Yan; Wang, Xiang-Yang
2017-08-01
Digitally reconstructed radiograph-based study. Using a computer-based method to determine what degree of pelvic rotation is acceptable for measuring the pelvic incidence (PI), pelvic tilt (PT), and sacral slope (SS). The effectiveness of a geometrical formula used to calculate the angle of pelvic rotation proposed in a previous article was assessed. It is unclear whether PI, PT, and SS are valid with pelvic rotation while acquiring a radiograph. Ten 3-dimensionally reconstructed models were established with software and placed in a neutral orientation to orient all of the bones in a standing position. Next, 140 digitally reconstructed radiographs were obtained by rotating the models around the longitudinal axis of each pelvis in the software from 0 to 30 degrees at 2.5-degree intervals. PI, PT, and SS were measured. The rotation angle was considered to be acceptable when the change in the measured angle (compared with the "correct" position) was <6 degrees. The rotation angle (α) on the images was calculated by a geometrical formula. Consistency between the measured value and the set angle was assessed. The acceptable maximum angle of rotation for reliable measurements of PI was 17.5 degrees, and the changes in PT and SS were within an acceptable range (<6 degrees) when the pelvic rotation increased from 0 to 30 degrees. The effectiveness of the geometrical formula was shown by the consistency between the set and the calculated rotation angles of the pelvis (intraclass correlation coefficient=0.99). Our study provides insight into the influence of pelvic rotation on the PI, PT, and SS. PI changes with pelvic rotation. The acceptable maximum angle for reliable values of PI, PT, and SS was 17.5 degrees, and the rotation angle of the pelvis on a lateral spinopelvic radiograph can be calculated reliably.
Karampatos, Sarah; Papaioannou, Alexandra; Beattie, Karen A; Maly, Monica R; Chan, Adrian; Adachi, Jonathan D; Pritchard, Janet M
2016-04-01
Determine the reliability of a magnetic resonance (MR) image segmentation protocol for quantifying intramuscular adipose tissue (IntraMAT), subcutaneous adipose tissue, total muscle, and intermuscular adipose tissue (InterMAT) of the lower leg. Ten axial lower leg MRI slices were obtained from 21 postmenopausal women using a 1 Tesla peripheral MRI system. Images were analyzed using sliceOmatic™ software. The average cross-sectional areas of the tissues were computed for the ten slices. Intra-rater and inter-rater reliability were determined and expressed as the standard error of measurement (SEM) (absolute reliability) and the intraclass correlation coefficient (ICC) (relative reliability). Intra-rater and inter-rater reliability for IntraMAT were 0.991 (95% confidence interval [CI] 0.978-0.996, p < 0.05) and 0.983 (95% CI 0.958-0.993, p < 0.05), respectively. For the other soft tissue compartments, the ICCs were all >0.90 (p < 0.05). The absolute intra-rater and inter-rater reliability (expressed as SEM) for segmenting IntraMAT were 22.19 mm² (95% CI 16.97-32.04) and 78.89 mm² (95% CI 60.36-113.92), respectively. This is a reliable segmentation protocol for quantifying IntraMAT and other soft-tissue compartments of the lower leg. A standard operating procedure manual is provided to assist users, and SEM values can be used to estimate sample size and determine confidence in repeated measurements in future research.
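Intra- and inter-rater coefficients like those reported above are commonly computed as a two-way, absolute-agreement, single-measure ICC(2,1). The sketch below implements the textbook formula from mean squares (an illustrative sketch, not the study's analysis code; the study does not state which ICC form it used):

```python
def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-measure
    ICC(2,1). `ratings` is a list of subjects, each a list of the same
    k raters' scores, e.g. [[rater1, rater2], ...]."""
    n = len(ratings)     # subjects
    k = len(ratings[0])  # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    rater_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)            # between-subjects mean square
    msc = ss_rater / (k - 1)           # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because it is an absolute-agreement form, a constant offset between raters lowers the coefficient even when their rankings agree perfectly, which is the behavior wanted when comparing raters' segmented areas in mm².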
Rossini, Gabriele; Parrini, Simone; Castroflorio, Tommaso; Deregibus, Andrea; Debernardi, Cesare L
2016-02-01
Our objective was to assess the accuracy, validity, and reliability of measurements obtained from virtual dental study models compared with those obtained from plaster models. PubMed, PubMed Central, National Library of Medicine Medline, Embase, Cochrane Central Register of Controlled Trials, Web of Knowledge, Scopus, Google Scholar, and LILACs were searched from January 2000 to November 2014. A grading system described by the Swedish Council on Technology Assessment in Health Care and the Cochrane tool for risk of bias assessment were used to rate the methodologic quality of the articles. Thirty-five relevant articles were selected. The methodologic quality was high. No significant differences were observed for most of the studies in all the measured parameters, with the exception of the American Board of Orthodontics Objective Grading System. Digital models are as reliable as traditional plaster models, with high accuracy, reliability, and reproducibility. Landmark identification, rather than the measuring device or the software, appears to be the greatest limitation. Furthermore, with their advantages in terms of cost, time, and space required, digital models could be considered the new gold standard in current practice. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Software Requirements Analysis as Fault Predictor
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Waiting until the integration and system test phase to discover errors leads to more costly rework than resolving those same errors earlier in the lifecycle. Costs increase even more significantly once a software system has become operational. We can assess the quality of system requirements, but do little to correlate this information either to system assurance activities or to long-term reliability projections, both of which remain unclear and anecdotal. Extending earlier work on requirements accomplished by the ARM tool, measuring requirements quality information against code complexity and test data for the same system may be used to predict the specific software modules containing high-impact or deeply embedded faults now escaping into operational systems. Such knowledge would lead to more effective and efficient test programs. It may also enable insight into whether a program should be maintained or started over.
Blended Training on Scientific Software: A Study on How Scientific Data Are Generated
ERIC Educational Resources Information Center
Skordaki, Efrosyni-Maria; Bainbridge, Susan
2018-01-01
This paper presents the results of a research study on scientific software training in blended learning environments. The investigation focused on training approaches followed by scientific software users whose goal is the reliable application of such software. A key issue in current literature is the requirement for a theory-substantiated…
Increasing the reliability of ecological models using modern software engineering techniques
Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff
2009-01-01
Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...
Toward improved peptide feature detection in quantitative proteomics using stable isotope labeling.
Nilse, Lars; Sigloch, Florian Christoph; Biniossek, Martin L; Schilling, Oliver
2015-08-01
Reliable detection of peptides in LC-MS data is a key algorithmic step in the analysis of quantitative proteomics experiments. While highly abundant peptides can be detected reliably by most modern software tools, there is much less agreement on medium and low-intensity peptides in a sample. The choice of software tools can have a big impact on the quantification of proteins, especially for proteins that appear in lower concentrations. However, in many experiments, it is precisely this region of less abundant but substantially regulated proteins that holds the biggest potential for discoveries. This is particularly true for discovery proteomics in the pharmacological sector with a specific interest in key regulatory proteins. In this viewpoint article, we discuss how the development of novel software algorithms allows us to study this region of the proteome with increased confidence. Reliable results are one of many aspects to be considered when deciding on a bioinformatics software platform. Deployment into existing IT infrastructures, compatibility with other software packages, scalability, automation, flexibility, and support need to be considered and are briefly addressed in this viewpoint article. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Wild, Christian; Eckhardt, Dave
1987-01-01
The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtedly involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction, and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.
Singendonk, M M J; Rosen, R; Oors, J; Rommel, N; van Wijk, M P; Benninga, M A; Nurko, S; Omari, T I
2017-11-01
Subtyping achalasia by high-resolution manometry (HRM) is clinically relevant as response to therapy and prognosis have shown to vary accordingly. The aim of this study was to assess inter- and intrarater reliability of diagnosing achalasia and achalasia subtyping in children using the Chicago Classification (CC) V3.0. Six observers analyzed 40 pediatric HRM recordings (22 achalasia and 18 non-achalasia) twice by using dedicated analysis software (ManoView 3.0, Given Imaging, Los Angeles, CA, USA). Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), intrabolus pressurization pattern (IBP), and distal latency (DL) were extracted and analyzed hierarchically. Cohen's κ (2 raters) and Fleiss' κ (>2 raters) and the intraclass correlation coefficient (ICC) were used for categorical and ordinal data, respectively. Based on the results of dedicated analysis software only, intra- and interrater reliability was excellent and moderate (κ=0.89 and κ=0.52, respectively) for differentiating achalasia from non-achalasia. For subtyping achalasia, reliability decreased to substantial and fair (κ=0.72 and κ=0.28, respectively). When observers were allowed to change the software-driven diagnosis according to their own interpretation of the manometric patterns, intra- and interrater reliability increased for diagnosing achalasia (κ=0.98 and κ=0.92, respectively) and for subtyping achalasia (κ=0.79 and κ=0.58, respectively). Intra- and interrater agreement for diagnosing achalasia when using HRM and the CC was very good to excellent when results of automated analysis software were interpreted by experienced observers. More variability was seen when relying solely on the software-driven diagnosis and for subtyping achalasia. Therefore, diagnosing and subtyping achalasia should be performed in pediatric motility centers with significant expertise. © 2017 John Wiley & Sons Ltd.
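The two-rater agreement statistic used above, Cohen's κ, corrects the observed proportion of agreement for the agreement expected by chance from each rater's label frequencies. A minimal sketch with illustrative data (not the study's analysis code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels to the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in labels) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

κ = 1 indicates perfect agreement and κ = 0 agreement no better than chance, which is why the abstract distinguishes "moderate" (about 0.5) from "excellent" (about 0.9) values.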
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50×2.1 mm, packed with sub-2μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE-based predictions. We have demonstrated that the reliability of the predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the relative errors in retention time were <1.0%, while the errors in the predicted critical resolutions were between 6.9 and 17.2%. Because simulated robustness testing requires significantly less experimental work than DoE-based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2μm particles. Again, thanks to the modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min), by proper adjustment of the variables. Copyright © 2013 Elsevier B.V. All rights reserved.
Sadegh Moghadam, Leila; Foroughan, Mahshid; Mohammadi Shahboulaghi, Farahnaz; Ahmadi, Fazlollah; Sajjadi, Moosa; Farhadi, Akram
2016-01-01
Background: Perceptions of aging refer to individuals' understanding of aging within their sociocultural context. Proper measurement of this concept in various societies requires accurate tools. Objective: The present study was conducted with the aim to translate and validate the Brief Aging Perceptions Questionnaire (B-APQ) and assess its psychometric features in Iranian older adults. Method: In this study, the Persian version of the B-APQ was validated for 400 older adults. The questionnaire was translated into Persian according to Wild et al.'s model. The Persian version was validated using content, face, and construct (using confirmatory factor analysis) validities, and then its internal consistency and test–retest reliability were measured. Data were analyzed using the statistical software programs SPSS 18 and EQS-6.1. Results: The confirmatory factor analysis confirmed the construct validity and five subscales of the B-APQ. Test–retest reliability with a 3-week interval produced r=0.94. Cronbach's alpha was found to be 0.75 for the whole questionnaire, and from 0.53 to 0.77 for the five factors. Conclusion: The Persian version of the B-APQ showed favorable validity and reliability, and thus it can be used for measuring different dimensions of perceptions of aging in Iranian older adults. PMID:27194907
NASA Astrophysics Data System (ADS)
Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.
2017-11-01
The reliability of the main equipment of any power station depends on correct water chemistry. To ensure it, the heat-carrier quality must be monitored, which in turn is provided by the chemical monitoring system. Thus, the reliability of the monitoring system plays an important part in providing the reliability of the main equipment. The monitoring system's reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI and so on [1,2]. Power plant workers dealing with the measuring equipment must be informed promptly about any breakdowns in the monitoring system, so that they can remove the fault quickly. A computer consultant system for the personnel maintaining the sensors and other chemical monitoring equipment can help to notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on a laboratory workbench representing a physical model of a part of the chemical monitoring system.
NASA Astrophysics Data System (ADS)
Hameetman, G. J.; Dekker, G. J.
1993-11-01
The Italian satellite SAX (with a large Dutch contribution) is a scientific satellite whose mission is to study X-ray sources. One main requirement for the Attitude and Orbit Control Subsystem (AOCS) is to achieve and maintain a stable pointing accuracy, with a limit cycle of less than 90 arcsec, during pointings of up to 28 hours. The main SAX instrument, the Narrow Field Instrument, is highly sensitive to (indirect) radiation coming from the Sun. This sensitivity leads to another main requirement: under no circumstances may the safe attitude domain be left. The paper describes the application software in relation to the overall SAX AOCS subsystem, the CASE tools that were used during development, some advantages and disadvantages of these tools, the measures taken to meet the somewhat conflicting requirements of reliability and flexibility, and the lessons learned during development. The quality of the development approach was demonstrated by the (separately executed) hardware/software integration tests, during which a negligible number of software errors was detected in the application software.
Carrasco, Alejandro; Jalali, Elnaz; Dhingra, Ajay; Lurie, Alan; Yadav, Sumit; Tadinada, Aditya
2017-06-01
The aim of this study was to compare a medical-grade PACS (picture archiving and communication system) monitor, a consumer-grade monitor, a laptop computer, and a tablet computer for linear measurements of height and width for specific implant sites in the posterior maxilla and mandible, along with visualization of the associated anatomical structures. Cone beam computed tomography (CBCT) scans were evaluated. The images were reviewed using PACS-LCD monitor, consumer-grade LCD monitor using CB-Works software, a 13″ MacBook Pro, and an iPad 4 using OsiriX DICOM reader software. The operators had to identify anatomical structures in each display using a 2-point scale. User experience between PACS and iPad was also evaluated by means of a questionnaire. The measurements were very similar for each device. P-values were all greater than 0.05, indicating no significant difference between the monitors for each measurement. The intraoperator reliability was very high. The user experience was similar in each category with the most significant difference regarding the portability where the PACS display received the lowest score and the iPad received the highest score. The iPad with retina display was comparable with the medical-grade monitor, producing similar measurements and image visualization, and thus providing an inexpensive, portable, and reliable screen to analyze CBCT images in the operating room during the implant surgery.
Design and validation of an automated hydrostatic weighing system.
McClenaghan, B A; Rocchio, L
1986-08-01
The purpose of this study was to design and evaluate the validity of an automated technique to assess body density using a computerized hydrostatic weighing system. An existing hydrostatic tank was modified and interfaced with a microcomputer equipped with an analog-to-digital converter. Software was designed to input variables, control the collection of data, calculate selected measurements, and provide a summary of the results of each session. Validity of the data obtained utilizing the automated hydrostatic weighing system was estimated by: evaluating the reliability of the transducer/computer interface to measure objects of known underwater weight; comparing the data against a criterion measure; and determining inter-session subject reliability. Values obtained from the automated system were found to be highly correlated with known underwater weights (r = 0.99, SEE = 0.0060 kg). Data concurrently obtained utilizing the automated system and a manual chart recorder were also found to be highly correlated (r = 0.99, SEE = 0.0606 kg). Inter-session subject reliability was determined utilizing data collected on subjects (N = 16) tested on two occasions approximately 24 h apart. Correlations revealed high relationships between measures of underwater weight (r = 0.99, SEE = 0.1399 kg) and body density (r = 0.98, SEE = 0.00244 g·cm(-3)). Results indicate that a computerized hydrostatic weighing system is a valid and reliable method for determining underwater weight.
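The body-density calculation behind such a system typically follows the standard hydrodensitometry relation: body volume is the volume of water displaced (from the difference between weight in air and underwater weight) minus residual lung volume and an assumed gastrointestinal gas volume. A hedged sketch of one common formulation (the 0.1 L gas constant and the function name are assumptions, not the paper's exact equations):

```python
def body_density(mass_air_kg, mass_water_kg, water_density_kg_per_l,
                 residual_volume_l, gi_gas_l=0.1):
    """Body density (kg/L, numerically g/cm^3) from hydrostatic weighing.

    Body volume = volume of displaced water, corrected for lung residual
    volume and an assumed gastrointestinal gas volume.
    """
    # kg divided by kg/L gives the displaced volume in liters
    displaced_l = (mass_air_kg - mass_water_kg) / water_density_kg_per_l
    body_volume_l = displaced_l - residual_volume_l - gi_gas_l
    return mass_air_kg / body_volume_l
```

Typical human body densities fall between roughly 1.0 and 1.1 g/cm^3, which matches the small SEE the study reports for that quantity.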
Clayson, Peter E; Miller, Gregory A
2017-01-01
Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
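In the simplest single-facet case, a G-theory dependability coefficient for an ERP score averaged over trials shrinks the error variance by the number of trials retained. A toy sketch of that relation (this is not the ERA Toolbox's Matlab implementation, which also estimates the variance components themselves):

```python
def dependability(var_person, var_error, n_trials):
    """G-theory style dependability of a score averaged over n_trials.

    Simplified one-facet design: person (true-score) variance over
    person variance plus error variance reduced by averaging.
    """
    return var_person / (var_person + var_error / n_trials)
```

The formula makes the trial-count effect explicit: with equal person and error variance, one trial gives a dependability of 0.5, while averaging four trials raises it to 0.8.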
The Software Correlator of the Chinese VLBI Network
NASA Technical Reports Server (NTRS)
Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli
2010-01-01
The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-parallel technology and runs on SMP (Symmetric Multiprocessor) servers. Both correlators are characterized by a flexible structure and scalability.
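The core of an FX software correlator like these can be sketched in a few lines: channelize each station's sample stream with an FFT ("F"), then cross-multiply the spectra and accumulate across blocks ("X"). The single-baseline NumPy sketch below is illustrative only; a production correlator such as CVN's adds delay tracking, fringe rotation, and MPI/thread parallelism:

```python
import numpy as np

def fx_correlate(x, y, n_channels):
    """Single-baseline FX correlation: block-wise FFT of each station's
    samples, then accumulate the cross-power spectrum over blocks."""
    n_blocks = min(len(x), len(y)) // n_channels
    acc = np.zeros(n_channels, dtype=complex)
    for b in range(n_blocks):
        seg = slice(b * n_channels, (b + 1) * n_channels)
        # "F" step: channelize; "X" step: cross-multiply and accumulate
        acc += np.fft.fft(x[seg]) * np.conj(np.fft.fft(y[seg]))
    return acc / n_blocks
```

Correlating a stream with itself yields the (real, non-negative) averaged power spectrum, a common sanity check when bringing up a correlator.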
Data systems and computer science: Software Engineering Program
NASA Technical Reports Server (NTRS)
Zygielbaum, Arthur I.
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.
Design and setup of intermittent-flow respirometry system for aquatic organisms.
Svendsen, M B S; Bushnell, P G; Steffensen, J F
2016-01-01
Intermittent-flow respirometry is an experimental protocol for measuring oxygen consumption in aquatic organisms that utilizes the best features of closed (stop-flow) and flow-through respirometry while eliminating (or at least reducing) some of their inherent problems. By interspersing short periods of closed-chamber oxygen consumption measurements with regular flush periods, accurate oxygen uptake rate measurements can be made without the accumulation of waste products, particularly carbon dioxide, which may confound results. Automating the procedure with easily available hardware and software further reduces error by allowing many measurements to be made over long periods thereby minimizing animal stress due to acclimation issues. This paper describes some of the fundamental principles that need to be considered when designing and carrying out automated intermittent-flow respirometry (e.g. chamber size, flush rate, flush time, chamber mixing, measurement periods and temperature control). Finally, recent advances in oxygen probe technology and open source automation software will be discussed in the context of assembling relatively low cost and reliable measurement systems. © 2015 The Fisheries Society of the British Isles.
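In intermittent-flow respirometry, each closed measurement phase yields an oxygen uptake rate from the slope of the oxygen decline, scaled by the effective water volume (chamber minus animal) and the animal's mass. A minimal sketch of that calculation (the function name, units, and data are assumptions, not the paper's code):

```python
import numpy as np

def oxygen_uptake(times_h, o2_mg_per_l, chamber_volume_l,
                  fish_volume_l, fish_mass_kg):
    """Mass-specific oxygen uptake (mg O2 / kg / h) from one closed phase.

    Linear regression of [O2] vs time gives the depletion rate, which is
    scaled by the effective water volume and the animal's mass.
    """
    slope, _intercept = np.polyfit(times_h, o2_mg_per_l, 1)  # mg O2 / L / h
    effective_volume_l = chamber_volume_l - fish_volume_l
    return -slope * effective_volume_l / fish_mass_kg
```

Automating this over alternating flush/measure cycles, as the paper describes, lets many such slopes be collected unattended over long experiments.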
Naeem, Naghma; Muijtjens, Arno
2015-04-01
The psychological construct of emotional intelligence (EI), its theoretical models, measurement instruments and applications have been the subject of several research studies in health professions education. The objective of the current study was to investigate the factorial validity and reliability of a bilingual version of the Schutte Self Report Emotional Intelligence Scale (SSREIS) in an undergraduate Arab medical student population. The study was conducted during April-May 2012. A cross-sectional survey design was employed. A sample (n = 467) was obtained from undergraduate medical students belonging to the male and female medical college of King Saud University, Riyadh, Saudi Arabia. Exploratory and confirmatory factor analysis was performed using SPSS 16.0 and AMOS 4.0 statistical software to determine the factor structure. Reliability was determined using Cronbach's alpha statistics. The results obtained using an undergraduate Arab medical student sample supported a multidimensional; three factor structure of the SSREIS. The three factors are Optimism, Awareness-of-Emotions and Use-of-Emotions. The reliability (Cronbach's alpha) for the three subscales was 0.76, 0.72 and 0.55, respectively. Emotional intelligence is a multifactorial construct (three factors). The bilingual version of the SSREIS is a valid and reliable measure of trait emotional intelligence in an undergraduate Arab medical student population.
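The internal-consistency statistic reported above, Cronbach's alpha, compares the sum of the item variances with the variance of the total score. A compact sketch (illustrative only; the study used SPSS):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values near 0.7 or above are conventionally read as acceptable internal consistency, which is why the abstract flags the 0.55 subscale as weaker than the 0.72 and 0.76 subscales.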
CT-scout based, semi-automated vertebral morphometry after digital image enhancement.
Glinkowski, Wojciech M; Narloch, Jerzy
2017-09-01
Radiographic diagnosis of osteoporotic vertebral fracture is necessary to reduce its substantial associated morbidity. The computed tomography (CT) scout has recently been demonstrated as a reliable technique for vertebral fracture diagnosis. Software assistance may help to overcome some limitations of this diagnostic approach. We aimed to evaluate whether digital image enhancement improved the capacity of one existing software package to detect fractures semi-automatically. CT scanograms of patients suffering from osteoporosis, with or without vertebral fractures, were analyzed. The original set of CT scanograms was triplicated and digitally modified to improve edge detection using three different techniques: SHARPENING, UNSHARP MASKING, and CONVOLUTION. The manual morphometric analysis identified 1485 vertebrae, 200 of which were classified as fractured. Unadjusted morphometry (AUTOMATED, with no digital enhancement) found 63 fractures, 33 of which were true positives (i.e., 52% of its detections were correct); SHARPENING detected 57 fractures (30 true positives, 53%); UNSHARP MASKING yielded 30 (13 true positives, 43%); and CONVOLUTION found 24 fractures (9 true positives, 38%). The intra-reader reliability for height ratios did not significantly improve with image enhancement (kappa ranged 0.22-0.41 for adjusted measurements and 0.16-0.38 for unadjusted). Similarly, the inter-reader agreement for prevalent fractures did not significantly improve with image enhancement (kappa 0.29-0.56 and -0.01 to 0.23 for adjusted and unadjusted measurements, respectively). Our results suggest that digital image enhancement does not improve software-assisted vertebral fracture detection by CT scout. Copyright © 2017 Elsevier B.V. All rights reserved.
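Of the three enhancement techniques tested, unsharp masking is the easiest to sketch: subtract a blurred copy of the image to isolate edges, then add the edge signal back, scaled by an amount factor. The 3×3 box blur below is an illustrative simplification; the study's actual filter kernels and parameters are not given:

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Unsharp masking with a simple 3x3 box blur: (image - blurred)
    isolates edges, which are added back scaled by `amount`."""
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += padded[1 + dy : 1 + dy + img.shape[0],
                              1 + dx : 1 + dx + img.shape[1]]
    blurred /= 9.0
    return img + amount * (img - blurred)
```

On a flat region the filter changes nothing; across an intensity step it overshoots on both sides, which is exactly the edge exaggeration that was hoped to aid vertebral endpoint detection.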
Fault tolerant software modules for SIFT
NASA Technical Reports Server (NTRS)
Hecht, M.; Hecht, H.
1982-01-01
The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault-tolerant designs generated for the error reporting and global executive modules are examined. A description of the alternate routines, implementation requirements, and software validation is included.
Muller, Bart; Hofbauer, Marcus; Rahnemai-Azar, Amir Ata; Wolf, Megan; Araki, Daisuke; Hoshino, Yuichi; Araujo, Paulo; Debski, Richard E; Irrgang, James J; Fu, Freddie H; Musahl, Volker
2016-01-01
The pivot shift test is a commonly used clinical examination by orthopedic surgeons to evaluate knee function following injury. However, the test can only be graded subjectively by the examiner. Therefore, the purpose of this study is to develop software for a computer tablet to quantify anterior translation of the lateral knee compartment during the pivot shift test. Based on the simple image analysis method, software for a computer tablet was developed with the following primary design constraint: the software should be easy to use in a clinical setting and it should not slow down an outpatient visit. Translation of the lateral compartment of the intact knee was 2.0 ± 0.2 mm and for the anterior cruciate ligament-deficient knee was 8.9 ± 0.9 mm (p < 0.001). Intra-tester (ICC range = 0.913 to 0.999) and inter-tester (ICC = 0.949) reliability were excellent for the repeatability assessments. Overall, the average percent error of measuring simulated translation of the lateral knee compartment with the tablet parallel to the monitor increased from 2.8% at 50 cm distance to 7.7% at 200 cm. Deviation from the parallel position of the tablet did not have a significant effect until a tablet angle of 45°. Average percent error during anterior translation of the lateral knee compartment of 6 mm was 2.2% compared to 6.2% for 2 mm of translation. The software provides reliable, objective, and quantitative data on translation of the lateral knee compartment during the pivot shift test and meets the design constraints posed by the clinical setting.
Photogrammetry procedures applied to anthropometry.
Okimoto, Maria Lúcialeite Ribeiro; Klein, Alison Alfred
2012-01-01
This study aims to evaluate the reliability of, and establish procedures for, the use of digital photogrammetry in anthropometric measurements of the human hand. The methodology included the construction of a platform that keeps the hand at a fixed distance from the camera lens, annulling the effects of parallax. We developed software to perform the measurements from the images and built a reference object cast from a negative mold; this object was subjected to measurements with digital photogrammetry using the data collection platform, with a caliper, and with a coordinate measuring machine (CMM). The results of applying photogrammetry to data collection for the hand segment allow us to conclude that photogrammetry is effective, presenting a precision coefficient below 0.940, within normal and acceptable values given the magnitude of the data used in anthropometry. It was concluded that photogrammetry is reliable, accurate, and efficient for carrying out anthropometric surveys of populations, and presents little difficulty for in-place data collection.
Digital assessment of the fetal alcohol syndrome facial phenotype: reliability and agreement study.
Tsang, Tracey W; Laing-Aiken, Zoe; Latimer, Jane; Fitzpatrick, James; Oscar, June; Carter, Maureen; Elliott, Elizabeth J
2017-01-01
To examine the three facial features of fetal alcohol syndrome (FAS) in a cohort of Australian Aboriginal children from two-dimensional digital facial photographs to: (1) assess intrarater and inter-rater reliability; (2) identify the racial norms with the best fit for this population; and (3) assess agreement with clinician direct measures. Photographs and clinical data for 106 Aboriginal children (aged 7.4-9.6 years) were sourced from the Lililwan Project. Fifty-eight per cent had a confirmed prenatal alcohol exposure and 13 (12%) met the Canadian 2005 criteria for FAS/partial FAS. Photographs were analysed using the FAS Facial Photographic Analysis Software to generate the mean palpebral fissure length (PFL), the three-point ABC-Score, the five-point lip and philtrum ranks, and the four-point face rank in accordance with the 4-Digit Diagnostic Code. Intrarater and inter-rater reliability of digital ratings was examined in two assessors. Caucasian or African American racial norms for PFL and lip thickness were assessed for best fit; and agreement between digital and direct measurement methods was assessed. Reliability of digital measures was substantial within (kappa: 0.70-1.00) and between assessors (kappa: 0.64-0.89). Clinician and digital ratings showed moderate agreement (kappa: 0.47-0.58). Caucasian PFL norms and the African American Lip-Philtrum Guide 2 provided the best fit for this cohort. In an Aboriginal cohort with a high rate of FAS, assessment of facial dysmorphology using digital methods showed substantial inter- and intrarater reliability. Digital measurement of features has high reliability and until data are available from a larger population of Aboriginal children, the African American Lip-Philtrum Guide 2 and Caucasian (Strömland) PFL norms provide the best fit for Australian Aboriginal children.
Large-scale Rectangular Ruler Automated Verification Device
NASA Astrophysics Data System (ADS)
Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie
2018-03-01
This paper introduces a large-scale rectangular-ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive carriage, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive, the fixture, and the wheels. The control system design covers hardware and software: the hardware is mainly a single-chip microcomputer system, and the software implements the photoelectric autocollimator readout and the automatic data acquisition process. The device can acquire verticality measurement data automatically. Its reliability was verified by comparative experiments, and the results meet the requirements of the right-angle test procedure.
Runtime Speculative Software-Only Fault Tolerance
2012-06-01
reliability of RSFT, an in-depth analysis of its window of vulnerability is also discussed and measured via simulated fault injection. The performance...propagation of faults through the entire program. For optimal performance, these techniques have to use heroic alias analysis to find the minimum set of...affect program output. No program source code or alias analysis is needed to analyze the fault propagation ahead of time. 2.3 Limitations of Existing
NASA Astrophysics Data System (ADS)
Andarani, Pertiwi; Setiyo Huboyo, Haryono; Setyanti, Diny; Budiawan, Wiwik
2018-02-01
Noise is considered one of the main environmental impacts of Adi Soemarmo International Airport (ASIA), the second largest airport in Central Java Province, Indonesia. In order to manage airport noise, airport noise mapping is necessary. However, a model requiring only simple input while remaining reliable was not available at ASIA. Therefore, the objectives of this study are to develop a model using Matlab software, to verify its reliability by measuring actual noise exposure, and to analyze the areas of the noise levels. The model was developed based on interpolation or extrapolation of identified Noise-Power-Distance (NPD) data. In accordance with Indonesian Government Ordinance No. 40/2012, the noise metric used is WECPNL (Weighted Equivalent Continuous Perceived Noise Level). Based on the model simulation, there are residential areas in the regions of noise levels II (1.912 km²) and III (1.16 km²) and 18 school buildings in the areas of noise levels I, II, and III. These land uses are actually prohibited unless noise insulation is installed. The model was validated for the case of Adi Soemarmo International Airport by comparison with field measurements at 6 sampling points; however, it is important to validate the model again if the case study (the airport) changes.
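WECPNL aggregates the average peak level of flyovers with a day/evening/night-weighted count of aircraft movements. One common (Japanese-style) formulation is sketched below; the exact weights and constants prescribed by Ordinance No. 40/2012 are assumptions here, not taken from the paper:

```python
import math

def wecpnl(mean_peak_dba, n_day, n_evening, n_night):
    """WECPNL under a common convention: evening movements weighted x3
    and night movements x10 before the 10*log10(N) term.

    The weighting scheme and the -27 constant are assumptions based on
    the Japanese-style WECPNL definition, not the Indonesian ordinance.
    """
    n_weighted = n_day + 3 * n_evening + 10 * n_night
    return mean_peak_dba + 10 * math.log10(n_weighted) - 27
```

The logarithmic count term means that doubling daytime traffic raises WECPNL by about 3, while shifting the same movements into the night raises it far more, which is what makes the metric sensitive to operating schedules.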
NASA Astrophysics Data System (ADS)
Engel, P.; Schweimler, B.
2016-04-01
The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of the monitored objects over long periods. Several commercial hardware and software solutions for realizing such monitoring measurements are available on the market. In addition, a research team at the Neubrandenburg University of Applied Sciences (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, distributed under an open source licence and free of charge. Managing an open source project is a well-known task in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It discusses how the development effort for networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.
Warning system against locomotive driving wheel flaccidity
NASA Astrophysics Data System (ADS)
Luo, Peng
2014-09-01
Causes of locomotive driving wheel flaccidity are discussed. An alarm system against locomotive driving wheel flaccidity is designed using infrared temperature measurement and Hall sensor measurement. The design scheme of the system and the principle of detecting driving wheel flaccidity with temperature and Hall sensors are introduced, and the threshold temperature for the infrared alarm is determined. The circuit is designed with microcontroller technology and the software is written in assembly language. Measurement of the flaccid displacement with the Hall sensor is simulated experimentally. The results show that the system runs well with high reliability and low cost, and it has wide prospects for application and popularization.
Big Software for SmallSats: Adapting cFS to CubeSat Missions
NASA Technical Reports Server (NTRS)
Cudmore, Alan P.; Crum, Gary Alex; Sheikh, Salman; Marshall, James
2015-01-01
Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS.
Development of an Environment for Software Reliability Model Selection
1992-09-01
now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling...multiversion programming...Hardware can be repaired by spare modules, which is not the case for software...Preventive maintenance is very important
A Course in Real-Time Embedded Software
ERIC Educational Resources Information Center
Archibald, J. K.; Fife, W. S.
2007-01-01
Embedded systems are increasingly pervasive, and the creation of reliable controlling software offers unique challenges. Embedded software must interact directly with hardware, it must respond to events in a time-critical fashion, and it typically employs concurrency to meet response time requirements. This paper describes an innovative course…
Corroded Anchor Structure Stability/Reliability (CAS_Stab-R) Software for Hydraulic Structures
2017-12-01
This report describes software that provides a probabilistic estimate of time-to-failure for a corroding anchor strand system. These anchor...stability to the structure. A series of unique pull-test experiments conducted by Ebeling et al. (2016) at the U.S. Army Engineer Research and...Reliability (CAS_Stab-R) produces probabilistic Remaining Anchor Lifetime estimates for anchor cables based upon the direct corrosion rate for the
A Novel Passive Wireless Sensing Method for Concrete Chloride Ion Concentration Monitoring.
Zhou, Shuangxi; Sheng, Wei; Deng, Fangming; Wu, Xiang; Fu, Zhihui
2017-12-11
In this paper, a novel approach to measuring chloride ion concentration in concrete based on a passive wireless sensor tag is proposed. The chloride ion sensor, based on the RFID communication protocol, consists of an energy harvesting and management circuit, a low-dropout voltage regulator, an MCU, an RFID tag chip, and a pair of electrodes. The proposed sensor harvests energy radiated by the RFID reader to power its circuitry. To improve the stability of the power supply, a three-stage boost rectifier is customized to rectify the harvested power into DC power and step up the voltage. Since the measured data are transmitted wirelessly, they contain miscellaneous noise that would decrease measurement accuracy; the wavelet denoising method is therefore adopted to denoise the raw data. In addition, monitoring software is developed to display the measurement results in real time. The measurement results indicate that the proposed passive sensor tag achieves a reliable communication distance of 16.3 m and reliably measures the chloride ion concentration in concrete.
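The abstract does not specify which wavelet or thresholding rule is used, so the following is only an illustrative sketch of the general technique: a single-level Haar transform with soft thresholding of the detail coefficients, in pure Python.

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_fwd(x):
    # One level of the Haar wavelet transform (len(x) must be even).
    approx = [(a + b) / SQRT2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / SQRT2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inv(approx, detail):
    # Inverse of haar_fwd: interleave the reconstructed sample pairs.
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / SQRT2)
        out.append((a - d) / SQRT2)
    return out

def soft_threshold(coeffs, t):
    # Shrink each coefficient toward zero by t; zero out the small ones.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    # Single-level Haar denoising: shrink detail coefficients, reconstruct.
    a, d = haar_fwd(x)
    return haar_inv(a, soft_threshold(d, t))
```

A practical implementation would use several decomposition levels and a data-driven threshold (e.g. universal or SURE thresholds); this sketch only shows the shrink-and-reconstruct structure.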
A Monte Carlo software for the 1-dimensional simulation of IBIC experiments
NASA Astrophysics Data System (ADS)
Forneris, J.; Jakšić, M.; Pastuović, Ž.; Vittone, E.
2014-08-01
Ion beam induced charge (IBIC) microscopy is a valuable tool for the analysis of the electronic properties of semiconductors. In this work, a recently developed Monte Carlo approach for the simulation of IBIC experiments is presented along with a self-standing software package equipped with a graphical user interface. The method is based on the probabilistic interpretation of the excess charge carrier continuity equations and offers the end-user full control not only of the physical properties governing the induced charge formation mechanism (i.e., mobility, lifetime, electrostatics, device geometry), but also of the relevant experimental conditions (ionization profiles, beam dispersion, electronic noise) affecting the measurement of the IBIC pulses. Moreover, the software implements a novel model for the quantitative evaluation of radiation damage effects on the charge collection efficiency degradation of ion-beam-irradiated devices. The reliability of the model implementation is then validated against a benchmark IBIC experiment.
Rule groupings: A software engineering approach towards verification of expert systems
NASA Technical Reports Server (NTRS)
Mehrotra, Mala
1991-01-01
Currently, most expert system shells do not address software engineering issues for developing or maintaining expert systems. As a result, large expert systems tend to be incomprehensible, difficult to debug or modify and almost impossible to verify or validate. Partitioning rule based systems into rule groups which reflect the underlying subdomains of the problem should enhance the comprehensibility, maintainability, and reliability of expert system software. Attempts were made to semiautomatically structure a CLIPS rule base into groups of related rules that carry the same type of information. Different distance metrics that capture relevant information from the rules for grouping are discussed. Two clustering algorithms that partition the rule base into groups of related rules are given. Two independent evaluation criteria are developed to measure the effectiveness of the grouping strategies. Results of the experiment with three sample rule bases are presented.
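The distance-metric-plus-clustering pipeline this abstract describes can be sketched concretely. The sketch below is an illustrative stand-in, not the paper's actual metrics or algorithms: it uses Jaccard distance over the set of facts each rule references and a greedy single-link agglomerative merge; all names and data are hypothetical.

```python
def jaccard_distance(a, b):
    """Distance between two rules, each represented by the set of
    facts/variables it references (a simple proxy for carrying the
    same type of information)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def group_rules(rules, max_dist=0.7):
    """Greedy single-link agglomerative grouping: repeatedly merge the two
    closest groups until the closest pair is farther apart than max_dist.
    rules: dict mapping rule name -> set of referenced facts."""
    groups = [[name] for name in rules]

    def dist(g1, g2):
        return min(jaccard_distance(rules[r1], rules[r2])
                   for r1 in g1 for r2 in g2)

    while len(groups) > 1:
        i, j = min(((i, j) for i in range(len(groups))
                    for j in range(i + 1, len(groups))),
                   key=lambda ij: dist(groups[ij[0]], groups[ij[1]]))
        if dist(groups[i], groups[j]) > max_dist:
            break
        groups[i] += groups.pop(j)
    return groups
```

Rules sharing facts (e.g. two rules about `temp`) end up in one group, while unrelated rules form their own groups, mirroring the subdomain partitioning the abstract aims for.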
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are: interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; and advanced avionics laboratories and rapid prototyping. This presentation is represented by viewgraphs only.
NASA Astrophysics Data System (ADS)
The subjects discussed are related to LSI/VLSI based subscriber transmission and customer access for the Integrated Services Digital Network (ISDN), special applications of fiber optics, ISDN and competitive telecommunication services, technical preparations for the Geostationary-Satellite Orbit Conference, high-capacity statistical switching fabrics, networking and distributed systems software, adaptive arrays and cancelers, synchronization and tracking, speech processing, advances in communication terminals, full-color videotex, and a performance analysis of protocols. Advances in data communications are considered along with transmission network plans and progress, direct broadcast satellite systems, packet radio system aspects, radio-new and developing technologies and applications, the management of software quality, and Open Systems Interconnection (OSI) aspects of telematic services. Attention is given to personal computers and OSI, the role of software reliability measurement in information systems, and an active array antenna for the next-generation direct broadcast satellite.
Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center
NASA Astrophysics Data System (ADS)
Berger, Thomas
2016-07-01
The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations and with the validation activities required. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost needed to transition it and to run it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop the operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve over time the operational capabilities. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.
Romero-Delmastro, Alejandro; Kadioglu, Onur; Currier, G Frans; Cook, Tanner
2014-08-01
Cone-beam computed tomography images have been previously used for evaluation of alveolar bone levels around teeth before, during, and after orthodontic treatment. Protocols described in the literature have been vague, have used unstable landmarks, or have required several software programs, file conversions, or hand tracings, among other factors that could compromise the precision of the measurements. The purposes of this article are to describe a totally digital tooth-based superimposition method for the quantitative assessment of alveolar bone levels and to evaluate its reliability. Ultra cone-beam computed tomography images (0.1-mm reconstruction) from 10 subjects were obtained from the data pool of the University of Oklahoma; 80 premolars were measured twice by the same examiner and a third time by a second examiner to determine alveolar bone heights and thicknesses before and more than 6 months after orthodontic treatment using OsiriX (version 3.5.1; Pixmeo, Geneva, Switzerland). Intraexaminer and interexaminer reliabilities were evaluated, and Dahlberg's formula was used to calculate the error of the measurements. Cross-sectional and longitudinal evaluations of alveolar bone levels were possible using a digital tooth-based superimposition method. The mean differences for buccal alveolar crest heights and thicknesses were below 0.10 mm for the same examiner and below 0.17 mm for all examiners. The ranges of errors for any measurement were between 0.02 and 0.23 mm for intraexaminer errors, and between 0.06 and 0.29 mm for interexaminer errors. This protocol can be used for cross-sectional or longitudinal assessment of alveolar bone levels with low interexaminer and intraexaminer errors, and it eliminates the use of less reliable or less stable landmarks and the need for multiple software programs and image printouts. Standardization of the methods for bone assessment in orthodontics is necessary; this method could be the answer to this need.
Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC.
The Panel on Guidelines for Statistical Software was organized in 1990 to document, assess, and prioritize problem areas regarding quality and reliability of statistical software; present prototype guidelines in high priority areas; and make recommendations for further research and discussion. This document provides the following papers presented…
Research of real-time communication software
NASA Astrophysics Data System (ADS)
Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong
2003-11-01
Real-time communication plays an increasingly important role in work, daily life, and ocean monitoring. With the rapid progress of computer and communication technology and the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper describes research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted, allowing video, audio, and engineering data to be transmitted and received over the satellite channel. Several software modules are developed that realize point-to-point satellite intercommunication in the ocean monitoring system. The real-time communication software has three advantages. First, it increases the reliability of the point-to-point satellite intercommunication system. Second, configurable optional parameters greatly increase the flexibility of system operation. Third, some hardware is replaced by the software, which not only decreases the cost of the system and promotes the miniaturization of the communication system, but also increases the agility of the system.
Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia
2016-06-01
To compare the speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software package. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers used to create the graphs. The accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and the original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided: software extraction of data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.
1986-01-01
Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirement to build large, reliable, and maintainable software systems increases with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such approach is to build a software environment that directly supports the software engineering process. The SAGA project comprises the research necessary to design and build a software development environment that automates the software engineering process. Progress under SAGA is described.
Marcos-Garcés, V; Harvat, M; Molina Aguilar, P; Ferrández Izquierdo, A; Ruiz-Saurí, A
2017-08-01
Measurement of collagen bundle orientation in histopathological samples is a widely used and useful technique in many research and clinical scenarios. Fourier analysis is the preferred method for performing this measurement, but the most appropriate staining and microscopy technique remains unclear. Some authors advocate the use of Haematoxylin-Eosin (H&E) and confocal microscopy, but there are no studies comparing this technique with other classical collagen stainings. In our study, 46 human skin samples were collected, processed for histological analysis and stained with Masson's trichrome, Picrosirius red and H&E. Five microphotographs of the reticular dermis were taken with a 200× magnification with light microscopy, polarized microscopy and confocal microscopy, respectively. Two independent observers measured collagen bundle orientation with semiautomated Fourier analysis with the Image-Pro Plus 7.0 software and three independent observers performed a semiquantitative evaluation of the same parameter. The average orientation for each case was calculated with the values of the five pictures. We analyzed the interrater reliability, the consistency between Fourier analysis and average semiquantitative evaluation and the consistency between measurements in Masson's trichrome, Picrosirius red and H&E-confocal. Statistical analysis for reliability and agreement was performed with the SPSS 22.0 software and consisted of intraclass correlation coefficient (ICC), Bland-Altman plots and limits of agreement and coefficient of variation. Interrater reliability was almost perfect (ICC > 0.8) with all three histological and microscopy techniques and always superior in Fourier analysis than in average semiquantitative evaluation. 
Measurements were consistent between Fourier analysis by one observer and average semiquantitative evaluation by three observers, with an almost perfect agreement with Masson's trichrome and Picrosirius red techniques (ICC > 0.8) and a strong agreement with H&E-confocal (0.7 < ICC < 0.8). Comparison of measurements between the three techniques for the same observer showed an almost perfect agreement (ICC > 0.8), better with Fourier analysis than with semiquantitative evaluation (single and average). These results in nonpathological skin samples were also confirmed in a preliminary analysis in eight scleroderma skin samples. Our results show that Masson's trichrome and Picrosirius red are consistent with H&E-confocal for measuring collagen bundle orientation in histological samples and could thus be used indistinctly for this purpose. Fourier analysis is superior to average semiquantitative evaluation and should keep being used as the preferred method. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
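The intraclass correlation coefficients reported throughout these agreement analyses can be computed directly from a subjects-by-raters table. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measure) from its ANOVA mean squares; the paper may have used a different ICC variant, and the example data are hypothetical.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data: list of n subjects, each a list of k ratings (one per rater)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]                       # per subject
    col_means = [sum(data[i][j] for i in range(n)) / n
                 for j in range(k)]                                  # per rater
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)          # between-subjects mean square
    msc = ss_cols / (k - 1)          # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Note that a constant offset between raters lowers ICC(2,1) even when their rankings agree perfectly, which is why "absolute agreement" variants are preferred for interchangeable measurement methods.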
Dinsdale, Graham; Moore, Tonia; O'Leary, Neil; Tresadern, Philip; Berks, Michael; Roberts, Christopher; Manning, Joanne; Allen, John; Anderson, Marina; Cutolo, Maurizio; Hesselstrand, Roger; Howell, Kevin; Pizzorni, Carmen; Smith, Vanessa; Sulli, Alberto; Wildt, Marie; Taylor, Christopher; Murray, Andrea; Herrick, Ariane L
2017-07-01
Our aim was to assess the reliability of nailfold capillary assessment in terms of image evaluability, image severity grade ('normal', 'early', 'active', 'late'), capillary density, capillary (apex) width, and presence of giant capillaries, and also to gain further insight into differences in these parameters between patients with systemic sclerosis (SSc), patients with primary Raynaud's phenomenon (PRP) and healthy control subjects. Videocapillaroscopy images (magnification 300×) were acquired from all 10 digits from 173 participants: 101 patients with SSc, 22 with PRP and 50 healthy controls. Ten capillaroscopy experts from 7 European centres evaluated the images. Custom image mark-up software allowed extraction of the following outcome measures: overall grade ('normal', 'early', 'active', 'late', 'non-specific', or 'ungradeable'), capillary density (vessels/mm), mean vessel apical width, and presence of giant capillaries. Observers analysed a median of 129 images each. Evaluability (i.e. the availability of measures) varied across outcome measures (e.g. 73.0% for density and 46.2% for overall grade in patients with SSc). Intra-observer reliability for evaluability was consistently higher than inter- (e.g. for density, intra-class correlation coefficient [ICC] was 0.71 within and 0.14 between observers). Conditional on evaluability, both intra- and inter-observer reliability were high for grade (ICC 0.93 and 0.78 respectively), density (0.91 and 0.64) and width (0.91 and 0.85). Evaluability is one of the major challenges in assessing nailfold capillaries. However, when images are evaluable, the high intra- and inter-reliabilities suggest that overall image grade, capillary density and apex width have potential as outcome measures in longitudinal studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
Berger, Steve; Hasler, Carol-Claudius; Grant, Caroline A; Zheng, Guoyan; Schumann, Steffen; Büchler, Philippe
2017-01-01
The aim of this study was to validate a new program for measuring the three-dimensional length of the spine's midline based on two calibrated orthogonal radiographic images. The traditional uniplanar T1-S1 measurement method does not reflect the actual three-dimensional curvature of a scoliotic spine and is therefore not accurate. The Spinal Measurement Software (SMS) is an alternative that conveniently measures the spine's true length. The validity, inter- and intra-observer variability, and usability of the program were evaluated. Usability was quantified based on a subjective questionnaire filled in by eight participants using the program for the first time. Validity and variability were assessed by comparing the lengths of five phantom spines measured from CT-scan data and from radiographic images with the SMS. The lengths were measured independently by each participant using both techniques. The SMS is easy and intuitive to use, even for non-clinicians. The SMS measured spinal length with an error below 2 millimeters compared to lengths obtained from CT scan datasets. The inter- and intra-observer variability of the SMS measurements was below 5 millimeters. The SMS provides accurate measurement of spinal length based on orthogonal radiographic images. The software is easy to use and could easily be integrated into the clinical workflow, replacing current approximations of spinal length based on a single radiographic image such as the traditional T1-S1 measurement. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, the results obtained in modeling the cumulative number of failures versus execution time showed fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If the model agrees with actual historical behavior for a set of data then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
Avrutin, Egor; Moisey, Lesley L; Zhang, Roselyn; Khattab, Jenna; Todd, Emma; Premji, Tahira; Kozar, Rosemary; Heyland, Daren K; Mourtzakis, Marina
2017-12-06
Computed tomography (CT) scans performed during routine hospital care offer the opportunity to quantify skeletal muscle and predict mortality and morbidity in intensive care unit (ICU) patients. Existing methods of muscle cross-sectional area (CSA) quantification require specialized software, training, and time commitment that may not be feasible in a clinical setting. In this article, we explore a new screening method to identify patients with low muscle mass. We analyzed 145 scans of elderly ICU patients (≥65 years old) using a combination of measures obtained with a digital ruler, commonly found on hospital radiological software. The psoas and paraspinal muscle groups at the level of the third lumbar vertebra (L3) were evaluated using 2 linear measures each and compared with an established method of CT image analysis of total muscle CSA in the L3 region. There was a strong association between linear measures of psoas and paraspinal muscle groups and total L3 muscle CSA (R² = 0.745, P < 0.001). Linear measures, age, and sex were included as covariates in a multiple logistic regression to predict those with low muscle mass; the receiver operating characteristic (ROC) area under the curve (AUC) of the combined psoas and paraspinal linear index model was 0.920. Intraclass correlation coefficients (ICCs) were used to evaluate intrarater and interrater reliability, resulting in scores of 0.979 (95% CI: 0.940-0.992) and 0.937 (95% CI: 0.828-0.978), respectively. A digital ruler can reliably predict L3 muscle CSA, and these linear measures may be used to identify critically ill patients with low muscularity who are at risk for worse clinical outcomes. © 2017 American Society for Parenteral and Enteral Nutrition.
One-dimensional angular-measurement-based stitching interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Xue, Junpeng; Gao, Bo
2018-04-05
In this paper, we present one-dimensional stitching interferometry based on angular measurement for high-precision mirror metrology. The tilt error introduced by the stage motion during the stitching process is measured by an extra angular measurement device. The local profile measured by the interferometer in a single field of view is corrected using the measured angle before the piston adjustment in the stitching process. Compared to the classical software stitching technique, the angle-measuring stitching technique is more reliable and accurate in profiling a mirror surface at the nanometer level. Experimental results demonstrate the feasibility of the proposed stitching technique. Based on our measurements, the typical repeatability within a 200 mm scanning range is 0.5 nm RMS or less.
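The tilt-then-piston correction described here can be sketched for a 1-D profile. The sketch below assumes a uniform sample overlap between neighbouring sub-apertures and matches pistons by the mean height over the overlap; the paper's actual fitting procedure may differ, and all names are hypothetical.

```python
import math

def stitch(subprofiles, angles, dx, overlap):
    """Angle-assisted 1-D stitching sketch.
    subprofiles: list of equal-length height profiles (same units as dx),
    angles: stage tilt (rad) measured for each sub-aperture by the
    external angular device; dx: sample spacing; overlap: number of
    samples shared by neighbouring sub-apertures."""
    corrected = []
    for prof, th in zip(subprofiles, angles):
        # Remove the stage-induced tilt using the *measured* angle,
        # instead of fitting the tilt from overlap data.
        corrected.append([h - math.tan(th) * i * dx for i, h in enumerate(prof)])
    stitched = list(corrected[0])
    for prof in corrected[1:]:
        # Piston adjustment: match the mean height over the overlap region.
        piston = (sum(stitched[-overlap:]) / overlap
                  - sum(prof[:overlap]) / overlap)
        stitched += [h + piston for h in prof][overlap:]
    return stitched
```

Because the tilt comes from an independent measurement rather than from overlap fitting, tilt errors do not accumulate along the scan, which is the source of the technique's improved reliability.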
Pedersen, E S L; Danquah, I H; Petersen, C B; Tolstrup, J S
2016-12-03
Accelerometers can obtain precise measurements of movements during the day. However, individual activity patterns vary from day to day, and there is limited evidence on the number of measurement days needed to obtain sufficient reliability. The aim of this study was to examine variability in accelerometer-derived data on sedentary behaviour and physical activity at work and in leisure time during weekdays among Danish office employees. We included control participants (n = 135) from the Take a Stand! Intervention, a cluster randomized controlled trial conducted in 19 offices. Sitting time and physical activity were measured using an ActiGraph GT3X+ fixed on the thigh, and data were processed using Acti4 software. Variability was examined for sitting time, standing time, steps, and time spent in moderate-to-vigorous physical activity (MVPA) per day by multilevel mixed linear regression modelling. Results showed that the number of days needed to obtain a reliability of 80% when measuring sitting time was 4.7 days for work and 5.5 days for leisure time. For physical activity at work, 4.0 days and 4.2 days were required to measure steps and MVPA, respectively. During leisure time, more monitoring time was needed to reliably estimate physical activity (6.8 days for steps and 5.8 days for MVPA). The number of measurement days needed to reliably estimate activity patterns was greater for leisure time than for work time. The domain-specific variability is of great importance to researchers and health promotion workers planning to use objective measures of sedentary behaviour and physical activity. ClinicalTrials.gov: NCT01996176.
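The "days needed for 80% reliability" figures above are the kind of result the Spearman-Brown prophecy formula yields from a single-day reliability estimate. A minimal sketch, assuming a hypothetical single-day ICC (the paper's mixed-model estimates are more involved):

```python
def days_needed(single_day_icc, target_reliability=0.80):
    """Spearman-Brown prophecy: number of monitoring days needed so
    that the average over k days reaches the target reliability."""
    r, R = single_day_icc, target_reliability
    return R * (1 - r) / (r * (1 - R))

# If a single day of sitting-time monitoring had an ICC of ~0.46,
# roughly 4.7 days would be needed to reach 80% reliability -- a
# hypothetical illustration of the figures the study reports.
print(round(days_needed(0.46), 1))
```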
Perraton, Luke G.; Bower, Kelly J.; Adair, Brooke; Pua, Yong-Hao; Williams, Gavin P.; McGaw, Rebekah
2015-01-01
Introduction Hand-held dynamometry (HHD) has never previously been used to examine isometric muscle power. Rate of force development (RFD) is often used for muscle power assessment, however no consensus currently exists on the most appropriate method of calculation. The aim of this study was to examine the reliability of different algorithms for RFD calculation and to examine the intra-rater, inter-rater, and inter-device reliability of HHD as well as the concurrent validity of HHD for the assessment of isometric lower limb muscle strength and power. Methods 30 healthy young adults (age: 23±5yrs, male: 15) were assessed on two sessions. Isometric muscle strength and power were measured using peak force and RFD respectively using two HHDs (Lafayette Model-01165 and Hoggan microFET2) and a criterion-reference KinCom dynamometer. Statistical analysis of reliability and validity comprised intraclass correlation coefficients (ICC), Pearson correlations, concordance correlations, standard error of measurement, and minimal detectable change. Results Comparison of RFD methods revealed that a peak 200ms moving window algorithm provided optimal reliability results. Intra-rater, inter-rater, and inter-device reliability analysis of peak force and RFD revealed mostly good to excellent reliability (coefficients ≥ 0.70) for all muscle groups. Concurrent validity analysis showed moderate to excellent relationships between HHD and fixed dynamometry for the hip and knee (ICCs ≥ 0.70) for both peak force and RFD, with mostly poor to good results shown for the ankle muscles (ICCs = 0.31–0.79). Conclusions Hand-held dynamometry has good to excellent reliability and validity for most measures of isometric lower limb strength and power in a healthy population, particularly for proximal muscle groups. To aid implementation we have created freely available software to extract these variables from data stored on the Lafayette device. 
Future research should examine the reliability and validity of these variables in clinical populations. PMID:26509265
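The peak 200 ms moving-window RFD algorithm that the study found most reliable can be sketched roughly as follows; the sampling rate and force trace are hypothetical, and the real calculation would operate on filtered dynamometer data.

```python
def peak_rfd(force, dt, window_s=0.2):
    """Peak rate of force development: the largest average slope of the
    force-time curve over any moving window of the given duration."""
    n = max(1, round(window_s / dt))   # samples spanned by the window
    best = float("-inf")
    for i in range(len(force) - n):
        slope = (force[i + n] - force[i]) / (n * dt)
        best = max(best, slope)
    return best

# Hypothetical 1 kHz force trace: a fast rise then a plateau.
dt = 0.001
force = [min(400.0, 2000.0 * t) for t in (i * dt for i in range(600))]
print(round(peak_rfd(force, dt), 1))
```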
De Backer, Daniel; Marx, Gernot; Tan, Andrew; Junker, Christopher; Van Nuffelen, Marc; Hüter, Lars; Ching, Willy; Michard, Frédéric; Vincent, Jean-Louis
2011-02-01
Second-generation FloTrac software has been shown to reliably measure cardiac output (CO) in cardiac surgical patients. However, concerns have been raised regarding its accuracy in vasoplegic states. The aim of the present multicenter study was to investigate the accuracy of the third-generation software in patients with sepsis, particularly when total systemic vascular resistance (TSVR) is low. Fifty-eight septic patients were included in this prospective observational study in four university-affiliated ICUs. Reference CO was measured by bolus pulmonary thermodilution (iCO) using 3-5 cold saline boluses. Simultaneously, CO was computed from the arterial pressure curve recorded on a computer using the second-generation (CO(G2)) and third-generation (CO(G3)) FloTrac software. CO was also measured by semi-continuous pulmonary thermodilution (CCO). A total of 401 simultaneous measurements of iCO, CO(G2), CO(G3), and CCO were recorded. The mean (95%CI) biases between CO(G2) and iCO, CO(G3) and iCO, and CCO and iCO were -10 (-15 to -5)% [-0.8 (-1.1 to -0.4) L/min], 0 (-4 to 4)% [0 (-0.3 to 0.3) L/min], and 9 (6-13)% [0.7 (0.5-1.0) L/min], respectively. The percentage errors were 29 (20-37)% for CO(G2), 30 (24-37)% for CO(G3), and 28 (22-34)% for CCO. The difference between iCO and CO(G2) was significantly correlated with TSVR (r(2) = 0.37, p < 0.0001). A very weak (r(2) = 0.05) relationship was also observed for the difference between iCO and CO(G3). In patients with sepsis, the third-generation FloTrac software is more accurate, as precise, and less influenced by TSVR than the second-generation software.
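The bias, 95% limits of agreement, and percentage error reported above follow the usual Bland-Altman-style comparison of a test method against a reference. A minimal sketch with hypothetical paired cardiac-output readings (not the study's data):

```python
from statistics import mean, stdev

def bland_altman_stats(reference, test):
    """Bias, 95% limits of agreement, and the percentage error
    (1.96 * SD of the differences / mean reference value)."""
    diffs = [t - r for r, t in zip(reference, test)]
    bias = mean(diffs)
    sd = stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 100.0 * 1.96 * sd / mean(reference)
    return bias, loa, pct_error

# Hypothetical paired cardiac-output readings (L/min): thermodilution
# reference vs. arterial-waveform estimate.
ref = [4.0, 5.2, 6.1, 3.8, 7.0, 5.5]
est = [4.3, 5.0, 6.5, 3.6, 7.4, 5.2]
bias, loa, pe = bland_altman_stats(ref, est)
print(round(bias, 2), round(pe, 1))
```

A percentage error below roughly 30% is the conventional acceptability threshold cited in this literature.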
Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Gil, Joon-Min
2015-03-01
The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has led to a great deal of attention being focused on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization, which has helped facilitate remote manufacturing in association with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of services by obtaining network requests from end-applications on demand through the separated SDN controller or control plane. However, current SDN approaches are generally focused on the controls and automation of the networks, which indicates that there is a lack of management plane development designed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantage of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault-tolerance of SDN in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for the SDN failover processes based on four principal SDN breakdown scenarios derived from the failures of the controller, SDN nodes, and connected links. The abovementioned SDN troubles significantly reduce the network reachability to remote devices (e.g., 3D printers, super high-definition cameras, etc.) and the reliability of relevant control processes. Our performance consideration and analysis results show that the proposed scheme can shrink operations and management overheads of SDN, which leads to the enhancement of responsiveness and reliability of SDN for remote 3D printing and control processes.
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Development of a Mammographic Image Processing Environment Using MATLAB.
1994-12-01
Correctness, reliability, efficiency, integrity, usability: McCall's software quality factors [Pressman, 1987]. Each quality factor is itself related to independent attributes called criteria [Cooper and Fisher, 1979], or metrics [Pressman, 1987], that can be used to judge, define, and measure quality [Cooper and Fisher, 1979].
A proven approach for more effective software development and maintenance
NASA Technical Reports Server (NTRS)
Pajerski, Rose; Hall, Dana; Sinclair, Craig
1994-01-01
Modern space flight mission operations and associated ground data systems are increasingly dependent upon reliable, quality software. Critical functions such as command load preparation, health and status monitoring, communications link scheduling and conflict resolution, and transparent gateway protocol conversion are routinely performed by software. Given budget constraints and the ever-increasing capabilities of processor technology, the next generation of control centers and data systems will be even more dependent upon software across all aspects of performance. A key challenge now is to implement improved engineering, management, and assurance processes for the development and maintenance of that software; processes that cost less, yield higher quality products, and self-correct for continual improvement. The NASA Goddard Space Flight Center has a unique experience base that can be readily tapped to help solve the software challenge. Over the past eighteen years, the Software Engineering Laboratory within the Code 500 Flight Dynamics Division has evolved a software development and maintenance methodology that accommodates the unique characteristics of an organization while optimizing and continually improving the organization's software capabilities. This methodology relies upon measurement, analysis, and feedback, much as control loop systems do. It is an approach with a time-tested track record, proven through repeated applications across a broad range of operational software development and maintenance projects. This paper describes the software improvement methodology employed by the Software Engineering Laboratory and how it has been exploited within the Flight Dynamics Division of GSFC Code 500. Examples of specific improvements in the software itself and its processes are presented to illustrate the effectiveness of the methodology.
Finally, the initial findings are given when this methodology was applied across the mission operations and ground data systems software domains throughout Code 500.
A second generation experiment in fault-tolerant software
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
The primary goal was to determine whether the application of fault tolerance to software increases its reliability if the cost of production is the same as for an equivalent non-fault-tolerant version derived from the same requirements specification. Software development protocols are discussed. The feasibility of adapting the technique of N-fold Modular Redundancy with majority voting to software design fault tolerance was studied.
Nguyen, Anh-Dung; Boling, Michelle C; Slye, Carrie A; Hartley, Emily M; Parisi, Gina L
2013-01-01
Accurate, efficient, and reliable measurement methods are essential to prospectively identify risk factors for knee injuries in large cohorts. To determine tester reliability using digital photographs for the measurement of static lower extremity alignment (LEA) and whether values quantified with an electromagnetic motion-tracking system are in agreement with those quantified with clinical methods and digital photographs. Descriptive laboratory study. Laboratory. Thirty-three individuals participated and included 17 (10 women, 7 men; age = 21.7 ± 2.7 years, height = 163.4 ± 6.4 cm, mass = 59.7 ± 7.8 kg, body mass index = 23.7 ± 2.6 kg/m2) in study 1, in which we examined the reliability between clinical measures and digital photographs in 1 trained and 1 novice investigator, and 16 (11 women, 5 men; age = 22.3 ± 1.6 years, height = 170.3 ± 6.9 cm, mass = 72.9 ± 16.4 kg, body mass index = 25.2 ± 5.4 kg/m2) in study 2, in which we examined the agreement among clinical measures, digital photographs, and an electromagnetic tracking system. We evaluated measures of pelvic angle, quadriceps angle, tibiofemoral angle, genu recurvatum, femur length, and tibia length. Clinical measures were assessed using clinically accepted methods. Frontal- and sagittal-plane digital images were captured and imported into a computer software program. Anatomic landmarks were digitized using an electromagnetic tracking system to calculate static LEA. Intraclass correlation coefficients and standard errors of measurement were calculated to examine tester reliability. We calculated 95% limits of agreement and used Bland-Altman plots to examine agreement among clinical measures, digital photographs, and an electromagnetic tracking system. 
Using digital photographs, fair to excellent intratester (intraclass correlation coefficient range = 0.70-0.99) and intertester (intraclass correlation coefficient range = 0.75-0.97) reliability were observed for static knee alignment and limb-length measures. An acceptable level of agreement was observed between clinical measures and digital pictures for limb-length measures. When comparing clinical measures and digital photographs with the electromagnetic tracking system, an acceptable level of agreement was observed in measures of static knee angles and limb-length measures. The use of digital photographs and an electromagnetic tracking system appears to be an efficient and reliable method to assess static knee alignment and limb-length measurements.
Age estimation by dentin translucency measurement using digital method: An institutional study
Gupta, Shalini; Chandra, Akhilesh; Agnihotri, Archana; Gupta, Om Prakash; Maurya, Niharika
2017-01-01
Aims: The aims of the present study were to measure translucency on sectioned teeth using available computer hardware and software, to correlate dimensions of root dentin translucency with age, and to assess whether translucency is reliable for age estimation. Materials and Methods: A pilot study was done on 62 freshly extracted single-rooted permanent teeth from 62 different individuals (35 males and 27 females), and 250 μm thick sections were prepared with a micromotor, carborundum disks, and Arkansas stone. Each tooth section was scanned and the images were opened in the Adobe Photoshop software. Measurement of root dentin translucency (TD length) was done on the scanned image by placing two guides (A and B) along the x-axis of an ABFO No. 2 scale. The unpaired t-test, regression analysis, and Pearson correlation coefficient were used as statistical tools. Results: A linear relationship was observed between TD length and age in the regression analysis. The Pearson correlation analysis showed a positive correlation (r = 0.52, P = 0.0001) between TD length and age. However, no significant (P > 0.05) difference was observed in TD length between male (8.44 ± 2.92 mm) and female (7.80 ± 2.79 mm) samples. Conclusion: Translucency of the root dentin increases with age and can be used as a reliable parameter for age estimation. The method used here to digitally select and measure translucent root dentin is more refined, better correlated with age, and produces superior age estimates. PMID:28584476
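The linear relationship between translucency length and age reported above is an ordinary least-squares fit. A minimal sketch with hypothetical translucency/age pairs (not the study's 62 samples):

```python
from statistics import mean

def fit_line(x, y):
    """Ordinary least squares y = a + b*x, plus Pearson's r."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical data: translucent dentin length (mm) vs. known age (years).
td = [4.1, 5.0, 6.2, 7.4, 8.8, 10.1, 11.5]
age = [28, 35, 41, 50, 58, 66, 75]
a, b, r = fit_line(td, age)
print(round(b, 1), round(r, 2))  # slope in years per mm, and correlation
```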
PD5: a general purpose library for primer design software.
Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda
2013-01-01
Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third-party applications, which involves file parsing, interfacing and data conversion that is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third-party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantees access to source code and allows redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
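PD5 itself is a C++ library; as a language-neutral illustration of the kind of primer-analysis building block such a library provides, here is a basic melting-temperature and GC-content calculation (the Wallace 2+4 rule; the primer sequence is made up, and this sketch is not PD5's API):

```python
def wallace_tm(primer):
    """Wallace (2+4) rule melting temperature for short primers:
    2 degC per A/T base plus 4 degC per G/C base."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_content(primer):
    """GC content of the primer, in percent."""
    p = primer.upper()
    return 100.0 * (p.count("G") + p.count("C")) / len(p)

primer = "ATGCGTACGTTAGC"   # hypothetical 14-mer
print(wallace_tm(primer), round(gc_content(primer), 1))
```

Real primer design libraries use nearest-neighbor thermodynamic models rather than the Wallace rule, but the rule is the standard first approximation for short oligos.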
NASA Astrophysics Data System (ADS)
Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.
2016-03-01
The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm with a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms are performant, but they must be pushed to a higher level in order to follow roadmap speed and requirements. For example: manage defects and CD in a one-step algorithm, customize algorithms and output features for each R&D team environment, and provide software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reduction, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronics materials like Direct Self Assembly features. We will show a new generation of image analysis algorithms which are able to manage at the same time defect rates, image classifications, CD and roughness measurements with high-throughput performance in order to be compatible with HVM. In a second part, we will assess the reliability and customization of the algorithms and the software platform's capability to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of data analysis cycle time.
NASA Technical Reports Server (NTRS)
Leveson, Nancy
1987-01-01
Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude requires: looking at what you do NOT want software to do along with what you want it to do; and assuming things will go wrong. New procedures and changes to entire software development process are necessary: special software safety analysis techniques are needed; and design techniques, especially eliminating complexity, can be very helpful.
The Software Management Environment (SME)
NASA Technical Reports Server (NTRS)
Valett, Jon D.; Decker, William; Buell, John
1988-01-01
The Software Management Environment (SME) is a research effort designed to utilize the past experiences and results of the Software Engineering Laboratory (SEL) and to incorporate this knowledge into a tool for managing projects. SME provides the software development manager with the ability to observe, compare, predict, analyze, and control key software development parameters such as effort, reliability, and resource utilization. The major components of the SME, the architecture of the system, and examples of the functionality of the tool are discussed.
Accuracy of digital and analogue cephalometric measurements assessed with the sandwich technique.
Santoro, Margherita; Jarjoura, Karim; Cangialosi, Thomas J
2006-03-01
The purpose of the study was to evaluate the accuracy of cephalometric measurements obtained with digital tracing software compared with equivalent hand-traced measurements. In the sandwich technique, a storage phosphor plate and a conventional radiographic film are placed in the same cassette and exposed simultaneously. The method eliminates positioning errors and potential differences associated with multiple radiographic exposures that affected previous studies. It was used to ensure the equivalence of the digital images to the hard copy radiographs. Cephalometric measurements instead of landmarks were the focus of this investigation in order to acquire data with direct clinical applications. The sample consisted of digital and analog radiographic images from 47 patients after orthodontic treatment. Nine cephalometric landmarks were identified and 13 measurements calculated by 1 operator, both manually and with digital tracing software. Measurement error was assessed for each method by duplicating measurements of 25 randomly selected radiographs and by using Pearson's correlation coefficient. A paired t test was used to detect differences between the manual and digital methods. An overall greater variability in the digital cephalometric measurements was found. Differences between the 2 methods for SNA, ANB, S-Go:N-Me, U1/L1, L1-GoGn, and N-ANS:ANS-Me were statistically significant (P < .05). However, only the U1/L1 and S-Go:N-Me measurements showed differences greater than 2 SE (P < .0001). The 2 tracing methods provide similar clinical results; therefore, efficient digital cephalometric software can be reliably chosen as a routine diagnostic tool. The user-friendly sandwich technique was effective as an option for interoffice communications.
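The comparison of digital and hand-traced measurements above relies on a paired t test over the same films. A minimal sketch with hypothetical SNA angles (not the study's 47-patient sample):

```python
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for two measurement methods applied to the
    same radiographs; returns (t, degrees of freedom)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return mean(d) / (stdev(d) / n ** 0.5), n - 1

# Hypothetical SNA angles (degrees) measured twice on the same 6 films.
manual  = [82.1, 79.5, 84.0, 80.2, 77.8, 83.3]
digital = [82.4, 79.9, 84.3, 80.1, 78.4, 83.6]
t, df = paired_t(digital, manual)
print(round(t, 2), df)
```

A significant t here would indicate a systematic bias between the tracing methods, which is exactly what the study tested for each cephalometric measurement.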
Software Technology for Adaptable, Reliable Systems (STARS)
1994-03-25
Cost models cited (number of responses in parentheses): Timeline (3), SECOMO (3), SEER (3), GSFC Software Engineering Lab Model (1), SLIM (4), SEER-SEM (1), SPQR (2), PRICE-S (2), internally-developed models (3), APMSS (1), SASET (Software Architecture Sizing Estimating Tool) (2), MicroMan II (2), LCM (Logistics Cost Model) (2).
Virtual test: A student-centered software to measure student's critical thinking on human disease
NASA Astrophysics Data System (ADS)
Rusyati, Lilit; Firman, Harry
2016-02-01
The study "Virtual Test: A Student-Centered Software to Measure Student's Critical Thinking on Human Disease" is descriptive research, motivated by the importance of computer-based tests built around the elements and sub-elements of critical thinking. The aim of this study is the development of multiple-choice items, delivered through student-centered software, to measure critical thinking. The instruments used to collect data were (1) construct-validity sheets completed by expert judges (a lecturer and a medical doctor) and a professional judge (a science teacher); and (2) test-legibility sheets completed by a science teacher and junior high school students. Participants consisted of science teachers, a lecturer, and a medical doctor as validators, and students as respondents. This study describes the characteristics of a virtual test used to measure students' critical thinking on human disease, and analyzes the results of the legibility test by students and science teachers, the expert judgments by science teachers and the medical doctor, and a trial of the virtual test at a junior high school. Overall, the analysis showed that the multiple-choice items measuring critical thinking were built from the eight elements and 26 sub-elements developed by Inch et al., were accompanied by relevant information, and had validity and reliability better than "sufficient". Specific characteristics of the items include information presented as science comics, tables, figures, articles, and video; correct language structure; cited sources; and questions that can guide students to think critically and logically.
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
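One simple way to attach an error bar to a network's output is to summarize the spread of an ensemble of model outputs. The abstract does not specify how the NASA tool derives its confidence interval, so the ensemble approach and the toy "models" below are assumptions for illustration only.

```python
from statistics import mean, stdev

def prediction_with_error_bar(models, x, z=1.96):
    """Run an ensemble of models on the same input and return the mean
    output with a z*sigma interval -- one simple way to attach an
    error bar to a network's output."""
    outputs = [m(x) for m in models]
    mu, sigma = mean(outputs), stdev(outputs)
    return mu, (mu - z * sigma, mu + z * sigma)

# Hypothetical "networks": small perturbations of the same mapping.
models = [lambda x, w=w: w * x + 0.1 for w in (0.98, 1.00, 1.02, 0.99, 1.01)]
mu, (lo, hi) = prediction_with_error_bar(models, 2.0)
print(round(mu, 2), round(hi - lo, 3))
```

In a monitoring setting, a widening interval during operation would flag inputs on which the adaptive controller's network is poorly trained.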
Analysis of methods of processing of expert information by optimization of administrative decisions
NASA Astrophysics Data System (ADS)
Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.
2018-03-01
This work proposes a methodology for defining measures used in the expert estimation of the quality and reliability of applied software products. Methods for aggregating expert estimates are described using the example of a collective choice among candidate tool suites for the development of special-purpose software for institutional needs. Results from an interactive decision-support system are presented, together with an algorithm for solving the selection task based on the analytic hierarchy process. The developed algorithm can be applied in the development of expert systems for solving a wide class of problems that involve a multicriteria choice.
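The method of the analysis of hierarchies mentioned above is the analytic hierarchy process (AHP), whose core step extracts a priority vector from a pairwise comparison matrix. A minimal power-iteration sketch with a hypothetical three-alternative matrix (not data from the paper):

```python
def ahp_priorities(M, iters=50):
    """Principal-eigenvector priorities for a pairwise comparison
    matrix (analytic hierarchy process), via power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical comparison of three candidate tool suites:
# A is 3x preferred to B and 5x to C; B is 2x preferred to C.
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_priorities(M)
print([round(x, 2) for x in w])
```

A full AHP implementation would also compute the consistency ratio of the matrix before trusting the resulting priorities.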
Fleischer, Luise; Sehner, Susanne; Gehl, Axel; Riemer, Martin; Raupach, Tobias; Anders, Sven
2017-05-01
Measurement of postmortem pupil width is a potential component of death time estimation. However, no standardized measurement method has been described. We analyzed a total of 71 digital images for pupil-iris ratio using the software ImageJ. Images were analyzed three times by four different examiners. In addition, serial images from 10 cases were taken between 2 and 50 h postmortem to detect spontaneous pupil changes. Intra- and inter-rater reliability of the method was excellent (ICC > 0.95). The method is observer independent and yields consistent results, and images can be digitally stored and re-evaluated. The method seems highly eligible for forensic and scientific purposes. While statistical analysis of spontaneous pupil changes revealed a significant polynomial of quartic degree for postmortem time (p = 0.001), an obvious pattern was not detected. These results do not indicate suitability of spontaneous pupil changes for forensic death time estimation, as formerly suggested. © 2016 American Academy of Forensic Sciences.
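The intra- and inter-rater figures above are intraclass correlation coefficients. As a minimal illustration, here is a one-way random-effects ICC(1,1), which is simpler than the two-way forms typically reported, computed on hypothetical pupil-iris ratios:

```python
from statistics import mean

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a table of ratings,
    rows = subjects, columns = repeated measurements."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2 for row, m in zip(ratings, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical pupil-iris ratios measured twice on 5 images.
ratings = [[0.42, 0.43], [0.55, 0.54], [0.61, 0.62],
           [0.37, 0.37], [0.49, 0.50]]
print(round(icc_oneway(ratings), 3))
```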
van Vugt, Jeroen L A; Levolger, Stef; Gharbharan, Arvind; Koek, Marcel; Niessen, Wiro J; Burger, Jacobus W A; Willemsen, Sten P; de Bruin, Ron W F; IJzermans, Jan N M
2017-04-01
The association between body composition (e.g. sarcopenia or visceral obesity) and treatment outcomes, such as survival, using single-slice computed tomography (CT)-based measurements has recently been studied in various patient groups. These studies have been conducted with different software programmes, each with their specific characteristics, of which the inter-observer, intra-observer, and inter-software correlation are unknown. Therefore, a comparative study was performed. Fifty abdominal CT scans were randomly selected from 50 different patients and independently assessed by two observers. Cross-sectional muscle area (CSMA, i.e. rectus abdominis, oblique and transverse abdominal muscles, paraspinal muscles, and the psoas muscle), visceral adipose tissue area (VAT), and subcutaneous adipose tissue area (SAT) were segmented by using standard Hounsfield unit ranges and computed for regions of interest. The inter-software, intra-observer, and inter-observer agreement for CSMA, VAT, and SAT measurements using FatSeg, OsiriX, ImageJ, and sliceOmatic were calculated using intra-class correlation coefficients (ICCs) and Bland-Altman analyses. Cohen's κ was calculated for the agreement of sarcopenia and visceral obesity assessment. The Jaccard similarity coefficient was used to compare the similarity and diversity of measurements. Bland-Altman analyses and ICC indicated that the CSMA, VAT, and SAT measurements between the different software programmes were highly comparable (ICC 0.979-1.000, P < 0.001). All programmes adequately distinguished between the presence or absence of sarcopenia (κ = 0.88-0.96 for one observer and all κ = 1.00 for all comparisons of the other observer) and visceral obesity (all κ = 1.00). Furthermore, excellent intra-observer (ICC 0.999-1.000, P < 0.001) and inter-observer (ICC 0.998-0.999, P < 0.001) agreement for all software programmes were found. 
Accordingly, excellent Jaccard similarity coefficients were found for all comparisons (mean ≥ 0.964). FatSeg, OsiriX, ImageJ, and sliceOmatic showed an excellent agreement for CSMA, VAT, and SAT measurements on abdominal CT scans. Furthermore, excellent inter-observer and intra-observer agreement were achieved. Therefore, results of studies using these different software programmes can reliably be compared. © 2016 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
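The Jaccard similarity coefficient used above to compare segmentations is the intersection-over-union of the labelled pixels. A minimal sketch on tiny hypothetical masks (real masks would be full 2-D CT slices):

```python
def jaccard(mask_a, mask_b):
    """Jaccard similarity of two binary segmentation masks:
    |intersection| / |union| of the labelled pixels."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical flattened muscle masks from two software packages.
m1 = [0, 1, 1, 1, 0, 1, 0, 0, 1, 1]
m2 = [0, 1, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(jaccard(m1, m2), 3))
```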
Levolger, Stef; Gharbharan, Arvind; Koek, Marcel; Niessen, Wiro J.; Burger, Jacobus W.A.; Willemsen, Sten P.; de Bruin, Ron W.F.
2016-01-01
Abstract Background The association between body composition (e.g. sarcopenia or visceral obesity) and treatment outcomes, such as survival, using single‐slice computed tomography (CT)‐based measurements has recently been studied in various patient groups. These studies have been conducted with different software programmes, each with their specific characteristics, of which the inter‐observer, intra‐observer, and inter‐software correlation are unknown. Therefore, a comparative study was performed. Methods Fifty abdominal CT scans were randomly selected from 50 different patients and independently assessed by two observers. Cross‐sectional muscle area (CSMA, i.e. rectus abdominis, oblique and transverse abdominal muscles, paraspinal muscles, and the psoas muscle), visceral adipose tissue area (VAT), and subcutaneous adipose tissue area (SAT) were segmented by using standard Hounsfield unit ranges and computed for regions of interest. The inter‐software, intra‐observer, and inter‐observer agreement for CSMA, VAT, and SAT measurements using FatSeg, OsiriX, ImageJ, and sliceOmatic were calculated using intra‐class correlation coefficients (ICCs) and Bland–Altman analyses. Cohen's κ was calculated for the agreement of sarcopenia and visceral obesity assessment. The Jaccard similarity coefficient was used to compare the similarity and diversity of measurements. Results Bland–Altman analyses and ICC indicated that the CSMA, VAT, and SAT measurements between the different software programmes were highly comparable (ICC 0.979–1.000, P < 0.001). All programmes adequately distinguished between the presence or absence of sarcopenia (κ = 0.88–0.96 for one observer and all κ = 1.00 for all comparisons of the other observer) and visceral obesity (all κ = 1.00). Furthermore, excellent intra‐observer (ICC 0.999–1.000, P < 0.001) and inter‐observer (ICC 0.998–0.999, P < 0.001) agreement for all software programmes were found. 
Accordingly, excellent Jaccard similarity coefficients were found for all comparisons (mean ≥ 0.964). Conclusions FatSeg, OsiriX, ImageJ, and sliceOmatic showed an excellent agreement for CSMA, VAT, and SAT measurements on abdominal CT scans. Furthermore, excellent inter‐observer and intra‐observer agreement were achieved. Therefore, results of studies using these different software programmes can reliably be compared. PMID:27897414
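The agreement statistics used in the study above are straightforward to reproduce. A minimal Python sketch of Bland–Altman 95% limits of agreement and the Jaccard similarity coefficient, using illustrative data rather than the study's measurements:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def jaccard(mask1, mask2):
    """Jaccard similarity of two binary segmentation masks
    (intersection over union)."""
    m1, m2 = np.asarray(mask1, bool), np.asarray(mask2, bool)
    union = np.logical_or(m1, m2).sum()
    return np.logical_and(m1, m2).sum() / union if union else 1.0
```

A mean Jaccard coefficient ≥ 0.964, as reported, means the segmented regions produced by the different programmes overlapped almost completely.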
Test-retest reliability of 3D ultrasound measurements of the thoracic spine.
Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian
2012-05-01
To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receivers. The real angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard errors of measurement were used for statistical analyses. The test-retest reliability in this study was measured within a 24-hour interval. Statistical parameters were used to judge reliability. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinically acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measurement of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours.
Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
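The ICCs quoted in these reliability studies are typically the two-way random-effects, absolute-agreement, single-measures form, ICC(2,1). A minimal NumPy sketch of that formula (an illustration, not the statistics package used in the study):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data: n_subjects x k_raters matrix of measurements."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For a perfect-agreement matrix the ICC is 1.0; a constant offset between raters lowers it, which is why absolute-agreement ICCs penalize systematic bias as well as noise.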
Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
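The path-based reliability computation the paper describes can be illustrated on a toy component probabilistic dependency graph. The graph below is hypothetical, not taken from the paper; each node carries a component reliability and outgoing transition probabilities, and system reliability is accumulated over execution paths with an explicit stack:

```python
# Hypothetical CPDG: node -> (component reliability, [(next_node, transition_prob), ...])
cpdg = {
    "A": (0.99, [("B", 0.6), ("C", 0.4)]),
    "B": (0.98, [("D", 1.0)]),
    "C": (0.97, [("D", 1.0)]),
    "D": (0.99, []),  # terminal node
}

def system_reliability(graph, start):
    """Sum, over all execution paths, of the product of component
    reliabilities weighted by transition probabilities (explicit stack)."""
    total = 0.0
    stack = [(start, 1.0)]
    while stack:
        node, acc = stack.pop()
        rel, edges = graph[node]
        acc *= rel                      # surviving this component
        if not edges:                   # end of an execution path
            total += acc
        for nxt, p in edges:
            stack.append((nxt, acc * p))
    return total
```

Note that a real CPDG with loops needs the loop entry/exit handling the paper emphasizes; this sketch assumes an acyclic graph.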
75 FR 4375 - Transmission Loading Relief Reliability Standard and Curtailment Priorities
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-27
... Site: http://www.ferc.gov . Documents created electronically using word processing software should be... ensure operation within acceptable reliability criteria. NERC Glossary of Terms Used in Reliability Standards at 19, available at http://www.nerc.com/files/Glossary_12Feb08.pdf (NERC Glossary). An...
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the area of RMSL have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to Industry & Government - Case studies of typical problems encountering uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable for which problems; (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data, and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to Industry & Government - Too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true: there are ways to do credible probabilistic analysis with little data; (3) Probabilistic Reliability - a probabilistic reliability literature search has been completed, along with what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to Industry & Government - Correct reliability computations at both the component and system level are needed so one can design an item based on its expected usage and life span; (4) Real World Applications of Probabilistic Methods (PM) - A draft of Volume 1, comprising aerospace applications, has been released.
Volume 2, a compilation of real world applications of probabilistic methods with essential information demonstrating application type and time/cost savings by the use of probabilistic methods for generic applications, is in progress. Relevance to Industry & Government - Too often, we say, 'The Proof is in the Pudding'. With help from many contributors, we hope to produce such a document. The problem is that not many people are coming forward, due to the proprietary nature of the data. So, we are asking contributors to document only minimum information: problem description, which method was used, whether it resulted in any savings, and how much; (5) Software Reliability - software reliability concept, program, implementation, guidelines, and standards are being documented. Relevance to Industry & Government - software reliability is a complex issue that must be understood & addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability; (6) Maintainability Standards - maintainability/serviceability industry standards/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to Industry & Government - Any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance & life.
Negoita, Madalina; Zolgharni, Massoud; Dadkho, Elham; Pernigo, Matteo; Mielewczik, Michael; Cole, Graham D; Dhutia, Niti M; Francis, Darrel P
2016-09-01
To determine the optimal frame rate at which reliable heart wall velocities can be assessed by speckle tracking. Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution at which reliable results can be obtained. 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking, up to 40 frames/s, above which there is little further increase in measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
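The underestimation mechanism described above (averaging over an inter-frame interval that contains the velocity peak) is easy to demonstrate numerically. A small Python sketch with a synthetic velocity trace; the waveform and rates are illustrative, not the study's data:

```python
import numpy as np

# Toy velocity trace sampled at 100 frames/s: a narrow Gaussian peak
t = np.arange(0, 1, 0.01)
v = np.exp(-((t - 0.5) ** 2) / (2 * 0.02 ** 2))  # true peak velocity = 1.0

def peak_at_frame_rate(v, factor):
    """Average consecutive samples to emulate a `factor`-fold lower frame
    rate, then take the maximum over the averaged intervals."""
    n = len(v) // factor * factor
    return v[:n].reshape(-1, factor).mean(axis=1).max()
```

At the full rate the peak is recovered almost exactly; after 10-fold frame dropping the averaged interval smears the narrow peak and the measured maximum falls well below the true value.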
Fayed, Nicolas; Modrego, Pedro J; Medrano, Jaime
2009-06-01
Reproducibility is an essential strength of any diagnostic technique for cross-sectional and longitudinal studies. The short-term in vivo test-retest reliability of magnetic resonance spectroscopy (MRS) of the brain was compared between the manufacturer's software package and the widely used LCModel (linear combination model) technique. Single-voxel H-MRS was performed in a series of patients with different pathologies on a 1.5 T clinical scanner. Four areas of the brain were explored with the point resolved spectroscopy acquisition mode; the echo time was 35 milliseconds and the repetition time was 2000 milliseconds. We enrolled 15 patients for every area, and the intra-individual variations of metabolites were studied in two consecutive scans without removing the patient from the scanner. Curve fitting and analysis of metabolites were performed with the GE software and the LCModel. Spectra not fulfilling the minimum quality criteria for linewidth and signal/noise ratio were rejected. The intraclass correlation coefficients for the N-acetylaspartate/creatine (NAA/Cr) ratios were 0.93, 0.89, 0.9 and 0.8 for the posterior cingulate gyrus, occipital, prefrontal and temporal regions, respectively, with the GE software. For the LCModel, the coefficients were 0.9, 0.89, 0.87 and 0.84, respectively. For the absolute value of NAA, the GE software was also slightly more reproducible than the LCModel. However, for the choline/Cr and myo-inositol/Cr ratios, the LCModel was more reliable than the GE software. The variability we observed hovers around the percentages reported previously (around 10% for the NAA/Cr ratios). We did not find the LCModel software to be superior to the manufacturer's software. Reproducibility of metabolite values depends more on observance of the quality parameters than on the software used.
Bower, Kelly J.; McGinley, Jennifer L.; Miller, Kimberly J.; Clark, Ross A.
2014-01-01
Background and Objectives The Wii Balance Board (WBB) is a globally accessible device that shows promise as a clinically useful balance assessment tool. Although the WBB has been found to be comparable to a laboratory-grade force platform for obtaining centre of pressure data, it has not been comprehensively studied in clinical populations. The aim of this study was to investigate the measurement properties of tests utilising the WBB in people after stroke. Methods Thirty individuals who were more than three months post-stroke and able to stand unsupported were recruited from a single outpatient rehabilitation facility. Participants performed standardised assessments incorporating the WBB and customised software (static stance with eyes open and closed, static weight-bearing asymmetry, dynamic mediolateral weight shifting and dynamic sit-to-stand) in addition to commonly employed clinical tests (10 Metre Walk Test, Timed Up and Go, Step Test and Functional Reach) on two testing occasions one week apart. Test-retest reliability and construct validity of the WBB tests were investigated. Results All WBB-based outcomes were found to be highly reliable between testing occasions (ICC = 0.82 to 0.98). Correlations were poor to moderate between WBB variables and clinical tests, with the strongest associations observed between task-related activities, such as WBB mediolateral weight shifting and the Step Test. Conclusions The WBB, used with customised software, is a reliable and potentially useful tool for the assessment of balance and weight-bearing asymmetry following stroke. Future research is recommended to further investigate validity and responsiveness. PMID:25541939
A novel method to measure femoral component migration by computed tomography: a cadaver study.
Boettner, Friedrich; Sculco, Peter; Lipman, Joseph; Renner, Lisa; Faschingbauer, Martin
2016-06-01
Radiostereometric analysis (RSA) is the most accurate technique for measuring implant migration. However, it requires special equipment, technical expertise, and analysis software, and has not gained wide acceptance. The current paper analyzes a novel method to measure implant migration utilizing widely available computed tomography (CT). Three uncemented total hip replacements were performed in three human cadavers, and six tantalum beads were inserted into the femoral bone, similar to RSA. Six different 28 mm heads (-3, 0, 2.5, 5.0, 7.5 and 10 mm) were added to simulate five reproducible translations (maximum total point migration) of the center of the head. Implant migration was measured in a 3-D analysis software (Geomagic Studio 7). Repeat manual reconstructions of the center of the head were performed by two investigators to determine repeatability and accuracy. The accuracy of measurements between the centers of two head sizes was 0.11 mm with a 95% CI of 0.22 mm. The intra-observer repeatability was 0.13 mm (95% CI 0.25 mm). The inter-rater reliability was 0.943. CT-based measurements of head displacement in a cadaver model were highly accurate and reproducible.
Tondare, Vipin N; Villarrubia, John S; Vladár, András E
2017-10-01
Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
Improvement of Computer Software Quality through Software Automated Tools.
1986-08-31
requirement for increased emphasis on software quality assurance has led to the creation of various methods of verification and validation. Experience...result was a vast array of methods, systems, languages and automated tools to assist in the process. Given that the primary role of quality assurance is...Unfortunately, there is no single method, tool or technique that can ensure accurate, reliable and cost effective software. Therefore, government and industry
2014-08-01
technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner. 1.3 Security Example...that took three high-voltage lines out of service and a software failure (a race condition) that disabled the computing service that notified the... service had failed. Instead of analyzing the details of the alarm server failure, the reviewers asked why the following software assurance claim had
The research and practice of spacecraft software engineering
NASA Astrophysics Data System (ADS)
Chen, Chengxin; Wang, Jinghua; Xu, Xiaoguang
2017-06-01
To ensure the safety and reliability of spacecraft software products, engineering management must be applied. The paper first reviews problems in domestic and foreign spacecraft software engineering management: unsystematic planning, unclear classified management, and the lack of a continuous improvement mechanism. It then proposes a solution for software engineering management based on a system-integrated approach from the perspective of the spacecraft system. Finally, an application to a spacecraft is given as an example. The research provides a reference for spacecraft software engineering management and for improving software product quality.
NASA Technical Reports Server (NTRS)
Simmons, D. B.; Marchbanks, M. P., Jr.; Quick, M. J.
1982-01-01
The results of an effort to thoroughly and objectively analyze the statistical and historical information gathered during the development of the Shuttle Orbiter Primary Flight Software are given. The particular areas of interest include the cost of the software, its reliability, its requirements, and how those requirements changed during development of the system. Data related to the current version of the software system produced some interesting results. Suggestions are made for capturing additional data that would allow further investigation.
The reliable multicast protocol application programming interface
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Whetten, Brian
1995-01-01
The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
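CARES/Life's reliability evaluation is built on Weibull strength statistics. A minimal sketch of the two-parameter Weibull failure probability (parameters are illustrative; the actual code also integrates over the stressed volume and models slow crack growth):

```python
import math

def weibull_failure_probability(stress, scale, modulus):
    """Two-parameter Weibull probability of failure at a given stress.
    scale   = characteristic strength sigma_0
    modulus = Weibull modulus m (scatter of the strength distribution)"""
    return 1.0 - math.exp(-((stress / scale) ** modulus))
```

At a stress equal to the characteristic strength the failure probability is 1 − e⁻¹ ≈ 63.2%, and a high Weibull modulus makes the transition from safe to failed very steep, which is why small uncertainties in load matter so much for ceramics.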
Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás
2012-10-01
For many decades, visualization and evaluation of three-dimensional (3D) spinal deformities have only been possible by two-dimensional (2D) radiodiagnostic methods, and as a result, characterization and classification were based on 2D terminologies. Recent developments in medical digital imaging and 3D visualization techniques including surface 3D reconstructions opened a chance for a long-sought change in this field. Supported by a 3D Terminology on Spinal Deformities of the Scoliosis Research Society, an approach for 3D measurements and a new 3D classification of scoliosis yielded several compelling concepts on 3D visualization and new proposals for 3D classification in recent years. More recently, a new proposal for visualization and complete 3D evaluation of the spine by 3D vertebra vectors has been introduced by our workgroup, a concept, based on EOS 2D/3D, a groundbreaking new ultralow radiation dose integrated orthopedic imaging device with sterEOS 3D spine reconstruction software. Comparison of accuracy, correlation of measurement values, intraobserver and interrater reliability of methods by conventional manual 2D and vertebra vector-based 3D measurements in a routine clinical setting. Retrospective, nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol of eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years) including 10 healthy athletes with normal spine and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). Overall range of coronal curves was between 2.4 and 117.5°. 
Analysis of accuracy and reliability of measurements was carried out on a group of all patients and in subgroups based on coronal plane deviation: 0 to 10° (Group 1; n=36), 10 to 25° (Group 2; n=25), 25 to 50° (Group 3; n=69), 50 to 75° (Group 4; n=49), and above 75° (Group 5; n=22). All study subjects were examined by EOS 2D imaging, resulting in anteroposterior (AP) and lateral (LAT) full spine, orthogonal digital X-ray images, in standing position. Conventional coronal and sagittal curvature measurements including sagittal L5 vertebra wedges were determined by 3 experienced examiners, using traditional Cobb methods on EOS 2D AP and LAT images. Vertebra vector-based measurements were performed as published earlier, based on computer-assisted calculations of corresponding spinal curvature. Vertebra vectors were generated by dedicated software from sterEOS 3D spine models reconstructed from EOS 2D images by the same three examiners. Manual measurements were performed by each examiner, thrice for sterEOS 3D reconstructions and twice for vertebra vector-based measurements. Means comparison t test, Pearson bivariate correlation analysis, reliability analysis by intraclass correlation coefficients for intraobserver reproducibility and interrater reliability were performed using SPSS v16.0 software. In comparison with manual 2D methods, only small and nonsignificant differences were detectable in vertebra vector-based curvature data for coronal curves and thoracic kyphosis, whereas the found difference in L1-L5 lordosis values was shown to be strongly related to the magnitude of corresponding L5 wedge. Intraobserver reliability was excellent for both methods, and interrater reproducibility was consistently higher for vertebra vector-based methods that was also found to be unaffected by the magnitude of coronal curves or sagittal plane deviations. 
Vertebra vector-based angulation measurements could fully substitute conventional manual 2D measurements, with similar accuracy and higher intraobserver reliability and interrater reproducibility. Vertebra vectors represent a truly 3D solution for clear and comprehensible 3D visualization of spinal deformities while preserving crucial parametric information for vertebral size, 3D position, orientation, and rotation. The concept of vertebra vectors may serve as a starting point to a valid and clinically useful alternative for a new 3D classification of scoliosis. Copyright © 2012 Elsevier Inc. All rights reserved.
Multiple-Parameter, Low-False-Alarm Fire-Detection Systems
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Greensburg, Paul; McKnight, Robert; Xu, Jennifer C.; Liu, C. C.; Dutta, Prabir; Makel, Darby; Blake, D.; Sue-Antillio, Jill
2007-01-01
Fire-detection systems incorporating multiple sensors that measure multiple parameters are being developed for use in storage depots, cargo bays of ships and aircraft, and other locations not amenable to frequent, direct visual inspection. These systems are intended to improve upon conventional smoke detectors, now used in such locations, that reliably detect fires but also frequently generate false alarms: for example, conventional smoke detectors based on the blockage of light by smoke particles are also affected by dust particles and water droplets and, thus, are often susceptible to false alarms. In contrast, by utilizing multiple parameters associated with fires, i.e. not only obscuration by smoke particles but also concentrations of multiple chemical species that are commonly generated in combustion, false alarms can be significantly decreased while still detecting fires as reliably as older smoke-detector systems do. The present development includes fabrication of sensors that have, variously, micrometer- or nanometer-sized features so that such multiple sensors can be integrated into arrays that have sizes, weights, and power demands smaller than those of older macroscopic sensors. The sensors include resistors, electrochemical cells, and Schottky diodes that exhibit different sensitivities to the various airborne chemicals of interest. In a system of this type, the sensor readings are digitized and processed by advanced signal-processing hardware and software to extract such chemical indications of fires as abnormally high concentrations of CO and CO2, possibly in combination with H2 and/or hydrocarbons. The system also includes a microelectromechanical systems (MEMS)-based particle detector and classifier device to increase the reliability of measurements of chemical species and particulates. In parallel research, software for modeling the evolution of a fire within an aircraft cargo bay has been developed. 
The model implemented in the software can describe the concentrations of chemical species and of particulate matter as functions of time. A system of the present developmental type and a conventional fire detector were tested under both fire and false-alarm conditions in a Federal Aviation Administration cargo-compartment- testing facility. Both systems consistently detected fires. However, the conventional fire detector consistently generated false alarms, whereas the developmental system did not generate any false alarms.
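The multi-parameter decision logic described above can be caricatured in a few lines: require both a particulate signature and a chemical signature of combustion before alarming. All thresholds below are hypothetical, purely for illustration:

```python
def fire_alarm(obscuration, co_ppm, co2_ppm):
    """Hypothetical multi-parameter vote: alarm only when smoke obscuration
    AND a chemical signature of combustion are both present, suppressing
    dust/water-droplet false positives (thresholds are illustrative)."""
    smoke = obscuration > 0.1               # fraction of light blocked
    chemistry = co_ppm > 25 and co2_ppm > 1500
    return smoke and chemistry
```

Dust or water droplets raise obscuration alone, so the chemical gate suppresses the false alarm; a real system of this type would use trained classifiers over the full sensor array rather than fixed thresholds.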
Aslam, Tariq M; Tahir, Humza J; Parry, Neil R A; Murray, Ian J; Kwak, Kun; Heyes, Richard; Salleh, Mahani M; Czanner, Gabriela; Ashworth, Jane
2016-10-01
To report on the utility of a computer tablet-based method for automated testing of visual acuity in children based on the principles of game design. We describe the testing procedure and present repeatability as well as agreement of the score with accepted visual acuity measures. Reliability and validity study. Setting: Manchester Royal Eye Hospital Pediatric Ophthalmology Outpatients Department. Total of 112 sequentially recruited patients. For each patient 1 eye was tested with the Mobile Assessment of Vision by intERactIve Computer for Children (MAVERIC-C) system, consisting of a software application running on a computer tablet, housed in a bespoke viewing chamber. The application elicited touch screen responses using a game design to encourage compliance and automatically acquire visual acuity scores of participating patients. Acuity was then assessed by an examiner with a standard chart-based near ETDRS acuity test before the MAVERIC-C assessment was repeated. Reliability of MAVERIC-C near visual acuity score and agreement of MAVERIC-C score with the near ETDRS chart for visual acuity. Altogether, 106 children (95%) completed the MAVERIC-C system without assistance. The vision scores demonstrated satisfactory reliability, with test-retest VA scores having a mean difference of 0.001 (SD ±0.136) and 2 SD limits of agreement (LOA) of ±0.267. Comparison with the near ETDRS chart showed agreement with a mean difference of -0.0879 (±0.106) with LOA of ±0.208. This study demonstrates promising utility for software using a game design to enable automated testing of acuity in children with ophthalmic disease in an objective and accurate manner. Copyright © 2016 Elsevier Inc. All rights reserved.
Computed gray levels in multislice and cone-beam computed tomography.
Azeredo, Fabiane; de Menezes, Luciane Macedo; Enciso, Reyes; Weissheimer, Andre; de Oliveira, Rogério Belle
2013-07-01
Gray level is the range of shades of gray in the pixels, representing the x-ray attenuation coefficient that allows for tissue density assessments in computed tomography (CT). An in-vitro study was performed to investigate the relationship between computed gray levels in 3 cone-beam CT (CBCT) scanners and 1 multislice spiral CT device using 5 software programs. Six materials (air, water, wax, acrylic, plaster, and gutta-percha) were scanned with the CBCT and CT scanners, and the computed gray levels for each material at predetermined points were measured with OsiriX Medical Imaging software (Geneva, Switzerland), OnDemand3D (CyberMed International, Seoul, Korea), E-Film (Merge Healthcare, Milwaukee, Wis), Dolphin Imaging (Dolphin Imaging & Management Solutions, Chatsworth, Calif), and InVivo Dental Software (Anatomage, San Jose, Calif). The repeatability of these measurements was calculated with intraclass correlation coefficients, and the gray levels were averaged to represent each material. Repeated analysis of variance tests were used to assess the differences in gray levels among scanners and materials. There were no differences in mean gray levels with the different software programs. There were significant differences in gray levels between scanners for each material evaluated (P <0.001). The software programs were reliable and had no influence on the CT and CBCT gray level measurements. However, the gray levels might have discrepancies when different CT and CBCT scanners are used. Therefore, caution is essential when interpreting or evaluating CBCT images because of the significant differences in gray levels between different CBCT scanners, and between CBCT and CT values. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Building a Library Web Server on a Budget.
ERIC Educational Resources Information Center
Orr, Giles
1998-01-01
Presents a method for libraries with limited budgets to create reliable Web servers with existing hardware and free software available via the Internet. Discusses staff, hardware and software requirements, and security; outlines the assembly process. (PEN)
Enhanced CARES Software Enables Improved Ceramic Life Prediction
NASA Technical Reports Server (NTRS)
Janosik, Lesley A.
1997-01-01
The NASA Lewis Research Center has developed award-winning software that enables American industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, graphite) structures in a wide variety of 21st century applications. The CARES (Ceramics Analysis and Reliability Evaluation of Structures) series of software is successfully used by numerous engineers in industrial, academic, and government organizations as an essential element of the structural design and material selection processes. The latest version of this software, CARES/Life, provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. CARES/Life was recently enhanced by adding new modules designed to improve functionality and user-friendliness. In addition, a beta version of the newly-developed CARES/Creep program (for determining the creep life of monolithic ceramic components) has just been released to selected organizations.
Expert system verification and validation study. Delivery 3A and 3B: Trip summaries
NASA Technical Reports Server (NTRS)
French, Scott
1991-01-01
Key results are documented from attending the 4th workshop on verification, validation, and testing. The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how that might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods: applying mathematical techniques to verification of rule bases and techniques for capturing information relating to the process of developing software. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation.' The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.
A real-time monitoring system for the facial nerve.
Prell, Julian; Rachinger, Jens; Scheller, Christian; Alfieri, Alex; Strauss, Christian; Rampp, Stefan
2010-06-01
Damage to the facial nerve during surgery in the cerebellopontine angle is indicated by A-trains, a specific electromyogram pattern. These A-trains can be quantified by the parameter "traintime," which is reliably correlated with postoperative functional outcome. The system presented was designed to monitor traintime in real-time. A dedicated hardware and software platform for automated continuous analysis of the intraoperative facial nerve electromyogram was specifically designed. The automatic detection of A-trains is performed by a software algorithm for real-time analysis of nonstationary biosignals. The system was evaluated in a series of 30 patients operated on for vestibular schwannoma. A-trains can be detected and measured automatically by the described method for real-time analysis. Traintime is monitored continuously via a graphic display and is shown as an absolute numeric value during the operation. It is an expression of overall, cumulated length of A-trains in a given channel; a high correlation between traintime as measured by real-time analysis and functional outcome immediately after the operation (Spearman correlation coefficient [rho] = 0.664, P < .001) and in long-term outcome (rho = 0.631, P < .001) was observed. Automated real-time analysis of the intraoperative facial nerve electromyogram is the first technique capable of reliable continuous real-time monitoring. It can critically contribute to the estimation of functional outcome during the course of the operative procedure.
Design of a developmental dual fail operational redundant strapped down inertial measurement unit
NASA Technical Reports Server (NTRS)
Morrell, F. R.; Russell, J. G.
1980-01-01
An experimental redundant strap-down inertial measurement unit (RSDIMU) is being developed at NASA-Langley as a link to satisfy safety and reliability considerations in the integrated avionics concept. The unit consists of four two-degrees-of-freedom (TDOF) tuned-rotor gyros, and four TDOF pendulous accelerometers in a skewed and separable semi-octahedron array. The system will be used to examine failure detection and isolation techniques, redundancy management rules, and optimal threshold levels for various flight configurations. The major characteristics of the RSDIMU hardware and software design, and its use as a research tool are described.
Tolerancing aspheres based on manufacturing knowledge
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Kokot, S.; Fuchs, U.
2017-10-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these analyses all rely on statistics, which usually means several hundred or several thousand trial systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The large database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.
Tolerancing aspheres based on manufacturing statistics
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Möhl, A.; Fuchs, U.
2017-11-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these analyses all rely on statistics, which usually means several hundred or several thousand trial systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The large database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.
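The Monte Carlo tolerancing approach both abstracts describe can be sketched in a few lines: draw manufacturing errors from an assumed (here, normal) distribution, evaluate a merit criterion per trial, and estimate the yield. This is a minimal single-parameter illustration with hypothetical names and spec values, not the workflow of any particular optical design package.

```python
import random
import statistics

def mc_tolerance_yield(nominal_focal, error_sd, n_trials=5000, spec_limit=0.01):
    """Estimate the fraction of as-built lenses meeting a focal-length spec.

    Perturbs a single (illustrative) focal-length error drawn from a normal
    distribution; real analyses perturb many surface, thickness, and material
    parameters simultaneously.
    """
    passes = 0
    errors = []
    for _ in range(n_trials):
        # Draw one manufacturing error from the assumed distribution.
        focal_error = random.gauss(0.0, error_sd)
        errors.append(focal_error)
        # Trial passes if the relative focal-length error is within spec.
        if abs(focal_error) / nominal_focal <= spec_limit:
            passes += 1
    return passes / n_trials, statistics.stdev(errors)
```

Because the yield estimate only stabilizes with hundreds or thousands of trials, this sketch also illustrates the abstracts' point: for small production batches, the statistics behind such an analysis are unreliable unless the assumed distributions are replaced with measured ones.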
Improving the Effectiveness of Program Managers
2006-05-03
Presentation at the Systems and Software Technology Conference, Salt Lake City, Utah, May 3, 2006, presented by GAO. Topics recoverable from the slides include: companies' best practices (Motorola, Caterpillar, Toyota, FedEx, NCR Teradata, Boeing, Hughes Space and Communications); disciplined software and management practices; total ownership costs; collection of metrics data to improve software reliability; and technology readiness levels and design maturity.
Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás
2014-02-01
Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athlete's vertical jumps.
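The flight-time-to-height conversion underlying such studies follows from projectile kinematics: assuming the center of mass takes off and lands at the same height, h = g·t²/8. A minimal sketch (function names are illustrative, not taken from the study's software):

```python
def jump_height_from_flight_time(flight_time_s, g=9.81):
    """Vertical jump height (m) from flight time (s): h = g * t**2 / 8,
    valid when take-off and landing heights are equal."""
    return g * flight_time_s ** 2 / 8.0

def flight_time_from_frames(n_frames, fps=240):
    """Flight time (s) from a frame count at the camera's capture rate
    (the study recorded at 240 fps)."""
    return n_frames / fps
```

For example, a flight phase spanning 120 frames at 240 fps is 0.5 s, giving a jump height of about 0.31 m; this is the arithmetic an observer implicitly performs when converting Kinovea frame counts into heights.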
Reliability of a smartphone-based goniometer for knee joint goniometry.
Ferriero, Giorgio; Vercelli, Stefano; Sartorio, Francesco; Muñoz Lasa, Susana; Ilieva, Elena; Brigatti, Elisa; Ruella, Carolina; Foti, Calogero
2013-06-01
The aim of this study was to assess the reliability of a smartphone-based application developed for photographic-based goniometry, DrGoniometer (DrG), by comparing its measurement of the knee joint angle with that made by a universal goniometer (UG). Joint goniometry is a common mode of clinical assessment used in many disciplines, in particular in rehabilitation. One validated method is photographic-based goniometry, but the procedure is usually complex: the image has to be downloaded from the camera to a computer and then edited using dedicated software. This disadvantage may be overcome by the new generation of mobile phones (smartphones) that have computer-like functionality and an integrated digital camera. This validation study was carried out under two different controlled conditions: (i) with the participant to measure in a fixed position and (ii) with a battery of pictures to assess. In the first part, four raters performed repeated measurements with DrG and UG at different knee joint angles. Then, 10 other raters measured the knee at different flexion angles ranging 20-145° on a battery of 35 pictures taken in a clinical setting. The results showed that inter-rater and intra-rater correlations were always more than 0.958. Agreement with the UG showed a width of 18.2° [95% limits of agreement (LoA)=-7.5/+10.7°] and 14.1° (LoA=-6.6/+7.5°). In conclusion, DrG seems to be a reliable method for measuring knee joint angle. This mHealth application can be an alternative/additional method of goniometry, easier to use than other photographic-based goniometric assessments. Further studies are required to assess its reliability for the measurement of other joints.
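Photographic goniometry of the kind DrGoniometer implements reduces to computing the angle at a joint landmark from two limb-segment landmarks in the image. A minimal vector-based sketch (this is the standard geometry, not DrGoniometer's actual code; point names are illustrative):

```python
import math

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the segments toward `proximal`
    and `distal` landmarks, given (x, y) pixel coordinates."""
    v1 = (proximal[0] - joint[0], proximal[1] - joint[1])
    v2 = (distal[0] - joint[0], distal[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    # Normalize by segment lengths so only direction matters.
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))
```

A fully extended knee (hip, knee, and ankle collinear) yields 180 degrees; clinical flexion angles are typically reported as the deviation from that extended position.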
Software Implements a Space-Mission File-Transfer Protocol
NASA Technical Reports Server (NTRS)
Rundstrom, Kathleen; Ho, Son Q.; Levesque, Michael; Sanders, Felicia; Burleigh, Scott; Veregge, John
2004-01-01
CFDP is a computer program that implements the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol, which is an international standard for automatic, reliable transfers of files of data between locations on Earth and in outer space. CFDP administers concurrent file transfers in both directions, delivery of data out of transmission order, reliable and unreliable transmission modes, and automatic retransmission of lost or corrupted data by use of one or more of several lost-segment-detection modes. The program also implements several data-integrity measures, including file checksums and optional cyclic redundancy checks for each protocol data unit. The metadata accompanying each file can include messages to users' application programs and commands for operating on remote file systems.
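The file checksum mentioned above can be illustrated with the modular checksum style defined for CFDP: file octets are grouped into 4-byte big-endian words (zero-padded at the end) and summed modulo 2^32. This is a sketch for illustration; consult the CCSDS 727.0-B specification for the normative definition, including offset alignment for partial-file segments.

```python
def cfdp_modular_checksum(data: bytes) -> int:
    """32-bit modular checksum over a byte string: sum of 4-byte
    big-endian words, modulo 2**32, with the final partial word
    zero-padded on the right."""
    total = 0
    for i in range(0, len(data), 4):
        word = data[i:i + 4].ljust(4, b"\x00")  # pad the final partial word
        total = (total + int.from_bytes(word, "big")) % (1 << 32)
    return total
```

The receiver recomputes the checksum over the delivered file and compares it with the value carried in the sender's end-of-file report; a mismatch triggers the protocol's retransmission machinery.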
In-Vivo Human Skin to Textiles Friction Measurements
NASA Astrophysics Data System (ADS)
Pfarr, Lukas; Zagar, Bernhard
2017-10-01
We report on a measurement system to determine highly reliable and accurate friction properties of textiles, as needed, for example, as input to garment simulation software. Our investigations led to a set-up that allows characterizing not just textile-to-textile but also textile-to-in-vivo-human-skin tribological properties, and thus yields fundamental knowledge about genuine wearer interaction in garments. The test method conveyed in this paper measures, concurrently and with high time resolution, the normal force as well as the resulting shear force caused by a friction subject sliding out of the static friction regime and into the dynamic regime on a test bench. Deeper analysis of various influences is enabled by extending the simple Coulomb model for rigid-body friction to include further essential parameters such as contact force, predominant yarn orientation, and skin hydration. This easy-to-use system enables reliable and reproducible measurement of both static and dynamic friction for a variety of friction partners, including human skin with all its variability.
Axial linear patellar displacement: a new measurement of patellofemoral congruence.
Urch, Scott E; Tritle, Benjamin A; Shelbourne, K Donald; Gray, Tinker
2009-05-01
The tools for measuring the congruence angle with digital radiography software can be difficult to use; therefore, the authors sought to develop a new, easy, and reliable method for measuring patellofemoral congruence. The hypothesis was that the linear displacement measurement would correlate well with the congruence angle measurement. Cohort study (diagnosis); Level of evidence, 2. On Merchant view radiographs obtained digitally, the authors measured the congruence angle and a new linear displacement measurement on preoperative and postoperative radiographs of 31 patients who suffered unilateral patellar dislocations and 100 uninjured subjects. The linear displacement measurement was obtained by drawing a reference line across the medial and lateral trochlear facets. Perpendicular lines were drawn from the depth of the sulcus through the reference line and from the apex of the posterior tip of the patella through the reference line. The distance between the perpendicular lines was the linear displacement measurement. The measurements were obtained twice at different sittings. The observer was blinded to the previous measurements to establish reliability. Measurements were compared to determine whether the linear displacement measurement correlated with the congruence angle. Intraobserver reliability was above r(2) = .90 for all measurements. In patients with patellar dislocations, the mean congruence angle preoperatively was 33.5 degrees, compared with 12.1 mm for linear displacement (r(2) = .92). The mean congruence angle postoperatively was 11.2 degrees, compared with 4.0 mm for linear displacement (r(2) = .89). For normal subjects, the mean congruence angle was -3 degrees and the mean linear displacement was 0.2 mm.
The linear displacement measurement was found to correlate with congruence angle measurements and may be an easy and useful tool for clinicians to evaluate patellofemoral congruence objectively.
New method of noncontact temperature measurement in on-line textile production
NASA Astrophysics Data System (ADS)
Cheng, Xianping; Song, Xing-Li; Deng, Xing-Zhong
1993-09-01
Based on the conditions of textile production, infrared non-contact temperature measurement is adopted in the heat-setting and drying heat-treatment processes. The method is used to monitor the moving cloth, and the temperature of the cloth is displayed rapidly and accurately. The principle of the temperature measurement is analyzed theoretically in this paper, and mathematical analysis and calculation are used to introduce the signal-transmission method. By combining software with hardware, the temperature is corrected and compensated with the aid of a single-chip microcomputer. Test results indicate that the temperature measurement instrument provides reliable parameters for quality control and is an important measure for improving product quality.
Vasak, Christoph; Watzak, Georg; Gahleitner, André; Strbac, Georg; Schemper, Michael; Zechner, Werner
2011-10-01
This prospective study was intended to evaluate the overall deviation in a clinical treatment setting to provide for quantification of the potential impairment of treatment safety and reliability with computer-assisted, template-guided transgingival implantation. The patient population enrolled (male/female=10/8) presented with partially dentate and edentulous maxillae and mandibles. Overall, 86 implants were placed by two experienced dental surgeons strictly following the NobelGuide™ protocol for template-guided implantation. All patients had a postoperative computed tomography (CT) scan with identical settings to the preoperative examination. Using the triple scan technique, pre- and postoperative CT data were merged in the Procera planning software, a newly developed procedure, initially presented in 2007, allowing measurement of the deviations at implant shoulder and apex. The deviations measured were an average of 0.43 mm (bucco-lingual), 0.46 mm (mesio-distal) and 0.53 mm (depth) at the level of the implant shoulder and slightly higher at the implant apex with an average of 0.7 mm (bucco-lingual), 0.63 mm (mesio-distal) and 0.52 mm (depth). The maximum deviation of 2.02 mm was encountered in the corono-apical direction. Significantly lower deviations were seen for implants in the anterior region vs. the posterior tooth region (P<0.01, 0.31 vs. 0.5 mm), and deviations were also significantly lower in the mandible than in the maxilla (P=0.04, 0.36 vs. 0.45 mm) in the mesio-distal direction. Moreover, a significant correlation between deviation and mucosal thickness was seen and a learning effect was found over the time period of performance of the surgical procedures. Template-guided implantation will ensure reliable transfer of preoperative computer-assisted planning into surgical practice.
With regard to the required verification of treatment reliability of an implantation system with flapless access, all maximum deviations measured in this clinical study were within the safety margins recommended by the planning software. © 2011 John Wiley & Sons A/S.
Barthassat, Emilienne; Afifi, Faik; Konala, Praveen; Rasch, Helmut; Hirschmann, Michael T
2017-05-08
It was the primary purpose of our study to evaluate the inter- and intra-observer reliability of a standardized SPECT/CT algorithm for evaluating patients with painful primary total hip arthroplasty (THA). The secondary purpose was a comparison of semi-quantitative and 3D volumetric quantification methods for assessment of bone tracer uptake (BTU) in those patients. A novel SPECT/CT localization scheme consisting of 14 femoral and 4 acetabular regions on standardized axial and coronal slices was introduced and evaluated in terms of inter- and intra-observer reliability in 37 consecutive patients with hip pain after THA. BTU for each anatomical region was assessed semi-quantitatively using a color-coded Likert-type scale (0-10) and volumetrically quantified using a validated software. Two observers interpreted the SPECT/CT findings in all patients two times with a six-week interval between interpretations, in random order. Semi-quantitative and quantitative measurements were compared in terms of reliability. In addition, the values were correlated using Pearson's correlation. A factorial cluster analysis of BTU was performed to identify clinically relevant regions, which should be grouped and analysed together. The localization scheme showed high inter- and intra-observer reliabilities for all femoral and acetabular regions independent of the measurement method used (semi-quantitative versus 3D volumetric quantitative measurements). A high to moderate correlation between both measurement methods was shown for the distal femur, the proximal femur and the acetabular cup. The factorial cluster analysis showed that the anatomical regions might be summarized into three distinct anatomical regions. These were the proximal femur, the distal femur and the acetabular cup region. The SPECT/CT algorithm for assessment of patients with pain after THA is highly reliable independent from the measurement method used.
Three clinically relevant anatomical regions (proximal femoral, distal femoral, acetabular) were identified.
1988-12-01
The software development scene is often characterized by schedule and cost estimates that are grossly inaccurate...the models discussed include c. the SPQR Model (Jones) and d. COPMO (Thebaut)...T. Capers Jones has developed a software cost estimation model called the Software Productivity, Quality, and Reliability (SPQR) model. The basic approach is similar to that of Boehm's. Time T (in seconds) is simply derived from the effort E by dividing by the Stroud number, S: T = E/S.
MYRaf: A new Approach with IRAF for Astronomical Photometric Reduction
NASA Astrophysics Data System (ADS)
Kilic, Y.; Shameoni Niaei, M.; Özeren, F. F.; Yesilyaprak, C.
2016-12-01
In this study, the design and some developments of the MYRaf software for astronomical photometric reduction are presented. MYRaf is an easy-to-use, reliable, and fast IRAF aperture photometry GUI tool. MYRaf is an important step toward the automated software process of robotic telescopes; it uses IRAF, PyRAF, matplotlib, ginga, alipy, and SExtractor with the general-purpose, high-level programming language Python and the Qt framework.
Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir
2015-01-01
Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines.
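The template-matching idea behind the torsion measurement can be illustrated in one dimension: sample the iris intensity on a circle around the pupil (one sample per degree), then find the circular shift that best correlates with a reference template. This is a simplified sketch of the concept; the published method matches the full 2-D iris with OpenCV and automatically handles eyelid occlusion.

```python
import numpy as np

def torsion_angle(reference_profile, current_profile):
    """Estimate ocular torsion (whole degrees) by circular cross-correlation
    of a 1-D iris intensity profile (one sample per degree) against a
    reference template. Returns a signed shift in [-n/2, n/2]."""
    ref = reference_profile - reference_profile.mean()
    cur = current_profile - current_profile.mean()
    # Correlation at every circular shift, computed via the FFT.
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    shift = int(np.argmax(corr))
    n = len(ref)
    return shift if shift <= n // 2 else shift - n  # wrap to signed degrees
```

Sub-degree resolution, as in the reported noise floor of <0.1°, would require interpolating the correlation peak rather than taking the integer argmax.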
Glioblastoma Segmentation: Comparison of Three Different Software Packages.
Fyllingen, Even Hovig; Stensjøen, Anne Line; Berntsen, Erik Magnus; Solheim, Ole; Reinertsen, Ingerid
2016-01-01
To facilitate a more widespread use of volumetric tumor segmentation in clinical studies, there is an urgent need for reliable, user-friendly segmentation software. The aim of this study was therefore to compare three different software packages for semi-automatic brain tumor segmentation of glioblastoma; namely BrainVoyager QX, ITK-Snap and 3D Slicer, and to make data available for future reference. Pre-operative, contrast enhanced T1-weighted 1.5 or 3 Tesla Magnetic Resonance Imaging (MRI) scans were obtained in 20 consecutive patients who underwent surgery for glioblastoma. MRI scans were segmented twice in each software package by two investigators. Intra-rater, inter-rater and between-software agreement was compared by using differences of means with 95% limits of agreement (LoA), Dice's similarity coefficients (DSC) and Hausdorff distance (HD). Time expenditure of segmentations was measured using a stopwatch. Eighteen tumors were included in the analyses. Inter-rater agreement was highest for BrainVoyager with difference of means of 0.19 mL and 95% LoA from -2.42 mL to 2.81 mL. Between-software agreement and 95% LoA were very similar for the different software packages. Intra-rater, inter-rater and between-software DSC were ≥ 0.93 in all analyses. Time expenditure was approximately 41 min per segmentation in BrainVoyager, and 18 min per segmentation in both 3D Slicer and ITK-Snap. Our main findings were that there is a high agreement within and between the software packages in terms of small intra-rater, inter-rater and between-software differences of means and high Dice's similarity coefficients. Time expenditure was highest for BrainVoyager, but all software packages were relatively time-consuming, which may limit usability in an everyday clinical setting.
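The Dice similarity coefficient used above to compare segmentations is defined as DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the two binary masks; a value of 1.0 means perfect overlap. A minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Unlike the difference of means, which can be near zero when over- and under-segmentation cancel out, the DSC penalizes any voxel-level disagreement, which is why studies like this one report both.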
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of the software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of redundancy level.
Reliability and availability analysis of a 10 kW@20 K helium refrigerator
NASA Astrophysics Data System (ADS)
Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.
2017-02-01
A 10 kW@20 K helium refrigerator has been established in the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator’s reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected and failure rate distributions are fitted by software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed and the critical components with higher risks are pointed out. Software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that compressors, turbine and vacuum pump are the critical components and the key units of this refrigerator. The mitigation actions with respect to design, testing, maintenance and operation are proposed to decrease those major and medium risks.
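The quantities a tool like Weibull++ or BlockSim computes from fitted failure data can be sketched directly: the Weibull reliability function gives the probability of surviving past time t, and steady-state (inherent) availability follows from mean time between failures and mean time to repair. These are the textbook formulas, not the tools' internals; parameter values below are illustrative.

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull reliability R(t) = exp(-(t/eta)**beta): probability a
    component survives past time t, given shape beta and scale eta."""
    return math.exp(-((t / eta) ** beta))

def steady_state_availability(mtbf, mttr):
    """Inherent availability A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)
```

At t = eta the reliability is always e^-1 ≈ 0.368 regardless of beta, which is why eta is called the characteristic life; critical components such as the compressors and turbine dominate system availability because their MTBF/MTTR ratios enter the series model multiplicatively.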
van den Berg, Thomas J T P; Franssen, Luuk; Kruijt, Bastiaan; Coppens, Joris E
2011-08-01
The current paper describes the design and population testing of a flicker sensitivity assessment technique corresponding to the psychophysical approach for straylight measurement. The purpose is twofold: to check the subjects' capability to perform the straylight test, and to serve as a test of retinal integrity for other purposes. The test was implemented in the Oculus C-Quant straylight meter, using homemade software (MATLAB). The geometry of the visual field lay-out was identical, as was the subjects' 2AFC task. A comparable reliability criterion ("unc") was developed. The outcome measure was logTCS (temporal contrast sensitivity). The population test was performed in science fair settings on about 400 subjects. Moreover, 2 subjects underwent extensive tests to check whether optical defects, mimicked with trial lenses and scatter filters, affected the TCS outcome. The repeated-measures standard deviation was 0.11 log units for the reference population. Normal values for logTCS were around 2 (threshold 1%) with some dependence on age (range 6 to 85 years). The test outcome did not change upon a tenfold (optical) deterioration in visual acuity or straylight. The test has adequate precision for checking a subject's capability to perform straylight assessment. The unc reliability criterion ensures sufficient precision, also for assessment of retinal sensitivity loss.
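The relation between the reported threshold and logTCS is a reciprocal: sensitivity is 1 over the threshold modulation contrast, so a 1% threshold gives logTCS = log10(1/0.01) = 2, matching the normal values quoted above. A one-line sketch:

```python
import math

def log_tcs(threshold_contrast):
    """log10 temporal contrast sensitivity from a threshold modulation
    contrast given as a fraction (e.g. 0.01 for 1%). Sensitivity is the
    reciprocal of the threshold."""
    return math.log10(1.0 / threshold_contrast)
```

On this scale the 0.11 log-unit repeated-measures standard deviation corresponds to roughly a 30% change in linear sensitivity, which is typical test-retest variability for psychophysical thresholds.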
Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control
NASA Technical Reports Server (NTRS)
Schumann, Johann; Mbaya, Timmy; Mengshoel, Ole
2011-01-01
Modern aircraft, both piloted fly-by-wire commercial aircraft and UAVs, increasingly depend on highly complex, safety-critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have occurred due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system and to perform advanced on-board diagnostic reasoning. We focus on an approach to developing reliable and robust health models for the combined software and sensor systems.
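The diagnostic-reasoning step can be reduced to a toy example. The sketch below performs exact inference by enumeration on a two-node health model (software health → sensor reading); the prior and conditional probabilities are invented assumptions, and real on-board health models are far larger:

```python
# Minimal sketch of Bayesian diagnostic reasoning over a two-node health
# model. The CPT numbers are illustrative assumptions, not from the paper.
P_FAULTY = 0.01                      # prior probability the software is faulty
P_ANOM_GIVEN_FAULTY = 0.90           # sensor anomaly is likely under a fault
P_ANOM_GIVEN_OK = 0.05               # false-alarm rate of the sensor

def posterior_faulty(anomaly_observed: bool) -> float:
    """Posterior P(faulty | evidence) by exact enumeration (Bayes' rule)."""
    like_faulty = P_ANOM_GIVEN_FAULTY if anomaly_observed else 1 - P_ANOM_GIVEN_FAULTY
    like_ok = P_ANOM_GIVEN_OK if anomaly_observed else 1 - P_ANOM_GIVEN_OK
    joint_faulty = like_faulty * P_FAULTY
    joint_ok = like_ok * (1 - P_FAULTY)
    return joint_faulty / (joint_faulty + joint_ok)

print(round(posterior_faulty(True), 4))   # an anomaly raises the fault probability
```

An observed anomaly raises the fault probability from the 1% prior to roughly 15% under these numbers; a clean reading drives it well below the prior.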
Measurement of interfacial tension by use of pendant drop video techniques
NASA Astrophysics Data System (ADS)
Herd, Melvin D.; Thomas, Charles P.; Bala, Gregory A.; Lassahn, Gordon D.
1993-09-01
This report describes an instrument to measure the interfacial tension (IFT) of aqueous surfactant solutions and crude oil. The method involves injection of a drop of fluid (such as crude oil) into a second immiscible phase to determine the IFT between the two phases. The instrument is composed of an AT-class computer, optical cell, illumination, video camera and lens, video frame digitizer board, monitor, and software. The camera displays an image of the pendant drop on the monitor, which is then processed by the frame digitizer board and non-proprietary software to determine the IFT. Several binary and ternary phase systems were taken from the literature and used to measure the precision and accuracy of the instrument in determining IFTs. A copy of the software program is included in the report. A copy of the program on diskette can be obtained from the Energy Science and Technology Software Center, P.O. Box 1020, Oak Ridge, TN 37831-1020. The accuracy and precision of the technique and apparatus presented are very good for measurement of IFTs in the range from 72 to 10^-2 mN/m, which is adequate for many EOR applications. With modifications to the equipment and the numerical techniques, measurements of ultralow IFTs (less than 10^-3 mN/m) should be possible, as should measurements at reservoir temperature and pressure conditions. The instrument has been used at the Idaho National Engineering Laboratory to support the research program on microbial enhanced oil recovery. Measurements of IFTs for several bacterial supernatants and unfractionated acid precipitates of microbial cultures containing biosurfactants against medium to heavy crude oils are reported. These experiments demonstrate that automated video imaging of pendant drops is a simple and fast method to reliably determine the interfacial tension between two immiscible liquid phases, or between a gas and a liquid phase.
Software Suite to Support In-Flight Characterization of Remote Sensing Systems
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Holekamp, Kara; Gasser, Gerald; Tabor, Wes; Vaughan, Ronald; Ryan, Robert; Pagnutti, Mary; Blonski, Slawomir; Kenton, Ross
2014-01-01
A characterization software suite was developed to facilitate NASA's in-flight characterization of commercial remote sensing systems. Characterization of aerial and satellite systems requires knowledge of ground characteristics, or ground truth. This information is typically obtained with instruments taking measurements prior to or during a remote sensing system overpass. Acquired ground-truth data, which can consist of hundreds of measurements with different data formats, must be processed before it can be used in the characterization. Accurate in-flight characterization of remote sensing systems relies on multiple field data acquisitions that are efficiently processed, with minimal error. To address the need for timely, reproducible ground-truth data, a characterization software suite was developed to automate the data processing methods. The characterization software suite is engineering code, requiring some prior knowledge and expertise to run. The suite consists of component scripts for each of the three main in-flight characterization types: radiometric, geometric, and spatial. The component scripts for the radiometric characterization operate primarily by reading the raw data acquired by the field instruments, combining it with other applicable information, and then reducing it to a format that is appropriate for input into MODTRAN (MODerate resolution atmospheric TRANsmission), an Air Force Research Laboratory-developed radiative transport code used to predict at-sensor measurements. The geometric scripts operate by comparing identified target locations from the remote sensing image to known target locations, producing circular error statistics defined by the Federal Geographic Data Committee Standards. The spatial scripts analyze a target edge within the image, and produce estimates of Relative Edge Response and the value of the Modulation Transfer Function at the Nyquist frequency. 
The software suite enables rapid, efficient, automated processing of ground truth data, which has been used to provide reproducible characterizations on a number of commercial remote sensing systems. Overall, this characterization software suite improves the reliability of ground-truth data processing techniques that are required for remote sensing system in-flight characterizations.
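The geometric characterization described above amounts to computing error statistics over target-location residuals. A minimal sketch, using invented residuals and a simple empirical 90th-percentile circular error rather than the full FGDC procedure:

```python
import math
import statistics

# Hypothetical residuals (metres) between image-identified and surveyed
# target locations; illustrative only, not real characterization data.
residuals = [(0.4, -0.2), (-0.7, 0.5), (0.1, 0.9), (-0.3, -0.6),
             (0.8, 0.2), (-0.5, 0.4), (0.2, -0.8), (0.6, 0.3),
             (-0.1, 0.7), (0.5, -0.4)]

# Radial (horizontal) error of each target.
radial = [math.hypot(dx, dy) for dx, dy in residuals]

# Radial root-mean-square error.
rmse_r = math.sqrt(statistics.fmean(r * r for r in radial))

# Empirical CE90: the radius containing 90% of the radial errors.
ce90 = sorted(radial)[math.ceil(0.9 * len(radial)) - 1]
print(f"RMSE_r = {rmse_r:.3f} m, CE90 = {ce90:.3f} m")
```

The FGDC standards additionally prescribe how to report accuracy at a stated confidence level under distributional assumptions; this sketch shows only the underlying arithmetic.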
Application of Kingview and PLC in friction durability test system
NASA Astrophysics Data System (ADS)
Gao, Yinhan; Cui, Jing; Yang, Kaiyu; Ke, Hui; Song, Bing
2013-01-01
A friction durability test system is designed using a PLC and KingView software. The overall scheme, hardware configuration, software structure and monitoring interface are described in detail. The PLC ensures stable data acquisition, and KingView makes the HMI easy to operate. Practical application shows that the proposed system is economical and highly reliable.
Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)
NASA Technical Reports Server (NTRS)
Benson, Markland
2008-01-01
The NASA Software Assurance Research Program (in part) performs studies of the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission-critical ground software that is in the operations and sustainment portion of the product lifecycle.
Software Cuts Homebuilding Costs, Increases Energy Efficiency
NASA Technical Reports Server (NTRS)
2015-01-01
To sort out the best combinations of technologies for a crewed mission to Mars, NASA Headquarters awarded grants to MIT's Department of Aeronautics and Astronautics to develop an algorithm-based software tool that highlights the most reliable and cost-effective options. Utilizing the software, Professor Edward Crawley founded Cambridge, Massachusetts-based Ekotrope, which helps homebuilders choose cost- and energy-efficient floor plans and materials.
Design and Implementation of a Modern Automatic Deformation Monitoring System
NASA Astrophysics Data System (ADS)
Engel, Philipp; Schweimler, Björn
2016-03-01
The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of the monitored objects over long periods. Several commercial hardware and software solutions for realizing such monitoring measurements are available on the market. In addition, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, distributed under an open source licence and free of charge. Managing an open source project is a well-known task in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It discusses how the development effort for networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
Ellis, Richard; Hing, Wayne; Dilley, Andrew; McNair, Peter
2008-08-01
Diagnostic ultrasound provides a technique whereby real-time, in vivo analysis of peripheral nerve movement is possible. This study measured sciatic nerve movement during a "slider" neural mobilisation technique (ankle dorsiflexion/plantar flexion and cervical extension/flexion). Transverse and longitudinal movement was assessed from still ultrasound images and video sequences by using frame-by-frame cross-correlation software. Sciatic nerve movement was recorded in the transverse and longitudinal planes. For transverse movement, at the posterior midthigh (PMT) the mean value of lateral sciatic nerve movement was 3.54 mm (standard error of measurement [SEM] +/- 1.18 mm) compared with anterior-posterior/vertical (AP) movement of 1.61 mm (SEM +/- 0.78 mm). At the popliteal crease (PC) scanning location, lateral movement was 6.62 mm (SEM +/- 1.10 mm) compared with AP movement of 3.26 mm (SEM +/- 0.99 mm). Mean longitudinal sciatic nerve movement at the PMT was 3.47 mm (SEM +/- 0.79 mm; n = 27) compared with the PC of 5.22 mm (SEM +/- 0.05 mm; n = 3). The reliability of ultrasound measurement of transverse sciatic nerve movement was fair to excellent (Intraclass correlation coefficient [ICC] = 0.39-0.76) compared with excellent (ICC = 0.75) for analysis of longitudinal movement. Diagnostic ultrasound presents a reliable, noninvasive, real-time, in vivo method for analysis of sciatic nerve movement.
Improving a data-acquisition software system with abstract data type components
NASA Technical Reports Server (NTRS)
Howard, S. D.
1990-01-01
Abstract data types and object-oriented design are active research areas in computer science and software engineering. Much of the interest is aimed at new software development. Abstract data type packages developed for a discontinued software project were used to improve a real-time data-acquisition system under maintenance. The result saved effort and contributed to a significant improvement in the performance, maintainability, and reliability of the Goldstone Solar System Radar Data Acquisition System.
Kim, Kwang Baek; Kim, Chang Won
2015-01-01
Accurate measures of liver fat content are essential for investigating hepatic steatosis. For a noninvasive inexpensive ultrasonographic analysis, it is necessary to validate the quantitative assessment of liver fat content so that fully automated reliable computer-aided software can assist medical practitioners without any operator subjectivity. In this study, we attempt to quantify the hepatorenal index difference between the liver and the kidney with respect to the multiple severity status of hepatic steatosis. In order to do this, a series of carefully designed image processing techniques, including fuzzy stretching and edge tracking, are applied to extract regions of interest. Then, an unsupervised neural learning algorithm, the self-organizing map, is designed to establish characteristic clusters from the image, and the distribution of the hepatorenal index values with respect to the different levels of the fatty liver status is experimentally verified to estimate the differences in the distribution of the hepatorenal index. Such findings will be useful in building reliable computer-aided diagnostic software if combined with a good set of other characteristic feature sets and powerful machine learning classifiers in the future.
Storage system software solutions for high-end user needs
NASA Technical Reports Server (NTRS)
Hogan, Carole B.
1992-01-01
Today's high-end storage user is one who requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, has the potential to solve not only the storage needs of today but those of the foreseeable future as well.
Functional description of the ISIS system
NASA Technical Reports Server (NTRS)
Berman, W. J.
1979-01-01
Development of software for avionic and aerospace applications (flight software) is influenced by a unique combination of factors which includes: (1) length of the life cycle of each project; (2) necessity for cooperation between the aerospace industry and NASA; (3) the need for flight software that is highly reliable; (4) the increasing complexity and size of flight software; and (5) the high quality of the programmers and the tightening of project budgets. The interactive software invocation system (ISIS) which is described is designed to overcome the problems created by this combination of factors.
A software upgrade method for micro-electronics medical implants.
Cao, Yang; Hao, Hongwei; Xue, Lin; Li, Luming; Ma, Bozhi
2006-01-01
A software upgrade method for microelectronic medical implants is designed to enhance device function or to renew the software when bugs are found, the software is updated, or some memory units fail. If the faults can be corrected through reprogramming, the implant need not be replaced surgically, which reduces the patient's pain and effectively improves safety. This paper introduces the software upgrade method using in-application programming (IAP) and emphasizes how to ensure the reliability and stability of the system, especially the implanted part, during the upgrade.
From LPF to eLISA: new approach in payload software
NASA Astrophysics Data System (ADS)
Gesa, Ll.; Martin, V.; Conchillo, A.; Ortega, J. A.; Mateos, I.; Torrents, A.; Lopez-Zaragoza, J. P.; Rivas, F.; Lloro, I.; Nofrarias, M.; Sopuerta, CF.
2017-05-01
eLISA will be the first observatory in space to explore the Gravitational Universe and will gather revolutionary information about the dark universe. This demands robust and reliable embedded control software and hardware working together. Taking the lessons learned from the LISA Pathfinder payload software as a baseline, this short article introduces the key concepts and new approaches our group is working on in terms of software: multiprocessor architectures, self-modifying-code strategies, 100% hardware and software monitoring, embedded scripting, and time and space partitioning, among others.
Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness
NASA Astrophysics Data System (ADS)
Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan
To address the trustworthiness problem of industry software, an approach is proposed that constructs an industry software trustworthiness criterion around business trustworthiness. Based on the triangle model of trustworthy grade definition, trustworthy evidence model, and trustworthy evaluation, business trustworthiness is reflected in each aspect of the triangle model for a specific industry software system, a power producing management system (PPMS). Business trustworthiness is the centre of the constructed criterion. By fusing international standards and industry rules, the criterion strengthens operability and reliability, and the quantitative evaluation method makes the results intuitive and comparable.
NASA Astrophysics Data System (ADS)
Stojek, Zbigniew
The idea of imposing potential pulses and measuring the currents at the end of each pulse was proposed by Barker in a little-known journal as early as 1958 [1]. However, the first reliable, trouble-free and affordable polarographs offering voltammetric pulse techniques appeared on the market only in the 1970s. The delay was due to limitations in the electronics of the time. In the 1990s, substantial progress in electrochemical pulse instrumentation again took place, related to the introduction of microprocessors, computers, and advanced software.
de Waal, Koert; Phad, Nilkant
2018-03-01
Two-dimensional speckle tracking echocardiography is an emerging technique for analyzing cardiac function in newborns. Strain is a highly reliable and reproducible parameter, and reference values have been established for term and preterm newborns. Its implementation in clinical practice has been slow, partly due to a lack of inter-vendor consistency. Our aim was to compare recent versions of Philips and Tomtec speckle tracking software for deformation and semiautomated volume and area measurements in neonatal intensive care patients. Longitudinal and circumferential deformation and cavity dimensions (volume, area) were determined offline from apical and short-axis images in 50 consecutive newborns with a median birthweight of 760 g (range 460-3200 g). Absolute mean endocardial global longitudinal strain measurements were similar between vendors, but with wide limits of agreement (Philips -18.9 [2.1]%, Tomtec -18.6 [2.5]%, bias -0.3 [1.7]%, and limits of agreement -3.6% to 3.1%). Longitudinal strain rate and circumferential measurements showed poor correlation. All volume and area measurements correlated well between the vendors, but with significant bias. Global longitudinal strain measurements compared well between vendors but had wide limits of agreement, suggesting that longitudinal measurements are best performed with the same hardware and software. © 2017, Wiley Periodicals, Inc.
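The bias and limits-of-agreement figures quoted above come from a Bland-Altman analysis, which is easy to sketch; the paired strain readings below are invented, not the study's data:

```python
import statistics

# Paired global longitudinal strain readings (%) from two hypothetical
# analysis packages; values are illustrative only.
vendor_a = [-18.2, -19.5, -17.8, -20.1, -18.9, -19.2, -17.5, -18.6]
vendor_b = [-18.9, -19.1, -18.4, -19.6, -18.2, -19.8, -17.9, -18.1]

# Bland-Altman: bias is the mean paired difference; the 95% limits of
# agreement are bias +/- 1.96 standard deviations of the differences.
diffs = [a - b for a, b in zip(vendor_a, vendor_b)]
bias = statistics.fmean(diffs)
sd = statistics.stdev(diffs)          # sample SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f}%, limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]%")
```

A small bias with wide limits, as reported in the study, means the methods agree on average but individual measurements can differ substantially.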
Design for Verification: Using Design Patterns to Build Reliable Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)
2003-01-01
Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.
NASA Astrophysics Data System (ADS)
Huang, Rong; Limburg, Karin; Rohtla, Mehis
2017-05-01
X-ray fluorescence computed tomography is often used to measure trace element distributions within low-Z samples, using algorithms capable of X-ray absorption correction when sample self-absorption is not negligible. Its reconstruction is more complicated than transmission tomography, and it is therefore not widely used. We describe in this paper a very practical iterative method that uses widely available transmission tomography reconstruction software for fluorescence tomography. With this method, sample self-absorption can be corrected not only for absorption within the measured layer but also for absorption by material beyond that layer. By combining tomography with analysis for scanning X-ray fluorescence microscopy, absolute concentrations of trace elements can be obtained. By using widely shared software, we not only minimize the coding and take advantage of the computing efficiency of the fast Fourier transform in transmission tomography software, but also gain access to the well-developed data processing tools that come with well-known and reliable software packages. The convergence of the iterations was also carefully studied for fluorescence of different attenuation lengths. As an example, fish eye lenses can provide valuable information about fish life history and the environmental conditions the fish endured. Given the lens's spherical shape, and the sometimes short sample-to-detector distance needed to detect low-concentration trace elements, its tomography data are affected by absorption from material beyond the measured layer, but can be reconstructed well with our method. Fish eye lens tomography results agree well with 2D fluorescence mapping of sliced lenses, with tomography providing better spatial resolution.
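The flavour of such an iteration can be shown with a zero-dimensional toy: a measurement attenuated by absorption that depends on the very quantity being reconstructed, corrected by fixed-point iteration. This illustrates only the idea, not the paper's actual reconstruction algorithm:

```python
import math

# Toy model (assumption): the measured fluorescence is the true
# concentration attenuated by absorption that grows with that same
# concentration, so the correction must be iterated.
ABSORPTION = 0.1
TRUE_CONC = 2.0
measured = TRUE_CONC * math.exp(-ABSORPTION * TRUE_CONC)

estimate = measured                  # first guess: ignore absorption entirely
for _ in range(50):
    # Re-correct the raw measurement using the current estimate of the
    # absorbing material; the fixed point is the true concentration.
    estimate = measured * math.exp(ABSORPTION * estimate)

print(f"recovered concentration: {estimate:.6f}")
```

In the paper's setting, the analogous loop alternates absorption correction of the sinogram with a full transmission-tomography reconstruction, and convergence depends on the attenuation length.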
Reliability of Semi-Automated Segmentations in Glioblastoma.
Huber, T; Alber, G; Bette, S; Boeckh-Behrens, T; Gempt, J; Ringel, F; Alberts, E; Zimmer, C; Bauer, J S
2017-06-01
In glioblastoma, quantitative volumetric measurements of contrast-enhancing or fluid-attenuated inversion recovery (FLAIR) hyperintense tumor compartments are needed for an objective assessment of therapy response. The aim of this study was to evaluate the reliability, among different users of the software, of a semi-automated, region-growing segmentation tool for determining tumor volume in patients with glioblastoma. A total of 320 segmentations of tumor-associated FLAIR changes and contrast-enhancing tumor tissue were performed by different raters (neuroradiologists, medical students, and volunteers). All patients underwent high-resolution magnetic resonance imaging including a 3D-FLAIR and a 3D-MPRAGE sequence. Segmentations were done using a semi-automated, region-growing segmentation tool. Intra- and inter-rater reliability were assessed by intraclass correlation (ICC). Root-mean-square error (RMSE) was used to determine the precision error, and the Dice score was calculated to measure the overlap between segmentations. Semi-automated segmentation showed a high ICC (> 0.985) for all groups, indicating excellent intra- and inter-rater reliability. Significantly smaller precision errors and higher Dice scores were observed for FLAIR segmentations than for segmentations of contrast enhancement. Single-rater segmentations showed the lowest RMSE, 3.3% for FLAIR (MPRAGE: 8.2%). Both single raters and neuroradiologists had the lowest precision error for longitudinal evaluation of FLAIR changes. Semi-automated volumetry of glioblastoma was reliably performed by all groups of raters, even without neuroradiologic expertise. Interestingly, segmentations of tumor-associated FLAIR changes were more reliable than segmentations of contrast enhancement. In longitudinal evaluations, an experienced rater can reliably detect progressive FLAIR changes of less than 15% in a quantitative way, which could help detect progressive disease earlier.
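The Dice score used above to measure overlap between two segmentations has a one-line definition; a minimal sketch on invented binary masks:

```python
# Dice overlap between two binary segmentation masks, here flattened to
# 1-D lists; the masks are invented for illustration.
def dice(mask_a: list[int], mask_b: list[int]) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for equal-length binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

rater_1 = [1, 1, 1, 0, 0, 1, 1, 0]
rater_2 = [1, 1, 0, 0, 1, 1, 1, 0]
print(round(dice(rater_1, rater_2), 3))   # → 0.8
```

A score of 1.0 means identical segmentations; 0.0 means no overlap at all.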
Fortin, Carole; Feldman, Debbie Ehrmann; Cheriet, Farida; Gravel, Denis; Gauthier, Frédérique; Labelle, Hubert
2012-03-01
To determine overall, test-retest and inter-rater reliability of posture indices among persons with idiopathic scoliosis. A reliability study using two raters and two test sessions. Tertiary care paediatric centre. Seventy participants aged between 10 and 20 years with different types of idiopathic scoliosis (Cobb angle 15 to 60°) were recruited from the scoliosis clinic. Based on the XY co-ordinates of natural reference points (e.g., eyes) as well as markers placed on several anatomical landmarks, 32 angular and linear posture indices taken from digital photographs in the standing position were calculated from a specially developed software program. Generalisability theory served to estimate the reliability and standard error of measurement (SEM) for the overall, test-retest and inter-rater designs. Bland and Altman's method was also used to document agreement between sessions and raters. In the random design, dependability coefficients demonstrated a moderate level of reliability for six posture indices (ϕ=0.51 to 0.72) and a good level of reliability for 26 posture indices out of 32 (ϕ≥0.79). Error attributable to marker placement was negligible for most indices. Limits of agreement and SEM values were larger for shoulder protraction, trunk list, Q angle, cervical lordosis and scoliosis angles. The most reproducible indices were waist angles and knee valgus and varus. Posture can be assessed in a global fashion from photographs in persons with idiopathic scoliosis. Despite the good reliability of marker placement, other studies are needed to minimise measurement errors in order to provide a suitable tool for monitoring change in posture over time. Copyright © 2011 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Software for computerised analysis of cardiotocographic traces.
Romano, M; Bifulco, P; Ruffo, M; Improta, G; Clemente, F; Cesarelli, M
2016-02-01
Despite the widespread use of cardiotocography in foetal monitoring, evaluation of foetal status suffers from considerable inter- and intra-observer variability. To overcome the main limitations of visual cardiotocographic assessment, computerised methods to analyse cardiotocographic recordings have recently been developed. In this study, a new software package for automated analysis of foetal heart rate is presented. It provides an automatic procedure for measuring the most relevant parameters derivable from cardiotocographic traces. Simulated and real cardiotocographic traces were analysed to test the software's reliability. In artificial traces, we simulated a set number of events (accelerations, decelerations and contractions) to be recognised. For real signals, the results of the computerised analysis were compared with the visual assessment performed by 18 expert clinicians, and three performance indexes were computed. On the artificial signals the software's performance was satisfactory, as all simulated events were detected and the requirements were fully met. The performance indexes computed against the obstetricians' evaluations were less satisfactory: sensitivity was 93%, positive predictive value 82% and accuracy 77%. This most likely arises from the high variability of the trace annotation carried out by clinicians. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
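The three performance indexes reported above are standard confusion-matrix quantities; a minimal sketch with invented counts (not the study's data):

```python
# Invented event-detection counts for illustration only:
# tp = events detected by both software and clinicians, fp = software-only
# detections, fn = clinician-only events, tn = agreed non-events.
tp, fp, fn, tn = 93, 20, 7, 34

sensitivity = tp / (tp + fn)                  # detected share of true events
ppv = tp / (tp + fp)                          # true share of all detections
accuracy = (tp + tn) / (tp + fp + fn + tn)    # overall agreement

print(f"sensitivity={sensitivity:.2f}, PPV={ppv:.2f}, accuracy={accuracy:.2f}")
```

Note that with reference annotations as variable as the clinicians' in this study, these indexes bound the software's apparent, not true, performance.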
Validation of Medical Tourism Service Quality Questionnaire (MTSQQ) for Iranian Hospitals.
Qolipour, Mohammad; Torabipour, Amin; Khiavi, Farzad Faraji; Malehi, Amal Saki
2017-03-01
Assessing service quality is one of the basic requirements for developing the medical tourism industry, yet there is no valid and reliable tool to measure the service quality of medical tourism. This study aimed to determine the reliability and validity of a Persian version of a medical tourism service quality questionnaire for Iranian hospitals. To validate the medical tourism service quality questionnaire (MTSQQ), a cross-sectional study was conducted on 250 Iraqi patients referred to hospitals in Ahvaz (Iran) in 2015. To design the questionnaire and determine its content validity, the Delphi technique (3 rounds) was used with the participation of 20 medical tourism experts. Construct validity of the questionnaire was assessed through exploratory and confirmatory factor analysis. Reliability was assessed using Cronbach's alpha coefficient. Data were analyzed with Excel 2007, SPSS version 18, and LISREL 8.0 software. The content validity of the questionnaire (CVI=0.775) was confirmed. According to the exploratory factor analysis, the MTSQQ included 31 items and 8 dimensions (tangibility, reliability, responsiveness, assurance, empathy, exchange and travel facilities, technical and infrastructure facilities, and safety and security). Construct validity of the questionnaire was confirmed based on the goodness-of-fit statistics of the model (RMSEA=0.032, CFI=0.98, GFI=0.88). Cronbach's alpha coefficient was 0.837 for the expectation questionnaire and 0.919 for the perception questionnaire. The results showed that the medical tourism SERVQUAL questionnaire, with 31 items and 8 dimensions, is a valid and reliable tool to measure the service quality of medical tourism in Iranian hospitals.
Use of the iPhone for radiographic evaluation of hallux valgus.
Ege, Tolga; Kose, Ozkan; Koca, Kenan; Demiralp, Bahtiyar; Basbozkurt, Mustafa
2013-02-01
The purpose of this study was to compare measurements made using a smartphone accelerometer with computerized measurements as a reference in a series of 32 hallux valgus patients. Two observers used an iPhone to measure the hallux valgus angle (HVA), intermetatarsal angle (IMA), and distal metatarsal articular angle (DMAA) on anteroposterior foot radiographs of 32 patients with symptomatic hallux valgus displayed on a computer screen. Digital angular measurements on the computer were set as the reference standard for analysis and comparison. The difference between the computerized measurements and all iPhone measurements, and the difference between the first and second iPhone measurements for each observer, were calculated. Inter- and intraobserver reliability of the smartphone measurement method was also tested. The variability of all measurements was similar for the iPhone and the computer-assisted techniques. The concordance between iPhone and computer-assisted angular measurements was excellent for the HVA, IMA, and DMAA. The maximum mean difference between the two techniques was 1.25 ± 1.02° for HVA, 0.92 ± 0.92° for IMA, and 1.10 ± 0.82° for DMAA. The interobserver reliability was excellent for HVA, IMA, and DMAA. The maximum mean difference between observers was 1.31 ± 0.89° for HVA, 0.90 ± 0.92° for IMA, and 0.78 ± 0.87° for DMAA. The intraobserver reliability was excellent for HVA, IMA, and DMAA. We conclude that the Hallux Angles software for the iPhone can be used for measurement of hallux valgus angles in clinical practice and even for research purposes. It is an accurate and reproducible method.
New Results in Software Model Checking and Analysis
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.
2010-01-01
This introductory article surveys new techniques, supported by automated tools, for the analysis of software to ensure reliability and safety. Special focus is on model checking techniques. The article also introduces the five papers that are enclosed in this special journal volume.
Derakhshandeh, Zahra; Amini, Mitra; Kojuri, Javad; Dehbozorgian, Marziyeh
2018-01-01
Clinical reasoning is one of the most important skills in the process of training a medical student to become an efficient physician. Assessment of reasoning skills in a medical school program is important to direct students' learning. One of the tests for measuring clinical reasoning ability is Clinical Reasoning Problems (CRPs). The major aim of this study was to measure the psychometric qualities of CRPs and to define the correlation between this test and the routine MCQ in the cardiology department of Shiraz medical school. This was a descriptive study conducted on all cardiology residents of Shiraz Medical School; the study population consisted of 40 residents in 2014. The routine CRPs and MCQ tests were designed based on similar objectives and were carried out simultaneously. Reliability, item difficulty, item discrimination, and the correlation between each item and the total CRPs score were measured with Excel and SPSS software to check the psychometric properties of the CRPs test. Furthermore, we calculated the correlation between the CRPs test and the MCQ test. The mean differences in CRPs test scores across residents' academic years (second, third, and fourth year) were also evaluated by one-way analysis of variance (ANOVA) using SPSS software (version 20) (α=0.05). The mean and standard deviation of the CRPs score were 10.19±3.39 out of 20; for the MCQ, they were 13.15±3.81 out of 20. Item difficulty ranged from 0.27 to 0.72; item discrimination ranged from 0.30 to 0.75, with question No. 3 being the exception (0.24). The correlation between each item and the total CRPs score was 0.26-0.87; the correlation between the CRPs test and the MCQ test was 0.68 (p<0.001). The reliability of the CRPs, calculated using Cronbach's alpha, was 0.72. The mean CRPs score differed among residents based on their academic year, and this difference was statistically significant (p<0.001).
The results of this investigation revealed that CRPs could be a reliable test for measuring clinical reasoning in residents. It can be included in cardiology residency assessment programs.
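The item statistics reported above can be reproduced from a scored response matrix. The sketch below computes item difficulty as the proportion correct and discrimination as the item-total correlation; the study's exact discrimination formula (e.g., a corrected item-total correlation) may differ, and the data are hypothetical:

```python
import numpy as np

def item_stats(responses: np.ndarray):
    """responses: (n_examinees, n_items) matrix of 0/1 scores.
    Returns per-item difficulty (proportion correct) and discrimination
    (correlation of each item with the total score)."""
    difficulty = responses.mean(axis=0)
    totals = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, j], totals)[0, 1]
        for j in range(responses.shape[1])
    ])
    return difficulty, discrimination

# Hypothetical 5 examinees x 3 items
r = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [0, 0, 0],
              [1, 1, 1]])
diff, disc = item_stats(r)
print(diff)  # proportion answering each item correctly
```

Difficulty values near 0.27-0.72, as in the study, indicate items that are neither trivially easy nor uniformly missed.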
NASA Technical Reports Server (NTRS)
Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey
1993-01-01
Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra-reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.
Semi-automatic computerized approach to radiological quantification in rheumatoid arthritis
NASA Astrophysics Data System (ADS)
Steiner, Wolfgang; Schoeffmann, Sylvia; Prommegger, Andrea; Boegl, Karl; Klinger, Thomas; Peloschek, Philipp; Kainberger, Franz
2004-04-01
Rheumatoid Arthritis (RA) is a common systemic disease predominantly involving the joints. Precise diagnosis and follow-up therapy require objective quantification. For this purpose, radiological analyses using standardized scoring systems are considered to be the most appropriate method. The aim of our study was to develop semi-automatic image analysis software especially applicable for scoring of joints in rheumatic disorders. The X-Ray RheumaCoach software delivers various scoring systems (Larsen score and Ratingen-Rau score) which can be applied by the scorer. In addition to the qualitative assessment of joints performed by the radiologist, a semi-automatic image analysis for joint detection and measurements of bone diameters and swollen tissue supports the image assessment process. More than 3000 radiographs from hands and feet of more than 200 RA patients were collected, analyzed, and statistically evaluated. Radiographs were quantified using the conventional paper-based Larsen score and the X-Ray RheumaCoach software. The use of the software shortened the scoring time by about 25 percent and reduced the rate of erroneous scorings in all our studies. Compared to paper-based scoring methods, the X-Ray RheumaCoach software offers several advantages: (i) structured data analysis and input that minimizes variance by standardization, (ii) faster and more precise calculation of sum scores and indices, (iii) permanent data storage and fast access to the software's database, (iv) the possibility of cross-calculation to other scores, (v) semi-automatic assessment of images, and (vi) reliable documentation of results in the form of graphical printouts.
Spalding, Steven J; Kwoh, C Kent; Boudreau, Robert; Enama, Joseph; Lunich, Julie; Huber, Daniel; Denes, Louis; Hirsch, Raphael
2008-01-01
Introduction The assessment of joints with active arthritis is a core component of widely used outcome measures. However, substantial variability exists within and across examiners in assessment of these active joint counts. Swelling and temperature changes, two qualities estimated during active joint counts, are amenable to quantification using noncontact digital imaging technologies. We sought to explore the ability of three-dimensional (3D) and thermal imaging to reliably measure joint shape and temperature. Methods A Minolta 910 Vivid non-contact 3D laser scanner and a Meditherm med2000 Pro infrared camera were used to create digital representations of wrist and metacarpophalangeal (MCP) joints. Specialized software generated 3 quantitative measures for each joint region: 1) volume; 2) Surface Distribution Index (SDI), a marker of joint shape representing the standard deviation of vertical distances from points on the skin surface to a fixed reference plane; 3) Heat Distribution Index (HDI), representing the standard error of temperatures. Seven wrists and 6 MCP regions from 5 subjects with arthritis were used to develop and validate 3D image acquisition and processing techniques. HDI values from 18 wrist and 9 MCP regions were obtained from 17 patients with active arthritis and compared to data from 10 wrist and MCP regions from 5 controls. Standard deviation (SD), coefficient of variation (CV), and intraclass correlation coefficients (ICC) were calculated for each quantitative measure to establish their reliability. Results CVs for volume and SDI were <1.3%, and ICCs were greater than 0.99. Thermal measures were less reliable than 3D measures; however, significant differences were observed between control and arthritis HDI values. Two case studies of arthritic joints demonstrated quantifiable changes in swelling and temperature corresponding with changes in symptoms and physical exam findings.
Conclusion 3D and thermal imaging provide reliable measures of joint volume, shape, and thermal patterns. Further refinement may lead to the use of these technologies to improve the assessment of disease activity in arthritis. PMID:18215307
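The Surface Distribution Index defined above is a standard deviation of point-to-plane distances and can be sketched directly. Treating the reference plane as a horizontal plane z = plane_z is a simplifying assumption of this sketch, and the surfaces are made up:

```python
import numpy as np

def surface_distribution_index(points: np.ndarray, plane_z: float = 0.0) -> float:
    """SDI as described above: the standard deviation of vertical distances
    from skin-surface points (rows of [x, y, z]) to a fixed reference plane.
    Taking the plane as z = plane_z is an assumption of this sketch."""
    return float(np.std(points[:, 2] - plane_z, ddof=1))

# Hypothetical surfaces: a flat region vs. a locally swollen one.
flat = np.column_stack([np.arange(5.0), np.zeros(5), np.full(5, 10.0)])
swollen = np.column_stack([np.arange(5.0), np.zeros(5),
                           np.array([10.0, 11.0, 12.0, 11.0, 10.0])])
print(surface_distribution_index(flat), surface_distribution_index(swollen))
```

A flat surface yields an SDI of zero; local swelling increases the spread of the distances, so shape change shows up as a larger SDI.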
The comet moment as a measure of DNA damage in the comet assay.
Kent, C R; Eady, J J; Ross, G M; Steel, G G
1995-06-01
The development of rapid assays of radiation-induced DNA damage requires the definition of reliable parameters for the evaluation of dose-response relationships to compare with cellular endpoints. We have used the single-cell gel electrophoresis (SCGE) or 'comet' assay to measure DNA damage in individual cells after irradiation. Both the alkaline and neutral protocols were used. In both cases, DNA was stained with ethidium bromide and viewed using a fluorescence microscope at 516-560 nm. Images of comets were stored as 512 x 512 pixel images using OPTIMAS, an image analysis software package. Using this software we tested various parameters for measuring DNA damage. We have developed a method of analysis that rigorously conforms to the mathematical definition of the moment of inertia of a plane figure. This parameter does not require the identification of separate head and tail regions, but rather calculates a moment of the whole comet image. We have termed this parameter 'comet moment'. This method is simple to calculate and can be performed using most image analysis software packages that support macro facilities. In experiments on CHO-K1 cells, tail length was found to increase linearly with dose, but plateaued at higher doses. Comet moment also increased linearly with dose, but over a larger dose range than tail length and had no tendency to plateau.
Logic Model Checking of Unintended Acceleration Claims in Toyota Vehicles
NASA Technical Reports Server (NTRS)
Gamble, Ed
2012-01-01
Part of the US Department of Transportation investigation of Toyota sudden unintended acceleration (SUA) involved analysis of the throttle control software. The JPL Laboratory for Reliable Software applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built, and some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.
Luo, Ping; Dai, Weidong; Yin, Peiyuan; Zeng, Zhongda; Kong, Hongwei; Zhou, Lina; Wang, Xiaolin; Chen, Shili; Lu, Xin; Xu, Guowang
2015-01-01
Pseudotargeted metabolic profiling is a novel strategy combining the advantages of both targeted and untargeted methods. The strategy obtains metabolites and their product ions from quadrupole time-of-flight (Q-TOF) MS by information-dependent acquisition (IDA) and then picks targeted ion pairs and measures them on a triple-quadrupole MS by multiple reaction monitoring (MRM). The picking of ion pairs from thousands of candidates is the most time-consuming step of the pseudotargeted strategy. Herein, a systematic and automated approach and software (MRM-Ion Pair Finder) were developed to acquire characteristic MRM ion pairs by precursor ion alignment, MS(2) spectrum extraction and reduction, characteristic product ion selection, and ion fusion. To test the reliability of the approach, a mixture of 15 metabolite standards was first analyzed; the representative ion pairs were correctly picked out. Then, pooled serum samples were further studied, and the results were confirmed by manual selection. Finally, a comparison with a commercial peak alignment software was performed, and good characteristic ion coverage of metabolites was obtained. As a proof of concept, the proposed approach was applied to a metabolomics study of liver cancer; 854 metabolite ion pairs were defined in the positive ion mode from serum. Our approach provides a reliable, high-throughput method for acquiring MRM ion pairs for pseudotargeted metabolomics, with improved metabolite coverage, facilitating more reliable biomarker discovery.
Models and metrics for software management and engineering
NASA Technical Reports Server (NTRS)
Basili, V. R.
1988-01-01
This paper attempts to characterize and present a state-of-the-art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity, and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.
Test-retest reliability of the multifocal photopic negative response.
Van Alstine, Anthony W; Viswanathan, Suresh
2017-02-01
To assess the test-retest reliability of the multifocal photopic negative response (mfPhNR) of normal human subjects. Multifocal electroretinograms were recorded from one eye of 61 healthy adult subjects on two separate days using a Visual Evoked Response Imaging System software version 4.3 (EDI, San Mateo, California). The visual stimulus delivered on a 75-Hz monitor consisted of seven equal-sized hexagons each subtending 12° of visual angle. The m-step exponent was 9, and the m-sequence was slowed to include at least 30 blank frames after each flash. Only the first slice of the first-order kernel was analyzed. The mfPhNR amplitude was measured at a fixed time in the trough from baseline (BT) as well as at the same fixed time in the trough from the preceding b-wave peak (PT). Additionally, we also analyzed BT normalized either to PT (BT/PT) or to the b-wave amplitude (BT/b-wave). The relative reliability of test-retest differences for each test location was estimated by the Wilcoxon matched-pair signed-rank test and intraclass correlation coefficients (ICC). Absolute test-retest reliability was estimated by Bland-Altman analysis. The test-retest amplitude differences were not statistically significant for either of the two measurement techniques, as determined by the Wilcoxon matched-pair signed-rank test. PT measurements showed greater ICC values than BT amplitude measurements for all test locations. For each measurement technique, the ICC value of the macular response was greater than that of the surrounding locations. The mean test-retest difference was close to zero for both techniques at each of the test locations, and while the coefficient of reliability (COR; 1.96 times the standard deviation of the test-retest difference) was comparable for the two techniques at each test location when expressed in nanovolts, the %COR (COR normalized to the mean test and retest amplitudes) was superior for PT than BT measurements.
The ICC and COR were comparable for the BT/PT and BT/b-wave ratios and were better than the ICC and COR for BT but worse than PT. mfPhNR amplitude measured at a fixed time in the trough from the preceding b-wave peak (PT) shows greater test-retest reliability when compared to amplitude measurement from baseline (BT) or BT amplitude normalized to either the PT or b-wave amplitudes.
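The coefficient of reliability defined above (1.96 times the SD of the test-retest differences) and its normalized form %COR can be sketched as follows; the amplitudes are invented, not study data:

```python
import numpy as np

def coefficient_of_reliability(test: np.ndarray, retest: np.ndarray):
    """Bland-Altman style COR as described above: 1.96 times the SD of the
    test-retest differences, plus %COR normalized to the mean of the
    test and retest amplitudes."""
    diffs = test - retest
    cor = 1.96 * diffs.std(ddof=1)
    pct_cor = 100.0 * cor / np.mean((test + retest) / 2.0)
    return cor, pct_cor

# Hypothetical amplitudes in nanovolts
test = np.array([10.0, 12.0, 11.0, 13.0])
retest = np.array([10.5, 11.5, 11.0, 12.5])
cor, pct = coefficient_of_reliability(test, retest)
print(cor, pct)
```

Because %COR divides by the mean amplitude, two techniques with similar COR in nanovolts can still differ in %COR when their amplitudes differ, which is the comparison the abstract draws between PT and BT.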
Laser pulse coded signal frequency measuring device based on DSP and CPLD
NASA Astrophysics Data System (ADS)
Zhang, Hai-bo; Cao, Li-hua; Geng, Ai-hui; Li, Yan; Guo, Ru-hai; Wang, Ting-feng
2011-06-01
Laser pulse coding is an anti-jamming measure used in semi-active laser-guided weapons. Because the guidance signals adopt a pulse-coding mode and the received signals are weak, frequency measurement for optoelectronic countermeasures requires complex calculations that exploit the time correlation of the coded pulse train. To complete an accurate frequency measurement in a short time, the device autocorrelates the series of pulse arrival times, calculates the signal repetition period, and then identifies the code type, achieving signal decoding by determining the time values, the number of pulses, and their rank within one signal cycle. Using a CPLD and a DSP as the signal-processing chips, we designed a pulse-frequency measuring device for laser-guided signals and improved its signal-processing capability through appropriate software algorithms. This article introduces the frequency-measurement principle of the device; describes its hardware components, system operation, and software; and analyzes the impact of several system factors on measurement accuracy. The experimental results indicate that the system improves measurement accuracy while preserving the small volume, real-time operation, anti-interference capability, and low power consumption of the device, demonstrating the practicality and reliability of the design.
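The period-from-arrival-times idea can be illustrated with a toy stand-in for the time-correlation step: a candidate period is scored by how close the inter-pulse intervals come to integer multiples of it. The device's actual DSP algorithm is not specified above, and the candidate list here is an assumption that avoids trivial sub-multiples:

```python
import numpy as np

def estimate_period(arrival_times, candidates):
    """Pick the candidate repetition period that best explains the pulse
    arrival times: a correct period makes every inter-pulse interval an
    (approximately) integer multiple of it, even with missing pulses."""
    intervals = np.diff(np.sort(np.asarray(arrival_times, dtype=float)))
    best, best_err = None, float("inf")
    for p in candidates:
        ratios = intervals / p
        err = np.abs(ratios - np.round(ratios)).sum()
        if err < best_err:
            best, best_err = p, err
    return best

# Pulses at a 2.5 ms period with one pulse missing between 5.0 and 10.0 ms
print(estimate_period([0.0, 2.5, 5.0, 10.0], [1.0, 2.0, 2.5, 3.0]))
```

The missing pulse leaves a 5.0 ms gap, yet 2.5 ms still scores perfectly because the gap is an exact multiple of the period; this robustness to dropped pulses is what the time-correlation approach buys.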
Szabo, Bence T; Aksoy, Seçil; Repassy, Gabor; Csomo, Krisztian; Dobo-Nagy, Csaba; Orhan, Kaan
2017-06-09
The aim of this study was to compare the paranasal sinus volumes obtained by manual and semiautomatic imaging software programs using both CT and CBCT imaging. 121 computed tomography (CT) and 119 cone beam computed tomography (CBCT) examinations were selected from the databases of the authors' institutes. The Digital Imaging and Communications in Medicine (DICOM) images were imported into 3-dimensional imaging software, in which hand mode and semiautomatic tracing methods were used to measure the volumes of both maxillary sinuses and the sphenoid sinus. The determined volumetric means were compared to previously published averages. Isometric CBCT-based volume determination results were closer to the real volume conditions, whereas the non-isometric CT-based volume measurements consistently yielded lower volumes. Comparing the 2 volume measurement modes, the values gained from hand mode were closer to the literature data. Furthermore, CBCT-based image measurement results corresponded to the known averages. Our results suggest that CBCT images provide reliable volumetric information that can be depended on for artificial organ construction, and which may aid the guidance of the operator prior to or during the intervention.
[Inferring landmark displacements from changes in cephalometric angles].
Xu, T; Baumrind, S
2001-07-01
To investigate the appropriateness of using changes in angular measurements to reflect actual profile changes. The sample consisted of 48 growing malocclusion patients (24 Class I and 24 Class II subjects) treated by an experienced orthodontist using the edgewise technique. Landmark and superimpositional data were extracted from a previously prepared numerical database. Three pairs of angular and linear measures were computed by the Craniofacial Software Package. Although the associations between all three angular measures and their corresponding linear measures were statistically significant at the 0.001 level, the disagreement between these three pairs of measures was 10.4%, 22.9%, and 37.5%, respectively, in this sample. The direction of displacement of anterior facial landmarks during growth and treatment cannot reliably be inferred merely from changes in cephalometric angles.
NASA Technical Reports Server (NTRS)
Bean, E. E.; Bloomquist, C. E.
1972-01-01
A summary of the KSC program for investigating the reliability aspects of ground support activities is presented. An analysis of unsatisfactory condition reports (URCs) and the generation of reliability assessments of components based on the URCs are discussed, along with design considerations for attaining reliable real-time hardware/software configurations.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
Issues in NASA Program and Project Management: Focus on Project Planning and Scheduling
NASA Technical Reports Server (NTRS)
Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)
1997-01-01
Topics addressed include: Planning and scheduling training for working project teams at NASA, overview of project planning and scheduling workshops, project planning at NASA, new approaches to systems engineering, software reliability assessment, and software reuse in wind tunnel control systems.
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, John C.
1987-01-01
Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
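The voting step described above can be sketched as a majority voter over the outputs of independently written versions. The three versions and the seeded fault below are hypothetical, purely to exercise the voter:

```python
from collections import Counter

def majority_vote(outputs):
    """N-version programming voter: collect the outputs of all versions for
    one input and return the majority value; raise if no strict majority
    exists (a disagreement the system cannot adjudicate)."""
    counts = Counter(outputs)
    value, n = counts.most_common(1)[0]
    if n > len(outputs) // 2:
        return value
    raise ValueError("no majority among versions: %r" % counts)

# Three independently written versions of the same function (hypothetical);
# v3 carries a seeded fault that fires only for x == 3.
def v1(x): return x * x
def v2(x): return x ** 2
def v3(x): return x * x + 1 if x == 3 else x * x

print(majority_vote([v(3) for v in (v1, v2, v3)]))  # the two correct versions outvote the faulty one
```

The experiment's finding of correlated faults is precisely the failure mode this voter cannot mask: if two of the three versions share a fault, the majority is wrong.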
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Healthcare software assurance.
Cooper, Jason G; Pauley, Keith A
2006-01-01
Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted. PMID:17238324
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image and a data table as output containing measurements for all gene spots would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome inherent challenges in image processing. Herein is introduced an integrated software system with a Java-based interface on the client side that allows for decentralized access and furthermore enables the scientist to instantly employ the most updated software version at any given time. This software tool extends PixClust, as used in Extractiff, incorporating Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.
The NTID speech recognition test: NSRT(®).
Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A
2015-07-01
The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Software Carpentry and the Hydrological Sciences
NASA Astrophysics Data System (ADS)
Ahmadia, A. J.; Kees, C. E.; Farthing, M. W.
2013-12-01
Scientists are spending an increasing amount of time building and using hydrology software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. As hydrology models increase in capability and enter use by a growing number of scientists and their communities, it is important that scientific software development practices scale up to meet the challenges posed by increasing software complexity, lengthening software lifecycles, a growing number of stakeholders and contributors, and a broadened developer base that extends from application domains to high performance computing centers. Many of these challenges in complexity, lifecycles, and developer base have been successfully met by the open source community, and there are many lessons to be learned from their experiences and practices. Additionally, there is much wisdom to be found in the results of research studies conducted on software engineering itself. Software Carpentry aims to bridge the gap between the current state of software development and these known best practices for scientific software development, with a focus on hands-on exercises and practical advice based on the following principles: 1. Write programs for people, not computers. 2. Automate repetitive tasks. 3. Use the computer to record history. 4. Make incremental changes. 5. Use version control. 6. Don't repeat yourself (or others). 7. Plan for mistakes. 8. Optimize software only after it works. 9. Document design and purpose, not mechanics. 10. Collaborate. We discuss how these best practices, arising from solid foundations in research and experience, have been shown to help improve scientists' productivity and the reliability of their software.
Velupillai, Sumithra; Dalianis, Hercules; Hassel, Martin; Nilsson, Gunnar H
2009-12-01
Electronic patient records (EPRs) contain a large amount of information written in free text. This information is considered very valuable for research but is also very sensitive, since the free text parts may contain information that could reveal the identity of a patient. Therefore, methods for de-identifying EPRs are needed. The work presented here aims to perform a manual and automatic Protected Health Information (PHI) annotation trial for EPRs written in Swedish. This study consists of two main parts: the initial creation of a manually PHI-annotated gold standard, and the porting and evaluation of existing de-identification software written for American English to Swedish in a preliminary automatic de-identification trial. Results are measured with precision, recall and F-measure. This study reports fairly high Inter-Annotator Agreement (IAA) results on the manually created gold standard, especially for specific tags such as names. The average IAA over all tags was 0.65 F-measure (0.84 F-measure highest pairwise agreement). For name tags the average IAA was 0.80 F-measure (0.91 F-measure highest pairwise agreement). Porting de-identification software written for American English directly to Swedish was unfortunately non-trivial, yielding poor results. Developing gold standard sets as well as automatic systems for de-identification tasks in Swedish is feasible. However, discussion and definition of identifiable information are needed, as well as further development of both the tag sets and the annotation guidelines, in order to get a reliable gold standard. A completely new de-identification software needs to be developed.
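The precision, recall and F-measure figures reported above can be computed from sets of annotation spans as in the generic sketch below; the spans and tags are invented for illustration and this is not the study's evaluation code.

```python
# Sketch: precision / recall / F1 over (start, end, tag) annotation spans,
# counting only exact span-and-tag matches as true positives.

def prf(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # exact matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 5, "NAME"), (10, 14, "DATE"), (20, 28, "LOCATION")}
pred = {(0, 5, "NAME"), (10, 14, "DATE"), (30, 34, "NAME")}
p, r, f = prf(gold, pred)  # 2 of 3 predictions correct, 2 of 3 gold found
```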
75 FR 81157 - Version One Regional Reliability Standard for Transmission Operations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-27
... processing software should be filed in native applications or print-to-PDF format and not in a scanned format..., Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). \\3\\ NERC designates the version number of a Reliability...
Software Design Improvements. Part 1; Software Benefits and Limitations
NASA Technical Reports Server (NTRS)
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
1997-01-01
Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines, and the number and type of applications are growing at an exponential rate. The size of individual programs has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The questions are: how can this hardware and software be made more reliable? How can software quality be improved? What methodology should be applied to large and small software products to improve their design, and how can software be verified?
Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua
2012-04-01
To analyze the reliability of dento-maxillary models created by cone-beam CT and rapid prototyping (RP), plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT, and 3-D models were formed after software calculation and reconstruction. Then, computerized composite models (RP models) were produced by the rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with the SPSS 17.0 software package. For crown widths, dental arch lengths and crowding there were significant differences (P<0.05) among the 3 models, whereas the dental arch widths showed no significant differences. Measurements on 3-D models were significantly smaller than those on the other two models (P<0.05). Compared with 3-D models, RP models yielded more measurements that were not significantly different from those on plaster models (P>0.05). The regression coefficients among the three models were significantly different (P<0.01), ranging from 0.8 to 0.9, and the coefficient between RP and plaster models was larger than that between 3-D and plaster models. There is high consistency among the 3 models, and the remaining differences are clinically acceptable. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.
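Agreement between two measurement methods of the kind summarized by the regression coefficients above can be quantified with an ordinary least-squares fit; the crown-width readings below are hypothetical, not the study's data.

```python
# Sketch: least-squares regression between paired readings from two model
# types. A slope near 1 and a small intercept indicate close agreement.

def fit_line(x, y):
    """Ordinary least-squares fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

plaster = [8.5, 9.1, 7.8, 8.9, 9.4]  # plaster-model crown widths (mm, assumed)
rp      = [8.4, 9.0, 7.7, 8.8, 9.3]  # RP-model readings of the same teeth
a, b = fit_line(plaster, rp)
```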
Development of a versatile user-friendly IBA experimental chamber
NASA Astrophysics Data System (ADS)
Kakuee, Omidreza; Fathollahi, Vahid; Lamehi-Rachti, Mohammad
2016-03-01
Reliable performance of the Ion Beam Analysis (IBA) techniques is based on the accurate geometry of the experimental setup, employment of reliable nuclear data and implementation of dedicated analysis software for each of the IBA techniques. It has already been shown that geometrical imperfections lead to significant uncertainties in the quantification of IBA measurements. To minimize these uncertainties, a user-friendly experimental chamber with a heuristic sample positioning system for IBA analysis was recently developed in the Van de Graaff laboratory in Tehran. This system enhances IBA capabilities, in particular the Nuclear Reaction Analysis (NRA) and Elastic Recoil Detection Analysis (ERDA) techniques. The newly developed sample manipulator provides the possibility of both controlling the tilt angle of the sample and analyzing samples with different thicknesses. Moreover, a reasonable number of samples can be loaded in the sample wheel. A comparison of the measured cross section data of the 16O(d,p1)17O reaction with the data reported in the literature confirms the performance and capability of the newly developed experimental chamber.
Buchner, Lena; Güntert, Peter
2015-02-03
Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often produce tight bundles whose high precision masks much lower accuracy, thereby overestimating accuracy. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA, which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints, but with a realistic precision that is, throughout a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles.
Akram, Waqas; Hussein, Maryam S E; Ahmad, Sohail; Mamat, Mohd N; Ismail, Nahlah E
2015-10-01
There is no instrument that collectively assesses the knowledge, attitude and perceived practice of asthma among community pharmacists. Therefore, this study aimed to validate an instrument measuring the knowledge, attitude and perceived practice of asthma among community pharmacists by producing empirical evidence of the validity and reliability of the items using the Rasch model (Bond & Fox software®) for dichotomous and polytomous data. This baseline study recruited 33 community pharmacists from Penang, Malaysia. The results showed that all PTMEA Corr values were positive, indicating that each item was able to distinguish between respondents of differing ability. Based on the MNSQ infit and outfit range (0.60-1.40), 2 of the 55 items were suggested for removal from the instrument. The findings indicated that the instrument fitted the Rasch measurement model and showed acceptable reliability values of 0.88, 0.83 and 0.79 for knowledge, attitude and perceived practice, respectively.
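The screening rule described above, flagging items whose infit or outfit mean-square statistics fall outside 0.60-1.40, can be sketched as follows; the item names and MNSQ values are invented for illustration.

```python
# Sketch: flag items whose infit or outfit MNSQ falls outside the
# acceptance range used in Rasch item-fit screening.

def flag_misfitting(items, low=0.60, high=1.40):
    return [name for name, infit, outfit in items
            if not (low <= infit <= high and low <= outfit <= high)]

items = [("K1", 0.95, 1.02),   # (item id, infit MNSQ, outfit MNSQ) - invented
         ("K2", 1.55, 1.48),
         ("A1", 1.10, 0.88),
         ("P3", 0.52, 0.61)]
removed = flag_misfitting(items)  # K2 and P3 fall outside the range
```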
NASA Astrophysics Data System (ADS)
Abou-Elnour, Ali; Khaleeq, Hyder; Abou-Elnour, Ahmad
2016-04-01
In the present work, a wireless sensor network and a real-time controlling and monitoring system are integrated for efficient water quality monitoring for environmental and domestic applications. The proposed system has three main components: (i) the sensor circuits, (ii) the wireless communication system, and (iii) the monitoring and controlling unit. LabView software has been used in the implementation of the monitoring and controlling system. On the other hand, ZigBee and myRIO wireless modules have been used to implement the wireless system. The water quality parameters are accurately measured by the present computer-based monitoring system, and the measurement results are instantaneously transmitted and published with minimum infrastructure costs and maximum flexibility in terms of distance or location. The mobility and durability of the proposed system are further enhanced by powering it fully via a photovoltaic system. The reliability and effectiveness of the system are evaluated under realistic operating conditions.
Testing of Hand-Held Mine Detection Systems
2015-01-08
ITOP 04-2-5208 for guidance on software testing. Testing software is necessary to ensure that safety is designed into the software algorithm, and that...sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact on the test design. Some examples of...overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
...Lake, CA 93555-6110. ...Activity, the systems engineering team is responsible for system and software requirements. Process Dashboard is a software planning and tracking tool. ...Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI
Reliability program requirements for aeronautical and space system contractors
NASA Technical Reports Server (NTRS)
1987-01-01
General reliability program requirements are prescribed for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems, including critical ground support equipment. The requirements call for (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.
Zaytsev, Yury V; Morrison, Abigail
2012-01-01
High quality neuroscience research requires accurate, reliable and well maintained neuroinformatics applications. As software projects become larger, offering more functionality and developing a denser web of interdependence between their component parts, we need more sophisticated methods to manage their complexity. If complexity is allowed to get out of hand, either the quality of the software or the speed of development suffer, and in many cases both. To address this issue, here we develop a scalable, low-cost and open source solution for continuous integration (CI), a technique which ensures the quality of changes to the code base during the development procedure, rather than relying on a pre-release integration phase. We demonstrate that a CI-based workflow, due to rapid feedback about code integration problems and tracking of code health measures, enabled substantial increases in productivity for a major neuroinformatics project and additional benefits for three further projects. Beyond the scope of the current study, we identify multiple areas in which CI can be employed to further increase the quality of neuroinformatics projects by improving development practices and incorporating appropriate development tools. Finally, we discuss what measures can be taken to lower the barrier for developers of neuroinformatics applications to adopt this useful technique.
Choi, D J; Park, H
2001-11-01
For control and automation of biological treatment processes, the lack of reliable on-line sensors to measure water quality parameters is one of the most important problems to overcome. Many parameters cannot be measured directly with on-line sensors. The accuracy of existing hardware sensors is also not sufficient, and maintenance problems such as electrode fouling often cause trouble. This paper deals with the development of software sensor techniques that estimate the target water quality parameter from other parameters, using the correlation between water quality parameters. We focus our attention on the preprocessing of noisy data and the selection of the model best suited to the situation. Problems of existing approaches are also discussed. We propose a hybrid neural network as a software sensor inferring wastewater quality parameters. Multivariate regression, artificial neural networks (ANN), and a hybrid technique that combines principal component analysis as a preprocessing stage are applied to data from industrial wastewater processes. The hybrid ANN technique shows an enhancement of prediction capability and reduces the overfitting problem of neural networks. The result shows that the hybrid ANN technique can be used to extract information from noisy data and to describe the nonlinearity of complex wastewater treatment processes.
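The hybrid idea, principal component preprocessing followed by a learned readout, can be sketched in miniature. In this sketch a linear least-squares stage stands in for the paper's neural network, and the correlated "sensor" channels and target parameter are synthetic, not wastewater data.

```python
# Miniature "software sensor": project correlated sensor channels onto the
# first principal component, then fit a regression readout on the scores.

def power_iteration(cov, iters=200):
    """Dominant eigenvector of a symmetric matrix."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(x * x for x in w) ** 0.5
        v = [x / s for x in w]
    return v

def pc1_scores(rows):
    """Scores of each row on the first principal component."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    c = [[r[j] - mean[j] for j in range(d)] for r in rows]
    cov = [[sum(ck[i] * ck[j] for ck in c) / n for j in range(d)]
           for i in range(d)]
    v = power_iteration(cov)
    return [sum(ck[j] * v[j] for j in range(d)) for ck in c]

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

t = [1.0, 2.0, 3.0, 4.0, 5.0]
X = [[ti, 2 * ti, 3 * ti] for ti in t]  # easy-to-measure, correlated signals
y = [5 * ti for ti in t]                # hard-to-measure target parameter
scores = pc1_scores(X)
a, b = fit_line(scores, y)
predictions = [a * s + b for s in scores]
```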
Kortüm, K; Reznicek, L; Leicht, S; Ulbig, M; Wolf, A
2013-07-01
The importance and complexity of clinical trials are continuously increasing, especially in innovative specialties like ophthalmology. An efficient clinical trial site organisational structure is therefore essential. In modern internet times, this can be accomplished by web-based applications. In total, 3 software applications (Vibe on Prem, Sharepoint and open source software) were evaluated at a clinical trial site in ophthalmology. The assessment criteria were reliability, ease of administration, usability, scheduling, task lists, knowledge management, operating costs and worldwide availability. Vibe on Prem, customised by the local university, met the assessment criteria best; the other applications were not as strong. By introducing a web-based application for administrating and organising an ophthalmological trial site, studies can be conducted in a more efficient and reliable manner.
Computer assisted blast design and assessment tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, A.R.; Kleine, T.H.; Forsyth, W.W.
1995-12-31
In general the software required by a blast designer includes tools that graphically present blast designs (surface and underground), can analyze a design or predict its result, and can assess blasting results. As computers develop and computer literacy continues to rise, the development and use of such tools will spread. Examples of the tools that are becoming available include: automatic blast pattern generation and underground ring design; blast design evaluation in terms of explosive distribution and detonation simulation; fragmentation prediction; blast vibration prediction and minimization; blast monitoring for assessment of dynamic performance; vibration measurement, display and signal processing; evaluation of blast results in terms of fragmentation; and risk- and reliability-based blast assessment. The authors have identified a set of criteria that are essential in choosing appropriate software blasting tools.
Electronic Health Record for Intensive Care based on Usual Windows Based Software.
Reper, Arnaud; Reper, Pascal
2015-08-01
In Intensive Care Units, the amount of data to be processed for patient care, the turnover of the patients, and the necessity for reliability and for review processes indicate the use of Patient Data Management Systems (PDMS) and electronic health records (EHR). To respond to the needs of an Intensive Care Unit and not be locked into proprietary software, we developed an EHR based on usual software and components. The software was designed as a client-server architecture running on the Windows operating system and powered by the Access database system. The client software was developed using the Visual Basic interface library. The application offers the users the following functions: capture of medical notes, observations and treatments; nursing charts with administration of medications; scoring systems for classification; and possibilities to encode medical activities for billing processes. Since its deployment in September 2004, the EHR has been used to care for more than five thousand patients, with the expected software reliability, and has facilitated data management and review processes. Communications with other medical software were not developed from the start and are realized through the basic functionalities of a communication engine. Further upgrades of the system will include multi-platform support, use of a typed language with static analysis, and a configurable interface. The developed system, based on usual software components, was able to respond to the medical needs of the local ICU environment. The use of Windows for development allowed us to customize the software to the preexisting organization and contributed to the acceptability of the whole system.
Challenges in Managing Trustworthy Large-scale Digital Science
NASA Astrophysics Data System (ADS)
Evans, B. J. K.
2017-12-01
The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" in order to remain productive and produce new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, through system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large-volume data management it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability with which previous outcomes remain relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.
SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL
2014-06-01
Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MatLab integrated development environment. TPS calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate dose distribution fitting: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS calculated depth dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analysis show gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at those points confirm software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was reduced to a few hours.
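The 3%/3 mm gamma analysis used for the comparison above can be illustrated with a simplified 1-D global gamma computation; the depth-dose curves below are toy data, and the brute-force search is a sketch, not the authors' implementation.

```python
# Sketch: simplified 1-D global gamma index. Positions are in mm; the dose
# criterion is a fraction of the reference maximum (global normalization).

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=3.0, dd=0.03):
    """Gamma value at each reference point (brute-force minimum)."""
    dmax = max(ref_dose)
    return [min(((rp - ep) ** 2 / dta ** 2 +
                 (rd - ed) ** 2 / (dd * dmax) ** 2) ** 0.5
                for ep, ed in zip(eval_pos, eval_dose))
            for rp, rd in zip(ref_pos, ref_dose)]

depth = [float(i) for i in range(100)]              # depth in mm (toy grid)
measured = [100.0 * 0.99 ** i for i in range(100)]  # toy depth-dose curve
calculated = [d * 1.01 for d in measured]           # TPS dose, 1% high
g = gamma_1d(depth, measured, depth, calculated)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)  # fraction passing gamma<=1
```

With only a uniform 1% dose offset, every point passes comfortably; real validation compares measured and calculated curves point by point in exactly this way, just with clinical data and 2-D/3-D grids.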
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.
2015-12-01
The measurement system based on the GEM (Gas Electron Multiplier) detector is developed for X-ray diagnostics of magnetic confinement fusion plasmas. The Triple Gas Electron Multiplier (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector. The paper focuses on the measurement task and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful for physicists; it covers the software part of the project, between the electronic hardware and the physics applications. The project is original and was developed by the authors. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition, determined by hardware and software processing, are introduced. Typical measurement issues are discussed with a view to enhancing data quality. The primary version, based on a 1-D GEM detector, was applied for the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures, initially for investigation purposes. Two detector structures, with single-pixel sensors and with multi-pixel (directional) sensors, are considered for two-dimensional X-ray imaging. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference source and tokamak plasma are demonstrated.
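The histogramming step at the heart of such data processing can be sketched generically: binning event positions and pulse heights into position and energy spectra. The events, strip indices, and bin settings below are invented for illustration, not taken from the instrument.

```python
# Sketch: fixed-bin histograms of detector events given as
# (strip index, collected charge in ADC units).

def histogram(values, nbins, lo, hi):
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

events = [(3, 120), (3, 130), (7, 90), (7, 95), (7, 88), (12, 250)]
position_hist = histogram([s for s, _ in events], nbins=16, lo=0, hi=16)
energy_hist = histogram([q for _, q in events], nbins=8, lo=0, hi=320)
```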
NASA Astrophysics Data System (ADS)
Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.
2017-02-01
In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
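A bare-bones version of the detection stage, thresholding a thermal frame and grouping hot pixels into labeled sources, can be sketched as follows. The frame and threshold are toy values, and real pipelines (such as the astronomical source extractors mentioned above) are far more sophisticated.

```python
# Sketch: threshold a thermal frame, then label 4-connected groups of hot
# pixels with a flood fill and report (centroid_y, centroid_x, pixel_count).

def detect_sources(frame, threshold):
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] > threshold and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w and
                                frame[ny][nx] > threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy_mean = sum(p[0] for p in pixels) / len(pixels)
                cx_mean = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy_mean, cx_mean, len(pixels)))
    return blobs

# Toy 6x6 frame: two warm objects on a cool background (arbitrary units).
frame = [[20, 20, 20, 20, 20, 20],
         [20, 35, 36, 20, 20, 20],
         [20, 34, 35, 20, 20, 20],
         [20, 20, 20, 20, 20, 20],
         [20, 20, 20, 20, 38, 20],
         [20, 20, 20, 20, 20, 20]]
sources = detect_sources(frame, threshold=30)
```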
Tools Ensure Reliability of Critical Software
NASA Technical Reports Server (NTRS)
2012-01-01
In November 2006, after attempting to make a routine maneuver, NASA's Mars Global Surveyor (MGS) reported unexpected errors. The onboard software switched to backup resources, and a 2-day lapse in communication took place between the spacecraft and Earth. When a signal was finally received, it indicated that MGS had entered safe mode, a state of restricted activity in which the computer awaits instructions from Earth. After more than 9 years of successful operation gathering data and snapping pictures of Mars to characterize the planet's land and weather, communication between MGS and Earth suddenly stopped. Months later, a report from NASA's internal review board found that the spacecraft's battery had failed due to an unfortunate sequence of events. Updates to the spacecraft's software, which had taken place months earlier, were written to the wrong memory address in the spacecraft's computer. In short, the mission ended because of a software defect. Over the last decade, spacecraft have become increasingly reliant on software to carry out mission operations. In fact, the next mission to Mars, the Mars Science Laboratory, will rely on more software than all earlier missions to Mars combined. According to Gerard Holzmann, manager at the Laboratory for Reliable Software (LaRS) at NASA's Jet Propulsion Laboratory (JPL), even the fault protection systems on a spacecraft are mostly software-based. For reasons like these, well-functioning software is critical for NASA. In the same year as the failure of MGS, Holzmann presented a new approach to critical software development to help reduce risk and provide consistency. He proposed The Power of 10: Rules for Developing Safety-Critical Code, a small set of rules that can easily be remembered, clearly relate to risk, and allow compliance to be verified. The reaction at JPL was positive, and developers in the private sector embraced Holzmann's ideas.
Data collection and evaluation for experimental computer science research
NASA Technical Reports Server (NTRS)
Zelkowitz, Marvin V.
1983-01-01
The Software Engineering Laboratory has been monitoring software development at NASA Goddard Space Flight Center since 1976. The data collection activities of the Laboratory and some of the difficulties of obtaining reliable data are described. In addition, the application of this data collection process to a current prototyping experiment is reviewed.
[The research in a foot pressure measuring system based on LabVIEW].
Li, Wei; Qiu, Hong; Xu, Jiang; He, Jiping
2011-01-01
This paper presents a foot pressure measuring system based on LabVIEW. The designs of the hardware and software systems are presented. LabVIEW is used to design the application interface for displaying plantar pressure. The system realizes plantar pressure data acquisition, data storage, waveform display, and waveform playback. The testing results were in line with the changing trend of normal gait, which conforms to human systems engineering theory and demonstrates the system's reliability. The system gives vivid, visual results, and provides a new method of measuring foot pressure as well as references for the design of an insole system.
[Construction and application of an onboard absorption analyzer device for CDOM].
Lin, Jun-Fang; Sun, Zhao-Hua; Cao, Wen-Xi; Hu, Shui-Bo; Xu, Zhan-Tang
2013-04-01
Colored dissolved organic matter (CDOM) plays an important role in marine ecosystems. In order to solve the current problems in the measurement of CDOM absorption, an automated onboard analyzer based on liquid core waveguides (Teflon AF LWCC/LCW) was constructed. This analyzer has remarkable characteristics, including an adjustable optical pathlength, wide measurement range, and high sensitivity. The filtration and injection module implements automated filtration, sample injection, and LWCC cleaning. The LabVIEW software platform efficiently controls the running state of the analyzer and acquires real-time data, including light absorption spectra, GPS data, and CTW data. Comparison experiments and shipboard measurements proved that the analyzer is reliable and robust.
Condition assessment of nonlinear processes
Hively, Lee M.; Gailey, Paul C.; Protopopescu, Vladimir A.
2002-01-01
A reliable technique is presented for measuring condition change in nonlinear data such as brain waves. The nonlinear data is filtered and discretized into windowed data sets. The system dynamics within each data set is represented by a sequence of connected phase-space points, and for each data set a distribution function is derived. New metrics are introduced that evaluate the distance between distribution functions. The metrics are properly renormalized to provide robust and sensitive relative measures of condition change. As an example, these measures can be used on EEG data to provide timely discrimination between normal, preseizure, seizure, and post-seizure states in epileptic patients. Apparatus utilizing hardware or software to perform the method and provide an indicative output is also disclosed.
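The core idea, comparing distribution functions derived from windowed data, can be sketched with binned distributions and a simple chi-squared distance. The distance measure and toy signals below are illustrative only; the patent's actual metrics are renormalized differently.

```python
# Sketch: condition change as a distance between binned distribution
# functions of two data windows.

def binned(window, nbins, lo, hi):
    """Normalized histogram of a data window."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in window:
        i = min(int((v - lo) / width), nbins - 1)
        counts[i] += 1
    return [c / len(window) for c in counts]

def chi2_distance(p, q):
    """Symmetric chi-squared distance between two binned distributions."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

baseline_win = [(i % 10) / 10 + 0.05 for i in range(200)]  # "normal" window
changed_win = [v * v for v in baseline_win]                # altered dynamics
baseline = binned(baseline_win, 10, 0.0, 1.0)
changed = binned(changed_win, 10, 0.0, 1.0)
change = chi2_distance(baseline, changed)  # > 0: the distribution has shifted
same = chi2_distance(baseline, baseline)   # 0: identical distributions
```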
Vestibular function assessment using the NIH Toolbox
Schubert, Michael C.; Whitney, Susan L.; Roberts, Dale; Redfern, Mark S.; Musolino, Mark C.; Roche, Jennica L.; Steed, Daniel P.; Corbin, Bree; Lin, Chia-Cheng; Marchetti, Greg F.; Beaumont, Jennifer; Carey, John P.; Shepard, Neil P.; Jacobson, Gary P.; Wrisley, Diane M.; Hoffman, Howard J.; Furman, Gabriel; Slotkin, Jerry
2013-01-01
Objective: Development of an easy to administer, low-cost test of vestibular function. Methods: Members of the NIH Toolbox Sensory Domain Vestibular, Vision, and Motor subdomain teams collaborated to identify 2 tests: 1) Dynamic Visual Acuity (DVA), and 2) the Balance Accelerometry Measure (BAM). Extensive work was completed to identify and develop appropriate software and hardware. More than 300 subjects between the ages of 3 and 85 years, with and without vestibular dysfunction, were recruited and tested. Currently accepted gold standard measures of static visual acuity, vestibular function, dynamic visual acuity, and balance were performed to determine validity. Repeat testing was performed to examine reliability. Results: The DVA and BAM tests are affordable and appropriate for use for individuals 3 through 85 years of age. The DVA had fair to good reliability (0.41–0.94) and sensitivity and specificity (50%–73%), depending on age and optotype chosen. The BAM test was moderately correlated with center of pressure (r = 0.42–0.48) and dynamic posturography (r = −0.48), depending on age and test condition. Both tests differentiated those with and without vestibular impairment and the young from the old. Each test was reliable. Conclusion: The newly created DVA test provides a valid measure of visual acuity with the head still and moving quickly. The novel BAM is a valid measure of balance. Both tests are sensitive to age-related changes and are able to screen for impairment of the vestibular system. PMID:23479540
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Judge, R. W.
1981-01-01
A model of a software development process is described. The software development process is seen to consist of a sequence of activities, such as 'program design' and 'module development' (or coding). A manpower estimate is made by multiplying code size by the rates (man months per thousand lines of code) for each of the activities relevant to the particular case of interest and summing up the results. The effect of four objectively determinable factors (organization, software product type, computer type, and code type) on productivity values for each of nine principal software development activities was assessed. Four factors were identified which account for 39% of the observed productivity variation.
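The estimating rule described above reduces to a simple sum: multiply code size by each activity's rate (man-months per KLOC) and add the products. The sketch below illustrates the arithmetic; the activity names and rates are hypothetical examples, not values from the study.

```python
# Activity-based manpower estimate: effort = sum over activities of
# size (KLOC) x rate (man-months per KLOC), as the abstract describes.

def estimate_man_months(size_kloc, rates):
    """Total effort for the activities relevant to this project."""
    return sum(size_kloc * rate for rate in rates.values())

rates = {  # man-months per thousand lines of code (illustrative values)
    "program design": 1.5,
    "module development": 3.0,
    "integration test": 2.0,
}

print(estimate_man_months(10.0, rates))  # 10 KLOC -> 65.0 man-months
```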
NASA Technical Reports Server (NTRS)
Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.
1992-01-01
Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.
Automated verification of flight software. User's manual
NASA Technical Reports Server (NTRS)
Saib, S. H.
1982-01-01
AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.
2003-03-01
Fragment of report front matter: 2.8.5 Marginal Analysis Method; Figure 11, Improved Configuration of Figure 10 (Increases Basic System Reliability); Figure 12, Example of Marginal Analysis; View of Main Book of Software; Figure 20, The View of Data Worksheet.
Third Molars on the Internet: A Guide for Assessing Information Quality and Readability.
Hanna, Kamal; Brennan, David; Sambrook, Paul; Armfield, Jason
2015-10-06
Directing patients suffering from third molars (TMs) problems to high-quality online information is not only medically important, but also could enable better engagement in shared decision making. This study aimed to develop a scale that measures the scientific information quality (SIQ) for online information concerning wisdom tooth problems and to conduct a quality evaluation for online TMs resources. In addition, the study evaluated whether a specific piece of readability software (Readability Studio Professional 2012) might be reliable in measuring information comprehension, and explored predictors for the SIQ Scale. A cross-sectional sample of websites was retrieved using certain keywords and phrases such as "impacted wisdom tooth problems" using 3 popular search engines. The retrieved websites (n=150) were filtered. The retained 50 websites were evaluated to assess their characteristics, usability, accessibility, trust, readability, SIQ, and their credibility using DISCERN and Health on the Net Code (HoNCode). Websites' mean scale scores varied significantly across website affiliation groups such as governmental, commercial, and treatment provider bodies. The SIQ Scale had a good internal consistency (alpha=.85) and was significantly correlated with DISCERN (r=.82, P<.01) and HoNCode (r=.38, P<.01). Less than 25% of websites had SIQ scores above 75%. The mean readability grade (10.3, SD 1.9) was above the recommended level, and was significantly correlated with the Scientific Information Comprehension Scale (r=.45, P<.01), which provides evidence for convergent validity. Website affiliation and DISCERN were significantly associated with SIQ (P<.01) and explained 76% of the SIQ variance. The developed SIQ Scale was found to demonstrate reliability and initial validity. Website affiliation, DISCERN, and HoNCode were significant predictors for the quality of scientific information.
The Readability Studio software estimates were associated with scientific information comprehensiveness measures.
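The study above reports readability grades from the commercial Readability Studio package. As an illustration of how such grades are computed, one widely used formula is the Flesch-Kincaid grade level: 0.39 · (words/sentence) + 11.8 · (syllables/word) − 15.59. The rough syllable counter below is a heuristic for the sketch, not the software's algorithm.

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowel letters, minimum one."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade level of a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * syllables / len(words) - 15.59)

sample = ("Impacted wisdom teeth can cause pain. "
          "Your dentist may recommend surgical removal.")
print(round(fk_grade(sample), 1))  # -> 9.4 (roughly a 9th-grade level)
```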
Paiva, Carlos Eduardo; Siquelli, Felipe Augusto Ferreira; Zaia, Gabriela Rossi; de Andrade, Diocésio Alves Pinto; Borges, Marcos Aristoteles; Jácome, Alexandre A; Giroldo, Gisele Augusta Sousa Nascimento; Santos, Henrique Amorim; Hahn, Elizabeth A; Uemura, Gilberto; Paiva, Bianca Sakamoto Ribeiro
2016-01-01
To develop and validate a new multimedia instrument to measure health-related quality of life (HRQOL) in Portuguese-speaking patients with cancer. A mixed-methods study conducted in a large Brazilian Cancer Hospital. The instrument was developed along the following sequential phases: identification of HRQOL issues through qualitative content analysis of individual interviews, evaluation of the most important items according to the patients, review of the literature, evaluation by an expert committee, and pretesting. In sequence, an exploratory factor analysis was conducted (pilot testing, n = 149) to reduce the number of items and to define domains and scores. The psychometric properties of the IQualiV-OG-21 were measured in a large multicentre Brazilian study (n = 323). Software containing multimedia resources was developed to facilitate self-administration of the IQualiV-OG-21; its feasibility and patients' preferences ("paper and pencil" vs. software) were further tested (n = 54). An exploratory factor analysis reduced the 30-item instrument to 21 items. The IQualiV-OG-21 was divided into 6 domains: emotional, physical, existential, interpersonal relationships, functional and financial. The multicentre study confirmed that it was valid and reliable. The electronic multimedia instrument was easy to complete and acceptable to patients. Regarding preferences, 61.1 % of them preferred the electronic format in comparison with the paper and pencil format. The IQualiV-OG-21 is a new valid and reliable multimedia HRQOL instrument that is well-understood, even by patients with low literacy skills, and can be answered quickly. It is a useful new tool that can be translated and tested in other cultures and languages.
1981-04-30
However, SREM was not designed to harmonize these kinds of problems. Rather, it is a tool to investigate the logic of the processing specified in the...design. Supporting programs were also conducted to perform basic research into such areas as software reliability, static and dynamic validation techniques...development. • Maintain requirements development independent of the target machine and the eventual software design. • Allow for easy response to
An Open Avionics and Software Architecture to Support Future NASA Exploration Missions
NASA Technical Reports Server (NTRS)
Schlesinger, Adam
2017-01-01
The presentation describes an avionics and software architecture that has been developed through NASA's Advanced Exploration Systems (AES) division. The architecture is open-source, highly reliable with fault tolerance, and utilizes standard capabilities and interfaces, which are scalable and customizable to support future exploration missions. Specific focus areas of discussion will include command and data handling, software, human interfaces, communication and wireless systems, and systems engineering and integration.
Component Verification and Certification in NASA Missions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Penix, John; Norvig, Peter (Technical Monitor)
2001-01-01
Software development for NASA missions is a particularly challenging task. Missions are extremely ambitious scientifically, have very strict time frames, and must be accomplished with a maximum degree of reliability. Verification technologies must therefore be pushed far beyond their current capabilities. Moreover, reuse and adaptation of software architectures and components must be incorporated in software development within and across missions. This paper discusses NASA applications that we are currently investigating from these perspectives.
QCScreen: a software tool for data quality control in LC-HRMS based metabolomics.
Simader, Alexandra Maria; Kluger, Bernhard; Neumann, Nora Katharina Nicole; Bueschl, Christoph; Lemmens, Marc; Lirk, Gerald; Krska, Rudolf; Schuhmacher, Rainer
2015-10-24
Metabolomics experiments often comprise large numbers of biological samples resulting in huge amounts of data. This data needs to be inspected for plausibility before data evaluation to detect putative sources of error e.g. retention time or mass accuracy shifts. Especially in liquid chromatography-high resolution mass spectrometry (LC-HRMS) based metabolomics research, proper quality control checks (e.g. for precision, signal drifts or offsets) are crucial prerequisites to achieve reliable and comparable results within and across experimental measurement sequences. Software tools can support this process. The software tool QCScreen was developed to offer a quick and easy data quality check of LC-HRMS derived data. It allows a flexible investigation and comparison of basic quality-related parameters within user-defined target features and the possibility to automatically evaluate multiple sample types within or across different measurement sequences in a short time. It offers a user-friendly interface that allows an easy selection of processing steps and parameter settings. The generated results include a coloured overview plot of data quality across all analysed samples and targets and, in addition, detailed illustrations of the stability and precision of the chromatographic separation, the mass accuracy and the detector sensitivity. The use of QCScreen is demonstrated with experimental data from metabolomics experiments using selected standard compounds in pure solvent. The application of the software identified problematic features, samples and analytical parameters and suggested which data files or compounds required closer manual inspection. QCScreen is an open source software tool which provides a useful basis for assessing the suitability of LC-HRMS data prior to time consuming, detailed data processing and subsequent statistical analysis. 
It accepts the generic mzXML format and thus can be used with many different LC-HRMS platforms to process both multiple quality control sample types as well as experimental samples in one or more measurement sequences.
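A minimal sketch of the kind of check QCScreen automates: flag QC injections whose mass accuracy (ppm error) or retention time drift exceed user-defined tolerances for a target feature. The target m/z, retention times, and tolerances below are hypothetical.

```python
# Per-feature QC check on mass accuracy and retention time stability,
# the two drift sources the abstract names for LC-HRMS sequences.

def ppm_error(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def qc_check(measured_mz, theoretical_mz, rt_s, expected_rt_s,
             ppm_tol=5.0, rt_tol_s=10.0):
    """True if both mass accuracy and retention time are in tolerance."""
    ok_mass = abs(ppm_error(measured_mz, theoretical_mz)) <= ppm_tol
    ok_rt = abs(rt_s - expected_rt_s) <= rt_tol_s
    return ok_mass and ok_rt

print(qc_check(445.1207, 445.1200, 312.0, 308.0))  # ~1.6 ppm -> True
print(qc_check(445.1250, 445.1200, 312.0, 308.0))  # ~11 ppm -> False
```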
Network Design in Close-Range Photogrammetry with Short Baseline Images
NASA Astrophysics Data System (ADS)
Barazzetti, L.
2017-08-01
The availability of automated software for image-based 3D modelling has changed the way people acquire images for photogrammetric applications. Short baseline images are required to match image points with SIFT-like algorithms, obtaining more images than those necessary for "old fashioned" photogrammetric projects based on manual measurements. This paper describes some considerations on network design for short baseline image sequences, especially on the precision and reliability of bundle adjustment. Simulated results reveal that the large number of 3D points used for image orientation has very limited impact on network precision.
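Network precision in bundle adjustment is commonly assessed from the normal matrix of the linearized system: Cov = σ₀² (JᵀJ)⁻¹, whose diagonal gives the theoretical standard deviations of the estimated parameters. The tiny Jacobian below is a made-up stand-in for a real photogrammetric design matrix, purely to illustrate the computation.

```python
import numpy as np

def parameter_precision(J, sigma0=1.0):
    """1-sigma theoretical precision of each estimated parameter."""
    N = J.T @ J                        # normal matrix of the adjustment
    cov = sigma0**2 * np.linalg.inv(N)
    return np.sqrt(np.diag(cov))

J = np.array([[1.0, 0.2],
              [0.1, 1.0],
              [0.9, 0.3],
              [0.2, 1.1]])             # 4 observations, 2 parameters
print(parameter_precision(J, sigma0=0.5))
```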
[An experimental research on the fabrication of the fused porcelain to CAD/CAM molar crown].
Dai, Ning; Zhou, Yongyao; Liao, Wenhe; Yu, Qing; An, Tao; Jiao, Yiqun
2007-02-01
This paper introduced the fabrication process of the fused porcelain molar crown with CAD/CAM technology. First, tooth preparation data were retrieved by the 3D optical measuring system. Then, we reconstructed the inner surface and designed the outer surface shape with computer-aided design software. Finally, the mini high-speed NC milling machine was used to produce the fused porcelain CAD/CAM molar crown. The result proved that the fabrication process is reliable and efficient, and that the dental restoration quality is steady and precise.
Validation of Medical Tourism Service Quality Questionnaire (MTSQQ) for Iranian Hospitals
Qolipour, Mohammad; Torabipour, Amin; Khiavi, Farzad Faraji; Malehi, Amal Saki
2017-01-01
Introduction Assessing service quality is one of the basic requirements to develop the medical tourism industry. There is no valid and reliable tool to measure service quality of medical tourism. This study aimed to determine the reliability and validity of a Persian version of the medical tourism service quality questionnaire for Iranian hospitals. Methods To validate the medical tourism service quality questionnaire (MTSQQ), a cross-sectional study was conducted on 250 Iraqi patients referred to hospitals in Ahvaz (Iran) in 2015. To design a questionnaire and determine its content validity, the Delphi Technique (3 rounds) with the participation of 20 medical tourism experts was used. Construct validity of the questionnaire was assessed through exploratory and confirmatory factor analysis. Reliability was assessed using Cronbach’s alpha coefficient. Data were analyzed by Excel 2007, SPSS version 18, and Lisrel 8.0 software. Results The content validity of the questionnaire with CVI=0.775 was confirmed. According to exploratory factor analysis, the MTSQQ included 31 items and 8 dimensions (tangibility, reliability, responsiveness, assurance, empathy, exchange and travel facilities, technical and infrastructure facilities, and safety and security). Construct validity of the questionnaire was confirmed, based on the goodness-of-fit quantities of the model (RMSEA=0.032, CFI=0.98, GFI=0.88). Cronbach’s alpha coefficient was 0.837 and 0.919 for the expectation and perception questionnaires. Conclusion The results of the study showed that the medical tourism SERVQUAL questionnaire with 31 items and 8 dimensions was a valid and reliable tool to measure service quality of medical tourism in Iranian hospitals. PMID:28461863
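The validation above reports Cronbach's alpha as its reliability measure. As an illustration, the coefficient can be computed directly from an items-by-respondents score matrix: alpha = k/(k−1) · (1 − Σ item variances / variance of totals). The Likert-style responses below are made up for the sketch.

```python
# Cronbach's alpha from raw item scores (population variance used).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one score list per item, one entry per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [  # 3 items, 4 respondents (hypothetical Likert scores)
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # -> 0.818
```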
Almaqrami, Bushra-Sufyan; Alhammadi, Maged-Sultan
2018-01-01
Background The objective of this study was to analyse three-dimensionally the reliability and correlation of angular and linear measurements in the assessment of anteroposterior skeletal discrepancy. Material and Methods In this retrospective cross-sectional study, a sample of 213 subjects was three-dimensionally analysed from cone-beam computed tomography scans. The sample was divided according to the three-dimensional measurement of the anteroposterior relation (ANB angle) into three groups (skeletal Class I, Class II and Class III). The anteroposterior cephalometric indicators were measured on volumetric images using Anatomage software (InVivo5.2). These measurements included three angular and seven linear measurements. Cross tabulations were performed to correlate the ANB angle with each method. The Intra-class Correlation Coefficient (ICC) test was applied for the difference between the two reliability measurements. A P value of < 0.05 was considered significant. Results There was a statistically significant (P<0.05) agreement between all methods used, with variability in the assessment of different anteroposterior relations. The highest correlation was between ANB and DSOJ (0.913); correlation was strong with AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal (0.896, 0.890, 0.878, 0.867, and 0.858, respectively), moderate with AD/SN and Beta angle (0.787 and 0.760), and weak with the corrected ANB angle (0.550). Conclusions Conjunctive usage of the ANB angle with DSOJ, AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal in 3D cephalometric analysis provides a more reliable and valid indicator of the skeletal anteroposterior relationship.
Clinical relevance: Most of the orthodontic literature depends on a single method (ANB), with its drawbacks, in the assessment of skeletal discrepancy, which is a cardinal factor for proper treatment planning; this study assessed three-dimensionally the degree of correlation between all available methods to make clinical judgement more accurate based on more than one method of assessment. Key words: Anteroposterior relationships, ANB angle, Three-dimension, CBCT. PMID:29750096
Characterisation of imperial college reactor centre legacy waste using gamma-ray spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shuhaimi, Alif Imran Mohd
Waste characterisation is a principal component of waste management strategy. The characterisation includes identification of chemical, physical and radiochemical parameters of radioactive waste. Failure to determine specific waste properties may result in sentencing waste packages which are not compliant with the regulations for long-term storage or disposal. This project involved measurement of the intensity and energy of gamma photons which may be emitted by radioactive waste generated during decommissioning of the Imperial College Reactor Centre (ICRC). The measurements used High Purity Germanium (HPGe) as the gamma-ray detector and ISOTOPIC-32 V4.1 as the analyser. In order to ensure the measurements provide reliable results, two quality control (QC) measurements using different matrices were conducted. The results from the QC measurements were used to determine the accuracy of the ISOTOPIC software.
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method. PMID:28825684
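A generic extended Kalman filter predict/update cycle, sketched to illustrate the tracking loop described above. The star sensor's actual state (attitude quaternion plus angular velocity) and its nonlinear measurement model are replaced here by a toy 1D constant-velocity system; the models f, h and Jacobians F, H are stand-ins, not the paper's formulation.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # position-velocity model
H = np.array([[1.0, 0.0]])               # only position is measured
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x, P = np.array([0.0, 1.0]), np.eye(2)
for k in range(1, 6):                    # true position = k * dt
    z = np.array([k * dt])
    x, P = ekf_step(x, P, z, lambda s: F @ s, F, lambda s: H @ s, H, Q, R)
print(x)  # estimate stays at the true [position, velocity] = [0.5, 1.0]
```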