Sample records for methods provide reliable

  1. Universal first-order reliability concept applied to semistatic structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies, providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, that is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  2. Universal first-order reliability concept applied to semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-07-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies, providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, that is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  3. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread and allocates threads that indicate a need for hardware checking to the corresponding paired cores. For threads that indicate inherent resilience, the method configures the selective pairing facility to provide multiple independent cores and allocates those threads accordingly.

  4. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
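
    As a concrete sketch of the MPP search described above, the snippet below implements the standard Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration with a finite-difference gradient. It illustrates the general technique only; it is not the paper's own algorithm or code, and the limit-state function is a made-up example.

```python
import math

def hlrf_beta(g, u0, tol=1e-9, max_iter=100, h=1e-6):
    """Hasofer-Lind / Rackwitz-Fiessler search for the most probable
    point (MPP) in standard normal space; returns the reliability
    index beta = ||u*||, the distance from the origin to the MPP."""
    u = list(u0)
    for _ in range(max_iter):
        gu = g(u)
        # forward-difference gradient of the limit-state function g
        grad = []
        for i in range(len(u)):
            up = list(u)
            up[i] += h
            grad.append((g(up) - gu) / h)
        norm2 = sum(gi * gi for gi in grad)
        # HL-RF step: project u onto the linearized surface g = 0
        scale = (sum(gi * ui for gi, ui in zip(grad, u)) - gu) / norm2
        u_new = [scale * gi for gi in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u))

# Linear limit state g(u) = 3 - u1 - u2 (failure when g < 0):
# here the exact answer is beta = 3 / sqrt(2).
beta = hlrf_beta(lambda u: 3.0 - u[0] - u[1], [0.0, 0.0])
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # FORM estimate, Phi(-beta)
```

    The minimum distance beta feeds the approximate probability integration (FORM) exactly as the abstract states; SORM would add a curvature correction at the same point.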

  5. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.

  6. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first-order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. The second-order reliability method is then required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
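
    The symmetric rank-one (SR1) update mentioned above avoids forming the exact Hessian of the probabilistic constraints. The sketch below shows the SR1 formula in isolation (the paper's improved stability transformation method is not reproduced); matrices are plain Python lists to keep the example dependency-free, and the test Hessian is invented for illustration.

```python
import math

def sr1_update(B, s, y, skip_tol=1e-8):
    """Symmetric rank-one (SR1) quasi-Newton update of a Hessian
    approximation B, chosen so the secant condition B_new @ s = y holds:
        B_new = B + (r r^T) / (r^T s),  with  r = y - B s."""
    n = len(s)
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    denom = sum(r[i] * s[i] for i in range(n))
    r_norm = math.sqrt(sum(ri * ri for ri in r))
    s_norm = math.sqrt(sum(si * si for si in s))
    if abs(denom) < skip_tol * r_norm * s_norm:
        return B  # standard safeguard: skip a near-singular update
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)]
            for i in range(n)]

# One update from the identity toward the (hypothetical) Hessian
# H = [[4, 1], [1, 3]], using step s = [1, 0] and curvature y = H s.
B0 = [[1.0, 0.0], [0.0, 1.0]]
B1 = sr1_update(B0, [1.0, 0.0], [4.0, 1.0])
# By construction B1 satisfies the secant condition B1 @ s = y.
```

    Each RBDO iteration can thus reuse gradient information from successive MPP steps instead of recomputing second derivatives.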

  7. Test Assembly Implications for Providing Reliable and Valid Subscores

    ERIC Educational Resources Information Center

    Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J.

    2017-01-01

    This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…

  8. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William

    2009-01-01

    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods needed to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision-making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).

  9. Advancing methods for reliably assessing motivational interviewing fidelity using the Motivational Interviewing Skills Code

    PubMed Central

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.

    2014-01-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192
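
    The abstract does not name the agreement statistic, but Cohen's kappa is a common choice for utterance-level coding agreement between two raters; the following is a hypothetical illustration (made-up behavior codes), not the study's actual estimator.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    over the same sequence of coded units (e.g. utterances)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # expected agreement if the two raters coded independently
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to five utterances by two raters
a = ["reflect", "question", "reflect", "advise", "reflect"]
b = ["reflect", "question", "advise", "advise", "reflect"]
kappa = cohens_kappa(a, b)
```

    Tally-based (session-level) reliability aggregates before comparing, which is why it can look higher than utterance-level agreement, as the abstract notes.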

  10. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    PubMed

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
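
    For orientation, a composite reliability coefficient of the kind the paper treats can be computed from factor loadings and error variances. The sketch below shows McDonald's omega for a unidimensional composite with invented loadings; the paper's sample-size planning and confidence-interval machinery are not reproduced here.

```python
def composite_reliability(loadings, error_variances):
    """McDonald's omega for a unidimensional composite:
        omega = (sum of loadings)^2 /
                ((sum of loadings)^2 + sum of error variances)."""
    lam = sum(loadings)
    return lam**2 / (lam**2 + sum(error_variances))

# Four standardized items, each loading 0.7 (error variance 1 - 0.7^2)
loadings = [0.7, 0.7, 0.7, 0.7]
errors = [1 - l**2 for l in loadings]
omega = composite_reliability(loadings, errors)
```

    The point estimate above is exactly the fallible quantity the authors argue should always be reported with a confidence interval of planned width.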

  12. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining the high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values; this is often referred to as the stress-strength model. A sensitivity analysis was performed, comparing reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
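
    The stress-strength model described above has a closed form when stress and strength are both assumed normal; the sketch below computes it with illustrative parameters. The assumption is the whole point of the paper's caution: at high reliabilities the answer lives in the distribution tails, so a slightly wrong distributional assumption can move the result substantially.

```python
import math

def phi(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stress_strength_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """R = P(strength > stress) for independent normals: the margin
    M = strength - stress is itself normal, so
    R = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2))."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return phi(z)

# Hypothetical component: strength ~ N(50, 5), stress ~ N(30, 5)
R = stress_strength_reliability(50.0, 5.0, 30.0, 5.0)
```

    An extreme-value lower bound, as the abstract proposes, replaces the assumed-normal tail with a conservative one rather than trusting this closed form.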

  13. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation carry uncertainties, cannot reflect the reliability status of the RPS dynamically, and offer little support for maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the (safety-critical) digital RPS, by which the relationship between the reliability and the response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of digital RPS systems.

  14. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    NASA Astrophysics Data System (ADS)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System reliability theory has been a research hotspot of management science and systems engineering in recent years, and construction reliability is useful for quantitative evaluation of the project management level. A definition of construction reliability is given according to reliability theory and the target system of engineering project management. Based on fuzzy mathematics and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of the language operator, which provides the basis, method, and parameters for the evaluation of construction reliability. The method is shown to be scientific and reasonable for construction conditions and is a useful first step in the theory and methods of engineering project system reliability.

  15. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  16. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capturing data to support accurate measurement and reporting of the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data-entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification (CCC) System, for developing a reliable costing method for nursing services. Two approaches were explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach to costing nursing services.

  17. Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2010-01-01

    A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…

  18. Overview of Probabilistic Methods for SAE G-11 Meeting for Reliability and Uncertainty Quantification for DoD TACOM Initiative with SAE G-11 Division

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting during October 6-8 at the Best Western Sterling Inn, Sterling Heights (Detroit), Michigan is co-sponsored by US Army Tank-automotive & Armaments Command (TACOM). The meeting will provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members including members with national/international standing, the mission of the G-11's Probabilistic Methods Committee is to "enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries by better, faster, greener, smarter, affordable and reliable product development."

  19. Durability reliability analysis for corroding concrete structures under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.

  20. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview

    PubMed Central

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee

    2013-01-01

    Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One research objective is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment, and the conduct of clinical studies. Several reliability studies have been conducted in Western medicine. Investigation of the reliability of traditional Chinese, Japanese, and Sasang medicine diagnoses is in the formative stage, while reliability studies in Ayurveda are still preliminary. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implications for practice, education, and training. An introduction to reliability estimates and different study designs and statistical analyses is given for future studies in Ayurveda. PMID:23930037

  1. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each paired set of processor cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.

  2. Mechanical system reliability for long life space systems

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1994-01-01

    The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.

  3. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.

  4. Using the Reliability Theory for Assessing the Decision Confidence Probability for Comparative Life Cycle Assessments.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2016-03-01

    Comparative decision making is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method known as the FORM method. The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates associated importance factors that correspond to a sensitivity analysis with respect to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders and reduces the computational time.
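
    The decision confidence probability can be estimated by plain Monte Carlo, the baseline the paper compares against. The sketch below assumes normally distributed impact scores with hypothetical parameters so the simulation can be checked against a closed form; the paper's point is that FORM approximates the same probability with far fewer model evaluations.

```python
import math
import random

def decision_confidence(mu_a, sd_a, mu_b, sd_b, n=200_000, seed=1):
    """Monte Carlo estimate of P(impact_A < impact_B): the probability
    that option A has the smaller environmental impact."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(mu_a, sd_a) < rng.gauss(mu_b, sd_b)
               for _ in range(n))
    return wins / n

# Hypothetical impact scores: A ~ N(10, 1), B ~ N(12, 1).
p_mc = decision_confidence(10.0, 1.0, 12.0, 1.0)
# Closed form for the independent-normal case, for checking:
# P = Phi((mu_b - mu_a) / sqrt(sd_a^2 + sd_b^2))
z = (12.0 - 10.0) / math.hypot(1.0, 1.0)
p_exact = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

    With correlated or non-normal impacts the closed form disappears, which is where the FORM approximation earns its keep.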

  5. The Use of Invariance and Bootstrap Procedures as a Method to Establish the Reliability of Research Results.

    ERIC Educational Resources Information Center

    Sandler, Andrew B.

    Statistical significance is misused in educational and psychological research when it is applied as a method to establish the reliability of research results. Other techniques have been developed which can be correctly utilized to establish the generalizability of findings. Methods that do provide such estimates are known as invariance or…

  6. Reliable volumetry of the cervical spinal cord in MS patient follow-up data with cord image analyzer (Cordial).

    PubMed

    Amann, Michael; Pezold, Simon; Naegelin, Yvonne; Fundana, Ketut; Andělová, Michaela; Weier, Katrin; Stippich, Christoph; Kappos, Ludwig; Radue, Ernst-Wilhelm; Cattin, Philippe; Sprenger, Till

    2016-07-01

    Spinal cord (SC) atrophy is an important contributor to the development of disability in many neurological disorders including multiple sclerosis (MS). To assess the spinal cord atrophy in clinical trials and clinical practice, largely automated methods are needed due to the sheer amount of data. Moreover, using these methods in longitudinal trials requires them to deliver highly reliable measurements, enabling comparisons of multiple data sets of the same subject over time. We present a method for SC volumetry using 3D MRI data providing volume measurements for SC sections of fixed length and location. The segmentation combines a continuous max flow approach with SC surface reconstruction that locates the SC boundary based on image voxel intensities. Two cutting planes perpendicular to the SC centerline are determined based on predefined distances to an anatomical landmark, and the cervical SC volume (CSCV) is then calculated in-between these boundaries. The development of the method focused on its application in MRI follow-up studies; the method provides a high scan-rescan reliability, which was tested on healthy subject data. Scan-rescan reliability coefficients of variation (COV) were below 1 %, intra- and interrater COV were even lower (0.1-0.2 %). To show the applicability in longitudinal trials, 3-year follow-up data of 48 patients with a progressive course of MS were assessed. In this cohort, CSCV loss was the only significant predictor of disability progression (p = 0.02). We are, therefore, confident that our method provides a reliable tool for SC volumetry in longitudinal clinical trials.

  7. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) by designing a number of self-adequate autonomous sub-MGs via MG clustering. A multi-objective optimization problem is developed in which power-loss reduction, voltage profile improvement, and reliability enhancement are the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA is provided, based on the genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies, and the results support its effectiveness. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  8. A criterion for establishing life limits. [for Space Shuttle Main Engine service

    NASA Technical Reports Server (NTRS)

    Skopp, G. H.; Porter, A. A.

    1990-01-01

    The development of a rigorous statistical method that would utilize hardware-demonstrated reliability to evaluate hardware capability and provide ground rules for safe flight margin is discussed. A statistical-based method using the Weibull/Weibayes cumulative distribution function is described. Its advantages and inadequacies are pointed out. Another, more advanced procedure, Single Flight Reliability (SFR), determines a life limit which ensures that the reliability of any single flight is never less than a stipulated value at a stipulated confidence level. Application of the SFR method is illustrated.
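
    For context, a life limit of the kind discussed above can be read off a fitted Weibull reliability curve; the sketch below inverts R(t) for a target reliability using illustrative parameters. The paper's Weibayes confidence bounds and single-flight-reliability machinery are not reproduced here.

```python
import math

def weibull_reliability(t, eta, beta):
    """Weibull reliability: probability of surviving to time/cycles t,
        R(t) = exp(-(t / eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def life_limit(eta, beta, r_target):
    """Invert R(t) = r_target: the life limit at which reliability
    first drops to the target, t = eta * (-ln r_target)^(1/beta)."""
    return eta * (-math.log(r_target)) ** (1.0 / beta)

# Hypothetical fit: scale eta = 1000 cycles, shape beta = 2;
# retire hardware while per-unit reliability still exceeds 0.999.
t_limit = life_limit(eta=1000.0, beta=2.0, r_target=0.999)
```

    The SFR criterion goes one step further: it requires the stipulated reliability to hold for every single flight at a stipulated confidence level, not merely on average over the fleet life.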

  9. Reliability and Availability Evaluation Program Manual.

    DTIC Science & Technology

    1982-11-01

    research and development. The manual's purpose was to provide a practical method for making reliability measurements, measurements directly related to... Research, Development, Test and Evaluation. RMA: Reliability, Maintainability and Availability. R&R: Repair and Refurbishment, Repair and Replacement, etc... phenomena such as mechanical wear and chemical deterioration. Maintenance should... A number of researchers in the reliability field...

  10. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  11. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  12. Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time

    DOE PAGES

    Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.

    2017-12-20

    In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.
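    The RADAR method itself is not spelled out in this record, but one of the traditional methods it is compared against, logistic regression of pass/fail outcomes against time, can be sketched with a stdlib-only Newton's-method fit. The surveillance data below are hypothetical, not from the paper.

    ```python
    # Logistic regression of pass/fail data over time, fit by Newton's
    # method (IRLS); the fitted curve gives reliability vs. age, and the
    # paper's uncertainty bounds would be built around such estimates.
    import math

    def fit_logistic(times, passes, iters=25):
        """Fit P(pass|t) = 1/(1+exp(-(b0 + b1*t))) by Newton's method."""
        b0 = b1 = 0.0
        for _ in range(iters):
            g0 = g1 = h00 = h01 = h11 = 0.0
            for t, y in zip(times, passes):
                p = 1.0 / (1.0 + math.exp(-(b0 + b1 * t)))
                w = p * (1.0 - p)
                g0 += y - p           # gradient of the log-likelihood
                g1 += (y - p) * t
                h00 += w              # Hessian terms (X'WX)
                h01 += w * t
                h11 += w * t * t
            det = h00 * h11 - h01 * h01
            b0 += (h11 * g0 - h01 * g1) / det
            b1 += (h00 * g1 - h01 * g0) / det
        return b0, b1

    def pass_prob(t, b0, b1):
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * t)))

    # Hypothetical data: units aged t years, 1 = pass, 0 = fail
    ages = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    outcomes = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0]
    b0, b1 = fit_logistic(ages, outcomes)
    print(f"estimated reliability at age 9: {pass_prob(9, b0, b1):.3f}")
    ```
    
    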

  13. Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.

    In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.

  14. Method of Testing and Predicting Failures of Electronic Mechanical Systems

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, Frances A.

    1996-01-01

    A method is disclosed that employs a knowledge base of human expertise, built from a reliability model analysis, to implement diagnostic routines. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.

  15. Do you see what I see? Mobile eye-tracker contextual analysis and inter-rater reliability.

    PubMed

    Stuart, S; Hunt, D; Nell, J; Godfrey, A; Hausdorff, J M; Rochester, L; Alcock, L

    2018-02-01

    Mobile eye-trackers are currently used during real-world tasks (e.g. gait) to monitor visual and cognitive processes, particularly in ageing and Parkinson's disease (PD). However, contextual analysis involving fixation locations during such tasks is rarely performed due to its complexity. This study adapted a validated algorithm and developed a classification method to semi-automate contextual analysis of mobile eye-tracking data. We further assessed inter-rater reliability of the proposed classification method. A mobile eye-tracker recorded eye-movements during walking in five healthy older adult controls (HC) and five people with PD. Fixations were identified using a previously validated algorithm, which was adapted to provide still images of fixation locations (n = 116). The fixation location was manually identified by two raters (DH, JN), who classified the locations. Cohen's kappa correlation coefficients determined the inter-rater reliability. The algorithm successfully provided still images for each fixation, allowing manual contextual analysis to be performed. The inter-rater reliability for classifying the fixation location was high for both PD (kappa = 0.80, 95% agreement) and HC groups (kappa = 0.80, 91% agreement), which indicated a reliable classification method. This study developed a reliable semi-automated contextual analysis method for gait studies in HC and PD. Future studies could adapt this methodology for various gait-related eye-tracking studies.
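    Cohen's kappa, as used above to quantify inter-rater agreement on fixation locations, is straightforward to compute; a minimal sketch with hypothetical location labels (not the study's data):

    ```python
    # Cohen's kappa for two raters' nominal labels: observed agreement
    # corrected for the agreement expected by chance.
    def cohens_kappa(rater1, rater2):
        n = len(rater1)
        labels = set(rater1) | set(rater2)
        p_o = sum(a == b for a, b in zip(rater1, rater2)) / n    # observed
        p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n)  # by chance
                  for c in labels)
        return (p_o - p_e) / (1.0 - p_e)

    # Hypothetical fixation-location classifications from two raters
    r1 = ["path", "sign", "path", "person", "path", "sign"]
    r2 = ["path", "sign", "path", "person", "sign", "sign"]
    print(f"kappa = {cohens_kappa(r1, r2):.2f}")
    ```
    
    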

  16. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    NASA Astrophysics Data System (ADS)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates image accuracy, mostly based on a set of testing points with the same accuracy and reliability. However, such a set of testing points is difficult to obtain in areas where field measurement is difficult and high-accuracy reference data are scarce, so the horizontal accuracy of the orthophoto image is hard to test and evaluate. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and the expansion of its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with different accuracy and reliability, sourced from high-accuracy reference data and field measurement. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.

  17. Overview of Future of Probabilistic Methods and RMSL Technology and the Probabilistic Methods Education Initiative for the US Army at the SAE G-11 Meeting

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting, sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown, will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor to provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national and international standing, the mission of the G-11's Probabilistic Methods Committee is to enable and facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries through better, faster, greener, smarter, affordable and reliable product development.

  18. Transit service reliability

    DOT National Transportation Integrated Search

    1978-12-01

    This report presents a comprehensive overview of the subject of transit service reliability and provides a framework for a program of demonstrations and research studies which could be carried out under the Service and Methods Demonstration program. ...

  19. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The goals are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
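    The claim that crediting other V&V evidence reduces the required amount of testing has a classical baseline: with black-box testing alone and zero observed failures, demonstrating reliability R at confidence C requires n = ln(1-C)/ln(R) successful tests. A sketch of that baseline (the target values are illustrative):

    ```python
    # Classic success-run formula: number of consecutive failure-free
    # tests needed to demonstrate a reliability level at a confidence
    # level. Any reliability credit from other V&V evidence lowers the
    # effective target and hence this count.
    import math

    def zero_failure_tests(reliability, confidence):
        """Smallest n with 1 - reliability**n >= confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    # Demonstrating R = 0.999 at 99 % confidence by testing alone:
    print(zero_failure_tests(0.999, 0.99), "failure-free tests needed")
    ```
    
    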

  20. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, with the following results: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of machine center systems.

  1. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.

  2. The reliability of nonlinear least-squares algorithm for data analysis of neural response activity during sinusoidal rotational stimulation in semicircular canal neurons.

    PubMed

    Ren, Pengyu; Li, Bowen; Dong, Shiyao; Chen, Lin; Zhang, Yuelin

    2018-01-01

    Although many mathematical methods have been used to analyze neural activity under sinusoidal stimulation within the linear response range of the vestibular system, the reliability of these methods has not been reported, especially in the nonlinear response range. Here we chose a nonlinear least-squares algorithm (NLSA) with a sinusoidal model to analyze the response of semicircular canal neurons (SCNs) during sinusoidal rotational stimulation (SRS) over a nonlinear response range. Our aim was to obtain a reliable mathematical method for data analysis under SRS in the vestibular system. Our data indicated that the reliability of this method over an entire SCN population was quite satisfactory. However, the reliability depended strongly and negatively on the regularity of neural discharge. In addition, the stimulation parameters were vital factors influencing the reliability: frequency had a significant negative effect, whereas amplitude had a conspicuous positive effect. Thus, NLSA with a sinusoidal model proved a reliable mathematical tool for analyzing neural response activity under SRS in the vestibular system, and it is most suitable for stimulation with low frequency but high amplitude, suggesting that this method can be used in the nonlinear response range. This method overcomes the restriction of neural activity analysis to the linear response range and provides a solid foundation for future study of the nonlinear response range in the vestibular system.
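    When the stimulation frequency is known, as with SRS, fitting a sinusoidal model is actually a linear least-squares problem in the sine, cosine and offset terms, which a nonlinear solver would also converge to. A stdlib-only sketch on synthetic data (not the paper's recordings):

    ```python
    # Least-squares fit of y = a*sin(wt) + b*cos(wt) + c with known
    # frequency, via the 3x3 normal equations and Gaussian elimination.
    import math

    def fit_sinusoid(t, y, freq_hz):
        w = 2.0 * math.pi * freq_hz
        cols = [[math.sin(w * ti) for ti in t],
                [math.cos(w * ti) for ti in t],
                [1.0] * len(t)]
        # Normal equations A x = rhs
        A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
             for i in range(3)]
        rhs = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(3)]
        # Gaussian elimination with partial pivoting
        for k in range(3):
            piv = max(range(k, 3), key=lambda r: abs(A[r][k]))
            A[k], A[piv] = A[piv], A[k]
            rhs[k], rhs[piv] = rhs[piv], rhs[k]
            for r in range(k + 1, 3):
                f = A[r][k] / A[k][k]
                for c in range(k, 3):
                    A[r][c] -= f * A[k][c]
                rhs[r] -= f * rhs[k]
        x = [0.0, 0.0, 0.0]
        for k in (2, 1, 0):
            x[k] = (rhs[k] - sum(A[k][j] * x[j] for j in range(k + 1, 3))) / A[k][k]
        return x  # [a, b, c]

    # Synthetic 0.5 Hz response: amplitude terms 2.0 and 0.5, offset 3.0
    ts = [i * 0.1 for i in range(50)]
    ys = [2.0 * math.sin(math.pi * ti) + 0.5 * math.cos(math.pi * ti) + 3.0
          for ti in ts]
    a_fit, b_fit, c_fit = fit_sinusoid(ts, ys, 0.5)
    print(f"response amplitude: {math.hypot(a_fit, b_fit):.3f}")
    ```
    
    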

  3. The reliability of nonlinear least-squares algorithm for data analysis of neural response activity during sinusoidal rotational stimulation in semicircular canal neurons

    PubMed Central

    Li, Bowen; Dong, Shiyao; Chen, Lin; Zhang, Yuelin

    2018-01-01

    Although many mathematical methods have been used to analyze neural activity under sinusoidal stimulation within the linear response range of the vestibular system, the reliability of these methods has not been reported, especially in the nonlinear response range. Here we chose a nonlinear least-squares algorithm (NLSA) with a sinusoidal model to analyze the response of semicircular canal neurons (SCNs) during sinusoidal rotational stimulation (SRS) over a nonlinear response range. Our aim was to obtain a reliable mathematical method for data analysis under SRS in the vestibular system. Our data indicated that the reliability of this method over an entire SCN population was quite satisfactory. However, the reliability depended strongly and negatively on the regularity of neural discharge. In addition, the stimulation parameters were vital factors influencing the reliability: frequency had a significant negative effect, whereas amplitude had a conspicuous positive effect. Thus, NLSA with a sinusoidal model proved a reliable mathematical tool for analyzing neural response activity under SRS in the vestibular system, and it is most suitable for stimulation with low frequency but high amplitude, suggesting that this method can be used in the nonlinear response range. This method overcomes the restriction of neural activity analysis to the linear response range and provides a solid foundation for future study of the nonlinear response range in the vestibular system. PMID:29304173

  4. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: what new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods? Two approaches to software reliability engineering appear promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects applicable to software. This research effort has investigated mathematical aspects of hardware reliability, has identified those applicable to software, and is currently applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating the reliability of the final system is extremely difficult when some subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  5. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
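    The trade between identical spares and technically diverse systems comes down to common-cause failures. A back-of-envelope sketch with hypothetical per-system reliability figures (not the paper's numbers), using the simple beta-factor common-cause model:

    ```python
    # Redundancy arithmetic: independent redundancy vs. redundancy
    # degraded by a common-cause fraction (beta-factor model).
    def parallel_identical(r, n):
        """n identical redundant systems with independent failures."""
        return 1.0 - (1.0 - r) ** n

    def parallel_common_cause(r, n, beta):
        """A fraction `beta` of each system's failure probability is a
        common cause that defeats all n copies at once."""
        q = 1.0 - r
        return (1.0 - beta * q) * (1.0 - ((1.0 - beta) * q) ** n)

    r = 0.99  # hypothetical single-system reliability
    print(f"triple redundancy, independent:      {parallel_identical(r, 3):.8f}")
    print(f"triple redundancy, 10% common cause: {parallel_common_cause(r, 3, 0.1):.8f}")
    ```

    With any appreciable common-cause fraction, the common-cause term dominates the result, which is why adding identical spares "may be defeated by common cause failures" while technically diverse systems (lower effective beta) remain effective.
    
    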

  6. Use of Very Weak Radiation Sources to Determine Aircraft Runway Position

    NASA Technical Reports Server (NTRS)

    Drinkwater, Fred J., III; Kibort, Bernard R.

    1965-01-01

    Various methods of providing runway information in the cockpit during the take-off and landing roll have been proposed. The most reliable method has been to use runway distance markers when visible. Flight tests were used to evaluate the feasibility of using weak radio-active sources to trigger a runway distance counter in the cockpit. The results of these tests indicate that a weak radioactive source would provide a reliable signal by which this indicator could be operated.

  7. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals

    PubMed Central

    Chung, Chia-Fang; Xu, Kaiyuan; Dong, Yi; Schenk, Jeanette M.; Cain, Kevin; Munson, Sean; Heitkemper, Margaret M.

    2017-01-01

    There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers’ interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff’s α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3–7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers. PMID:29113044
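    Krippendorff's α for nominal data, as used above, can be computed from a coincidence matrix of paired ratings; a minimal sketch with hypothetical trigger-food labels (not the study's data):

    ```python
    # Krippendorff's alpha for nominal ratings; each inner list holds
    # one unit's (journal's) ratings, units with fewer than two ratings
    # are unpairable and ignored.
    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        o = Counter()                     # coincidence matrix
        for ratings in units:
            m = len(ratings)
            if m < 2:
                continue
            for c, k in permutations(ratings, 2):
                o[(c, k)] += 1.0 / (m - 1)
        marg = Counter()                  # marginal value counts
        for (c, _k), v in o.items():
            marg[c] += v
        n = sum(marg.values())
        d_o = sum(v for (c, k), v in o.items() if c != k) / n
        d_e = sum(marg[c] * marg[k] for c in marg for k in marg
                  if c != k) / (n * (n - 1))
        return 1.0 - d_o / d_e

    # Hypothetical: two raters' trigger-food calls on four journals
    journals = [["fodmap", "fodmap"], ["gluten", "gluten"],
                ["caffeine", "fodmap"], ["fiber", "fiber"]]
    print(f"alpha = {krippendorff_alpha_nominal(journals):.2f}")
    ```
    
    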

  8. Survey of Software Assurance Techniques for Highly Reliable Systems

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy

    2004-01-01

    This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.

  9. A Method to Increase Drivers' Trust in Collision Warning Systems Based on Reliability Information of Sensor

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shigeyoshi; Wada, Takahiro; Akita, Tokihiko; Doi, Shun'ichi

    A driver's workload tends to increase when driving in complicated traffic environments, such as during a lane change. In such cases, rear collision warning is effective in reducing cognitive workload. On the other hand, it has been pointed out that false or missing alarms caused by sensor errors decrease the driver's trust in the warning system, which can result in low system efficiency. Suppose that reliability information about the sensor is provided in real time. In this paper, we propose a new warning method that increases the driver's trust in the system, even with low sensor reliability, by utilizing this sensor reliability information. The effectiveness of the warning method is shown by driving simulator experiments.

  10. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    ERIC Educational Resources Information Center

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining of the…

  11. Techniques for control of long-term reliability of complex integrated circuits. I - Reliability assurance by test vehicle qualification.

    NASA Technical Reports Server (NTRS)

    Van Vonno, N. W.

    1972-01-01

    Development of an alternate approach to the conventional methods of reliability assurance for large-scale integrated circuits. The product treated is a large-scale T²L array designed for space applications. The concept used is that of qualification of product by evaluation of the basic processing used in fabricating the product, providing an insight into its potential reliability. Test vehicles are described which enable evaluation of device characteristics, surface condition, and various parameters of the two-level metallization system used. Evaluation of these test vehicles is performed on a lot-qualification basis, with the lot consisting of one wafer. Assembled test vehicles are evaluated by high-temperature stress at 300 C for short time durations. Stressing at these temperatures provides a rapid method of evaluation and permits a go/no-go decision to be made on the wafer lot in a timely fashion.
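    The rapid evaluation at 300 C relies on thermal acceleration of failure mechanisms. The standard way to quantify this, not spelled out in the abstract, is the Arrhenius acceleration factor; the activation energy and use temperature below are hypothetical illustration values.

    ```python
    # Arrhenius acceleration factor between a use temperature and a
    # stress temperature; hours at stress then "cover" AF-times as many
    # hours at use conditions for that mechanism.
    import math

    BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_af(ea_ev, t_use_c, t_stress_c):
        """Acceleration factor of stress over use temperature (Celsius)."""
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

    # Hypothetical: Ea = 0.7 eV, 85 C use vs. the 300 C stress in the text
    print(f"acceleration factor: {arrhenius_af(0.7, 85.0, 300.0):.0f}x")
    ```
    
    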

  12. One-year test-retest reliability of intrinsic connectivity network fMRI in older adults

    PubMed Central

    Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B.; Kramer, Joel H.; Seeley, William W.

    2014-01-01

    “Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491

  13. Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2009-01-01

    A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…

  14. Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize its reliability and security. Due to the complexity of the verification process, no currently available techniques can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the usage of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: field programmable gate array (FPGA), triple modular redundancy (TMR), verification, trust, reliability.
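    For context on what insertion must preserve, the functional core TMR adds is a majority voter, and the textbook reliability gain assumes a perfect voter. A sketch (figures hypothetical, not from the paper):

    ```python
    # Bitwise majority voter over three redundant copies, plus the
    # classic TMR reliability formula R_tmr = 3r^2 - 2r^3 (perfect voter,
    # independent channel failures assumed).
    def tmr_vote(a, b, c):
        """Majority of three values, computed bit by bit."""
        return (a & b) | (a & c) | (b & c)

    def tmr_reliability(r):
        """Probability that at least two of three channels survive."""
        return 3.0 * r * r - 2.0 * r ** 3

    # One corrupted copy is outvoted:
    print(bin(tmr_vote(0b1010, 0b1010, 0b0110)))
    print(f"channel r = 0.90 -> TMR R = {tmr_reliability(0.9):.3f}")
    ```

    Note that TMR only improves reliability for r > 0.5 over the mission time, and the voter itself becomes the single point the formula ignores, which is one reason topology-correct insertion matters.
    
    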

  15. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity-use behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, an IBDR dispatch model considering the customer's comprehensive assessment and a customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on a modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
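    The Monte Carlo step can be sketched simply: sample two-state outages of the microgrid's generation, count states where capacity cannot cover load, and compare the estimate with and without an IBDR load curtailment. All capacities, availabilities and load figures below are hypothetical, and this is a generic sketch rather than the paper's full model.

    ```python
    # Monte Carlo loss-of-load probability for a toy microgrid;
    # dr_shave models load curtailed through incentive-based DR.
    import random

    def lolp(gen_caps, gen_avail, load, dr_shave=0.0, trials=20000, seed=1):
        rng = random.Random(seed)
        net_load = load - dr_shave
        short = 0
        for _ in range(trials):
            # sample each unit up/down according to its availability
            cap = sum(c for c, a in zip(gen_caps, gen_avail)
                      if rng.random() < a)
            if cap < net_load:
                short += 1
        return short / trials

    caps = [40.0, 40.0, 30.0]    # kW, hypothetical DG/ESS ratings
    avail = [0.95, 0.95, 0.90]   # unit availabilities
    print(f"LOLP without IBDR:        {lolp(caps, avail, 80.0):.4f}")
    print(f"LOLP with 10 kW curtailed: {lolp(caps, avail, 80.0, dr_shave=10.0):.4f}")
    ```

    With a fixed seed the two runs see identical outage samples, so the curtailed-load estimate can only be lower or equal, mirroring the paper's conclusion that IBDR improves reliability.
    
    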

  16. A study on the real-time reliability of on-board equipment of train control system

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shiwei

    2018-05-01

    Real-time reliability evaluation is conducive to establishing a condition-based maintenance system that guarantees continuous train operation. According to the inherent characteristics of the on-board equipment, this paper defines the scope of reliability evaluation for on-board equipment and provides an evaluation index of real-time reliability. From the perspectives of methodology and practical application, the real-time reliability of the on-board equipment is discussed in detail, and a method of evaluating the real-time reliability of on-board equipment at the component level based on a Hidden Markov Model (HMM) is proposed. In this method, performance degradation data are used directly to achieve accurate perception of the hidden state transition process of the on-board equipment, yielding a better description of the equipment's real-time reliability.
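    The HMM filtering idea can be sketched with the forward algorithm: condition observations are turned into filtered probabilities over hidden degradation states, and a real-time reliability index can then be read off as the probability of not being in the failed state. The three-state model and all numbers below are hypothetical, not the paper's.

    ```python
    # Forward algorithm over a hypothetical degradation HMM.
    N = 3                                  # 0 healthy, 1 degraded, 2 failed
    pi = [1.0, 0.0, 0.0]                   # initial state distribution
    A = [[0.90, 0.09, 0.01],               # transition matrix (degrade-only)
         [0.00, 0.90, 0.10],
         [0.00, 0.00, 1.00]]
    B = [[0.95, 0.05],                     # P(observation | state);
         [0.40, 0.60],                     # obs 0 = normal, 1 = anomalous
         [0.05, 0.95]]

    def forward_state_probs(observations):
        """Filtered state distribution after each observation."""
        alpha = [pi[i] * B[i][observations[0]] for i in range(N)]
        s = sum(alpha)
        alpha = [a / s for a in alpha]
        history = [alpha[:]]
        for o in observations[1:]:
            alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(N))
                     for j in range(N)]
            s = sum(alpha)
            alpha = [a / s for a in alpha]
            history.append(alpha[:])
        return history

    hist = forward_state_probs([0, 1, 1, 1])
    print(f"real-time reliability (1 - P(failed)): {1.0 - hist[-1][2]:.3f}")
    ```
    
    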

  17. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, integrated circuit design, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among components placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly or circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code, based on the proposed method, to compute the reliability of linear and circular systems with a great number of components.
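    For the linear case, a standard way to compute this reliability (a baseline, not the paper's new method) is dynamic programming over the length of the trailing run of failed components, assuming i.i.d. components:

    ```python
    # Reliability of a linear consecutive k-out-of-n:F system: the
    # system fails iff at least k consecutive components fail.
    def consecutive_kn_f_reliability(n, k, p):
        q = 1.0 - p
        # state[r] = P(system still working, trailing failure run = r)
        state = [0.0] * k
        state[0] = 1.0
        for _ in range(n):
            new = [0.0] * k
            new[0] = sum(state) * p        # component works: run resets
            for r in range(1, k):
                new[r] = state[r - 1] * q  # run grows but stays below k
            state = new
        return sum(state)

    # n = 2, k = 2: the system fails only if both components fail
    print(consecutive_kn_f_reliability(2, 2, 0.5))  # 0.75
    ```
    
    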

  18. Constructing the 'Best' Reliability Data for the Job - Developing Generic Reliability Data from Alternative Sources Early in a Product's Development Phase

    NASA Technical Reports Server (NTRS)

    Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.

    2016-01-01

    Reliability practitioners advocate getting reliability involved early in the product development process. However, when assigned to estimate or assess the potential reliability of a product or system early in the design and development phase, they face a lack of reasonable models or methods for useful reliability estimation. Developing product-specific data is costly and time consuming, so analysts instead rely on available data to assess reliability. Finding data relevant to the specific use and environment of any project is difficult, if not impossible; analysts therefore attempt to develop the "best" composite analog data to support the assessments. Industries, consortia, and vendors across many areas have spent decades collecting, analyzing, and tabulating the fielded reliability performance of items and components in terms of observed failures and operational use. This resource provides a huge compendium of information for potential use, but it can also be compartmentalized by industry and difficult to discover, access, or manipulate. One approach reviews these existing data sources, identifies the available information on similar equipment, and uses that generic data to derive a composite analog. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes affect which data are incorporated into the composite. Once developed, this composite analog data provides a better representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish predicted reliability data points. It also establishes a baseline prior that may be updated with test data or observed operational constraints and failures, i.e., using Bayesian techniques.
This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents and data characteristics. It then presents methods for combining failure information from different sources and mathematical use of this data in early reliability estimation and analyses.
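
The Bayesian updating of a composite analog prior mentioned above can be sketched with the conjugate gamma-Poisson model often used for failure rates; the prior strength and the observed data below are illustrative, not drawn from any actual data source.

```python
# Composite analog prior: mean rate 2e-6 failures/hour, held weakly
# (half a failure's worth of prior evidence). All values are hypothetical.
prior_mean = 2e-6
prior_alpha = 0.5                      # prior "failure count"
prior_beta = prior_alpha / prior_mean  # equivalent prior exposure, hours

# Observed test/operational experience for the new component
failures = 1
hours = 1.0e6

# Conjugate gamma-Poisson update of the failure rate
post_alpha = prior_alpha + failures
post_beta = prior_beta + hours
post_mean = post_alpha / post_beta
print(f"posterior mean failure rate: {post_mean:.3e} per hour")
```

The posterior mean sits between the analog prior and the raw observed rate, weighted by their respective exposures, which is exactly the role the baseline prior plays in the tutorial's framework.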

  19. Test Reliability at the Individual Level

    PubMed Central

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107

  20. Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth

    2016-01-01

    If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.

  1. Empirical methods for assessing meaningful neuropsychological change following epilepsy surgery.

    PubMed

    Sawrie, S M; Chelune, G J; Naugle, R I; Lüders, H O

    1996-11-01

    Traditional methods for assessing the neurocognitive effects of epilepsy surgery are confounded by practice effects, test-retest reliability issues, and regression to the mean. This study employs 2 methods for assessing individual change that allow direct comparison of changes across both individuals and test measures. Fifty-one medically intractable epilepsy patients completed a comprehensive neuropsychological battery twice, approximately 8 months apart, prior to any invasive monitoring or surgical intervention. First, a Reliable Change (RC) index score was computed for each test score to take into account the reliability of that measure, and a cutoff score was empirically derived to establish the limits of statistically reliable change. These indices were subsequently adjusted for expected practice effects. The second approach used a regression technique to establish "change norms" along a common metric that models both expected practice effects and regression to the mean. The RC index scores provide the clinician with a statistical means of determining whether a patient's retest performance is "significantly" changed from baseline. The regression norms for change allow the clinician to evaluate the magnitude of a given patient's change on 1 or more variables along a common metric that takes into account the reliability and stability of each test measure. Case data illustrate how these methods provide an empirically grounded means for evaluating neurocognitive outcomes following medical interventions such as epilepsy surgery.
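
The Reliable Change computation can be sketched as follows, assuming the common practice-adjusted RC index form (SEM derived from the baseline SD and test-retest reliability); the exact variant and cutoffs used in the paper may differ, and the scores below are hypothetical.

```python
import math

def reliable_change_index(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
    """Practice-adjusted Reliable Change index: the retest change, minus the
    expected practice effect, scaled by the standard error of the difference."""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)   # standard error of measurement
    se_diff = math.sqrt(2.0 * sem ** 2)         # SE of a test-retest difference
    return (x2 - x1 - practice_effect) / se_diff

# Hypothetical case: memory score 100 -> 88, SD = 15, r = .80, +3 expected practice
rci = reliable_change_index(100, 88, 15, 0.80, practice_effect=3.0)
print(round(rci, 2))  # → -1.58; beyond ±1.645 would suggest reliable change at ~90% CI
```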

  2. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    PubMed

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake by estimating plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease, who were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals over a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between the two methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered in the analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement comprised 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. Successful implementation and use of the method depend upon adequate training and motivation of the nurses and care staff involved.

  3. Evaluating the reliability, validity, acceptability, and practicality of SMS text messaging as a tool to collect research data: results from the Feeding Your Baby project

    PubMed Central

    Donnan, Peter T; Symon, Andrew G; Kellett, Gillian; Monteith-Hodge, Ewa; Rauchhaus, Petra; Wyatt, Jeremy C

    2012-01-01

    Objective To test the reliability, validity, acceptability, and practicality of short message service (SMS) messaging for collection of research data. Materials and methods The studies were carried out in a cohort of recently delivered women in Tayside, Scotland, UK, who were asked about their current infant feeding method and future feeding plans. Reliability was assessed by comparison of their responses to two SMS messages sent 1 day apart. Validity was assessed by comparison of their responses to text questions and the same question administered by phone 1 day later, by comparison with the same data collected from other sources, and by correlation with other related measures. Acceptability was evaluated using quantitative and qualitative questions, and practicality by analysis of a researcher log. Results Reliability of the factual SMS message gave perfect agreement. Reliabilities for the numerical question were reasonable, with κ between 0.76 (95% CI 0.56 to 0.96) and 0.80 (95% CI 0.59 to 1.00). Validity for data compared with that collected by phone within 24 h (κ = 0.92 (95% CI 0.84 to 1.00)) and with health visitor data (κ = 0.85 (95% CI 0.73 to 0.97)) was excellent. Correlation validity between the text responses and other related demographic and clinical measures was as expected. Participants found the method a convenient and acceptable way of providing data. For researchers, SMS text messaging provided an easy and functional method of gathering a large volume of data. Conclusion In this sample and for these questions, SMS was a reliable and valid method for capturing research data. PMID:22539081
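
The κ agreement statistics reported above can be reproduced with a small Cohen's kappa routine; the agreement table below is hypothetical, not the study's data.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table
    (rows: method 1 categories, columns: method 2 categories)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n          # observed agreement
    p_exp = sum(                                            # chance agreement
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical 2x2 table for a yes/no feeding question (SMS vs. phone)
table = [[40, 5],
         [3, 52]]
print(round(cohens_kappa(table), 3))  # → 0.838
```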

  4. A reliability analysis framework with Monte Carlo simulation for weld structure of crane's beam

    NASA Astrophysics Data System (ADS)

    Wang, Kefei; Xu, Hongwei; Qu, Fuzheng; Wang, Xin; Shi, Yanjun

    2018-04-01

    The reliability of a crane is at the core of the product's competitiveness in engineering. This paper uses the Monte Carlo method to analyze the reliability of the welded metal structure of a bridge crane whose limit state function has an explicit mathematical expression. We then obtained the minimum reliable weld leg height, under different coefficients of variation, for the welds between the cover plate and the web plate of the main beam. This paper provides a new idea and a reference for improving the inherent reliability of cranes.
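
A minimal Monte Carlo sketch of a limit-state reliability check in the spirit of this record; the stress distributions and parameters below are illustrative, not the authors' actual crane model.

```python
import random

random.seed(7)

def failure_probability(n_samples=200_000):
    """Monte Carlo estimate of P(g < 0) for the limit state
    g = allowable stress - acting stress (both hypothetical normals, MPa)."""
    failures = 0
    for _ in range(n_samples):
        strength = random.gauss(235.0, 23.5)   # allowable weld stress
        load = random.gauss(160.0, 24.0)       # acting stress in the weld
        if strength - load < 0.0:              # limit state violated
            failures += 1
    return failures / n_samples

pf = failure_probability()
print(f"estimated failure probability: {pf:.4f}")
```

Reliability is then simply 1 - pf; in a design loop one would increase the weld leg height (raising the strength mean) until pf drops below the target.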

  5. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective design approach for the metallic structures of cranes.

  6. Graph modeling systems and methods

    DOEpatents

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
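
The "critical point of failure" determination can be illustrated by finding articulation vertices, i.e., single vertices whose removal disconnects the graph; the small power-grid-like topology below is hypothetical, not from the patent.

```python
def articulation_points(graph):
    """Tarjan-style DFS returning the vertices whose removal
    disconnects an undirected graph (adjacency-list dict)."""
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)  # no back-edge from v's subtree above u
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            points.add(u)          # root with multiple DFS children

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return points

grid = {  # hypothetical topology: plant feeds a load through substations
    "plant": ["sub1"],
    "sub1": ["plant", "sub2", "sub3"],
    "sub2": ["sub1", "sub3"],
    "sub3": ["sub1", "sub2", "load"],
    "load": ["sub3"],
}
print(sorted(articulation_points(grid)))  # → ['sub1', 'sub3']
```

Here sub2 is not critical because sub1 and sub3 are also directly connected, while losing sub1 strands the plant and losing sub3 strands the load.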

  7. National audit of continence care: laying the foundation.

    PubMed

    Mian, Sarah; Wagg, Adrian; Irwin, Penny; Lowe, Derek; Potter, Jonathan; Pearson, Michael

    2005-12-01

    National audit provides a basis for establishing performance against national standards, benchmarking against other service providers, and improving standards of care. For effective audit, clinical indicators are required that are valid, reliable, and feasible to apply. This study describes the methods used to develop and test clinical indicators of continence care, with regard to validity, feasibility, and reliability, in preparation for a national audit. A multidisciplinary working group developed clinical indicators that measured the structure, process, and outcome of care as well as case-mix variables. Literature searching, consensus workshops, and a Delphi process were used to develop the indicators. The indicators were tested in 15 secondary care sites, 15 primary care sites, and 15 long-term care settings. The development process produced indicators that received a high degree of consensus within the Delphi process. Testing of the indicators demonstrated an internal reliability of 0.7 and an external reliability of 0.6. Data collection required significant investment in terms of staff time and training. The method produced indicators that achieved a high degree of acceptance from health care professionals. The reliability of data collection was high for this audit and was similar to the level seen in other successful national audits. Data collection for the indicators was feasible; however, issues of time and staffing were identified as limitations. The study has described a systematic method for developing clinical indicators for national audit. The indicators proved robust and reliable in primary and secondary care as well as long-term care settings.

  8. How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.

    PubMed

    Gray, Kurt

    2017-09-01

    Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests psychological tension between them: The more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org ). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.

  9. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  10. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  11. A human reliability based usability evaluation method for safety-critical software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at a usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach that standardizes the priority weighting of the UEP in an effort to improve the method's reliability. (authors)

  12. Comparing capacity value estimation techniques for photovoltaic solar power

    DOE PAGES

    Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul

    2012-09-28

    In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare robust but data- and computationally-intensive reliability-based estimation techniques with simpler approximation methods, and show that, if implemented properly, these techniques provide accurate approximations of the reliability-based methods. Overall, methods based on the weighted capacity factor of the plant provide the most accurate estimate. We also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
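
One of the simple approximations compared above can be sketched as the plant's average capacity factor over the highest-load hours; the load and output series below are made up, and real studies use full hourly-year data.

```python
def top_hour_capacity_factor(load, pv_output, pv_capacity, n_top=3):
    """Approximate capacity value: average PV capacity factor over the
    n_top highest-load hours of the period."""
    top_hours = sorted(range(len(load)), key=lambda h: load[h],
                       reverse=True)[:n_top]
    return sum(pv_output[h] for h in top_hours) / (n_top * pv_capacity)

load = [50, 80, 95, 120, 110, 70]      # MW system load, by hour (hypothetical)
pv_output = [0, 20, 45, 60, 40, 5]     # MW output of a 100 MW PV plant
print(top_hour_capacity_factor(load, pv_output, 100.0))
```

Because PV output is correlated with afternoon peak load, the top-hour capacity factor typically exceeds the plant's all-hours capacity factor, which is why this approximation tracks reliability-based estimates reasonably well.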

  13. Training and Maintaining System-Wide Reliability in Outcome Management.

    PubMed

    Barwick, Melanie A; Urajnik, Diana J; Moore, Julia E

    2014-01-01

    The Child and Adolescent Functional Assessment Scale (CAFAS) is widely used for outcome management, for providing real-time client- and program-level data, and for the monitoring of evidence-based practices. Methods of reliability training and the assessment of rater drift are critical for service decision-making within organizations and systems of care. We assessed two approaches to CAFAS training: external technical assistance and internal technical assistance. To this end, we sampled 315 practitioners trained by the external technical assistance approach from 2,344 Ontario practitioners who had achieved reliability on the CAFAS. To assess the internal technical assistance approach as a reliable alternative training method, 140 practitioners trained internally were selected from the same pool of certified raters. Reliabilities were high for practitioners trained by both the external and internal technical assistance approaches (.909-.995 and .915-.997, respectively). One- and three-year estimates showed some drift on several scales. High and consistent reliabilities over time and across training methods have implications for the CAFAS training of behavioral health care practitioners and for the maintenance of the CAFAS as a global outcome management tool in systems of care.

  14. Limit states and reliability-based pipeline design. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.

    1997-06-01

    This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure modes and combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads acting on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. That work is done during the development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3), and provides results of the reliability analyses undertaken to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice and structured so that it can easily be converted to a limit states design code format in the future. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where they have been included with the text to facilitate understanding of the LSD method.

  15. Long-term Mechanical Circulatory Support System reliability recommendation by the National Clinical Trial Initiative subcommittee.

    PubMed

    Lee, James

    2009-01-01

    The Long-Term Mechanical Circulatory Support (MCS) System Reliability Recommendation was published in the American Society for Artificial Internal Organs (ASAIO) Journal and the Annals of Thoracic Surgery in 1998. At that time, it was stated that the document would be periodically reviewed to assess its timeliness and appropriateness within 5 years. Given the wealth of clinical experience in MCS systems, a new recommendation has been drafted by consensus of a group of representatives from the medical community, academia, industry, and government. The new recommendation describes a reliability test methodology and provides detailed reliability recommendations. In addition, the new recommendation provides additional information and clinical data in appendices that are intended to assist the reliability test engineer in the development of a reliability test that is expected to give improved predictions of clinical reliability compared with past test methods. The appendices are available for download at the ASAIO journal web site at www.asaiojournal.com.

  16. Integrating reliability and maintainability into a concurrent engineering environment

    NASA Astrophysics Data System (ADS)

    Phillips, Clifton B.; Peterson, Robert R.

    1993-02-01

    This paper describes the results of a reliability and maintainability study conducted at the University of California, San Diego and supported by private industry. Private industry thought the study was important and provided the university access to innovative tools under cooperative agreement. The current capability of reliability and maintainability tools and how they fit into the design process is investigated. The evolution of design methodologies leading up to today's capability is reviewed for ways to enhance the design process while keeping cost under control. A method for measuring the consequences of reliability and maintainability policy for design configurations in an electronic environment is provided. The interaction of selected modern computer tool sets is described for reliability, maintainability, operations, and other elements of the engineering design process. These tools provide a robust system evaluation capability that brings life cycle performance improvement information to engineers and their managers before systems are deployed, and allow them to monitor and track performance while it is in operation.

  17. The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.

    PubMed

    O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard

    2010-08-01

    The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.
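
The day-to-day RMSE figures reported above can be reproduced with a short routine; the paired foot-length measurements below are hypothetical, chosen only to fall in the study's 2.3-7.4 mm between-method range.

```python
import math

def rmse(a, b):
    """Root mean square error between paired measurements (mm)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical foot-length measures (mm): 3-FIS vs. handheld anthropometer
fis = [254.1, 231.8, 247.0, 260.3]
hand = [251.9, 234.0, 244.6, 263.1]
print(round(rmse(fis, hand), 2))  # → 2.41
```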

  18. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.

  19. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies although it has been intensively addressed in the past decades. New Method A spatially regularized support vector machine (SVM) technique was developed for the reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for the general brain function mapping where spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels in a feature space. Results The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627

  20. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and imprecise estimation of the degradation ratio remain, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian approach is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with the estimates updated iteratively. Third, lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
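
The Wiener-process degradation model underlying this family of methods can be sketched as follows; the drift, diffusion, threshold, and simple moment estimators below are illustrative, and the paper's actual Bayesian updating scheme is not reproduced.

```python
import random
import statistics

random.seed(1)

# Simulate a degradation path X(t) = mu*t + sigma*W(t) on a fixed time grid
mu_true, sigma_true, dt, steps = 0.5, 0.2, 0.1, 2_000
path = [0.0]
for _ in range(steps):
    path.append(path[-1] + mu_true * dt
                + sigma_true * random.gauss(0.0, dt ** 0.5))

# Moment estimates of drift and diffusion from the observed increments
incs = [b - a for a, b in zip(path, path[1:])]
mu_hat = statistics.mean(incs) / dt
sigma2_hat = statistics.variance(incs) / dt

# For a Wiener process with positive drift, the first-passage time to a
# failure threshold D is inverse-Gaussian with mean D / mu
threshold = 150.0
mean_life_hat = threshold / mu_hat
print(round(mu_hat, 3), round(mean_life_hat, 1))
```

In an ADT analysis, mu would additionally depend on the stress level through the acceleration model, and the estimates would be extrapolated back to normal operating stress.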

  1. Evaluating the reliability, validity, acceptability, and practicality of SMS text messaging as a tool to collect research data: results from the Feeding Your Baby project.

    PubMed

    Whitford, Heather M; Donnan, Peter T; Symon, Andrew G; Kellett, Gillian; Monteith-Hodge, Ewa; Rauchhaus, Petra; Wyatt, Jeremy C

    2012-01-01

    To test the reliability, validity, acceptability, and practicality of short message service (SMS) messaging for the collection of research data. The studies were carried out in a cohort of recently delivered women in Tayside, Scotland, UK, who were asked about their current infant feeding method and future feeding plans. Reliability was assessed by comparison of their responses to two SMS messages sent 1 day apart. Validity was assessed by comparison of their responses to text questions and the same question administered by phone 1 day later, by comparison with the same data collected from other sources, and by correlation with other related measures. Acceptability was evaluated using quantitative and qualitative questions, and practicality by analysis of a researcher log. Reliability of the factual SMS question showed perfect agreement. Reliabilities for the numerical question were reasonable, with κ between 0.76 (95% CI 0.56 to 0.96) and 0.80 (95% CI 0.59 to 1.00). Validity for data compared with that collected by phone within 24 h (κ = 0.92 (95% CI 0.84 to 1.00)) and with health visitor data (κ = 0.85 (95% CI 0.73 to 0.97)) was excellent. Correlation validity between the text responses and other related demographic and clinical measures was as expected. Participants found the method a convenient and acceptable way of providing data. For researchers, SMS text messaging provided an easy and functional method of gathering a large volume of data. In this sample and for these questions, SMS was a reliable and valid method for capturing research data.
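    The κ values reported above are Cohen's kappa, i.e. chance-corrected agreement between two sets of paired ratings. A minimal sketch of the computation (the example labels are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings1)
    po = sum(a == b for a, b in zip(ratings1, ratings2)) / n  # observed agreement
    c1, c2 = Counter(ratings1), Counter(ratings2)
    # chance agreement: product of the raters' marginal proportions, summed over labels
    pe = sum(c1[label] * c2[label] for label in c1) / n**2
    return (po - pe) / (1 - pe)

# e.g. feeding method reported by SMS vs. by phone for four hypothetical respondents
sms   = ["breast", "breast", "formula", "formula"]
phone = ["breast", "formula", "formula", "formula"]
print(cohens_kappa(sms, phone))  # 0.5
```

    Here observed agreement is 3/4 and chance agreement is 1/2, giving κ = 0.5; perfect agreement gives κ = 1.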

  2. Do McKinnon lists provide reliable data in bird species frequency? A comparison with transect-based data

    NASA Astrophysics Data System (ADS)

    Cento, Michele; Scrocca, Roberto; Coppola, Michele; Rossi, Maurizio; Di Giuseppe, Riccardo; Battisti, Corrado; Luiselli, Luca; Amori, Giovanni

    2018-05-01

    Although occurrence-based listing methods can provide reliable lists of species composition for a site, the reliability of this method for providing more detailed information about species frequency (and abundance) has rarely been tested. In this paper, we compared the species frequencies obtained for the same set of species-rich sites (wetlands of central Italy) by two different methods: McKinnon lists and line transects. In all sites we observed: (i) rapid accumulation curves of line-transect abundance frequencies toward the asymptote represented by the maximum value of McKinnon occurrence frequency; (ii) a large number of species having a low frequency with the line-transect method but showing a wide range of variation in the frequency obtained from McKinnon lists; (iii) species that were subdominant (>0.02-<0.05) or dominant (>0.05) in line-transect frequency all showed the highest values of McKinnon frequency. McKinnon lists therefore provide only a coarse-grained proxy of species frequency, distinguishing only between common species (having the highest values of McKinnon frequency) and rare species (all the others). Although McKinnon lists have some strengths, the method does not discriminate frequencies within the subset of common species (subdominant and dominant species). Therefore, we suggest a cautionary approach when McKinnon frequencies are used to obtain complex univariate metrics of diversity.

  3. Reliability and Validity of 10 Different Standard Setting Procedures.

    ERIC Educational Resources Information Center

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  4. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    PubMed

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. The analysis of interrater reliability and validity was performed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods, and the presence of systematic bias was not observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.

  5. Normative Data for an Instrumental Assessment of the Upper-Limb Functionality.

    PubMed

    Caimmi, Marco; Guanziroli, Eleonora; Malosio, Matteo; Pedrocchi, Nicola; Vicentini, Federico; Molinari Tosatti, Lorenzo; Molteni, Franco

    2015-01-01

    Upper-limb movement analysis is important to objectively monitor rehabilitation interventions, contributing to improving overall treatment outcomes. Simple, fast, easy-to-use, and applicable methods are required to allow routine functional evaluation of patients with different pathologies and clinical conditions. This paper describes the Reaching and Hand-to-Mouth Evaluation Method, a fast procedure to assess upper-limb motor control and functional ability, providing a set of normative data from 42 healthy subjects of different ages, evaluated for both dominant and nondominant limb motor performance. Sixteen of them were reevaluated after two weeks to perform test-retest reliability analysis. Data were clustered into three subgroups of different ages to test the method's sensitivity to motor control differences. Experimental data show notable test-retest reliability in all tasks. Data from older and younger subjects show significant differences in the measures related to coordination ability, demonstrating the high sensitivity of the method to motor control differences. The presented method, provided with control data from healthy subjects, appears to be a suitable and reliable tool for upper-limb functional assessment in the clinical environment.

  6. Normative Data for an Instrumental Assessment of the Upper-Limb Functionality

    PubMed Central

    Caimmi, Marco; Guanziroli, Eleonora; Malosio, Matteo; Pedrocchi, Nicola; Vicentini, Federico; Molinari Tosatti, Lorenzo; Molteni, Franco

    2015-01-01

    Upper-limb movement analysis is important to objectively monitor rehabilitation interventions, contributing to improving overall treatment outcomes. Simple, fast, easy-to-use, and applicable methods are required to allow routine functional evaluation of patients with different pathologies and clinical conditions. This paper describes the Reaching and Hand-to-Mouth Evaluation Method, a fast procedure to assess upper-limb motor control and functional ability, providing a set of normative data from 42 healthy subjects of different ages, evaluated for both dominant and nondominant limb motor performance. Sixteen of them were reevaluated after two weeks to perform test-retest reliability analysis. Data were clustered into three subgroups of different ages to test the method's sensitivity to motor control differences. Experimental data show notable test-retest reliability in all tasks. Data from older and younger subjects show significant differences in the measures related to coordination ability, demonstrating the high sensitivity of the method to motor control differences. The presented method, provided with control data from healthy subjects, appears to be a suitable and reliable tool for upper-limb functional assessment in the clinical environment. PMID:26539500

  7. Real-Time GNSS-Based Attitude Determination in the Measurement Domain.

    PubMed

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-02-05

    A multi-antenna-based GNSS receiver is capable of providing a high-precision, drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. Traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, so the redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution is also increased, so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with traditional attitude determination methods, the static experimental results show that the proposed method can improve accuracy by at least 0.03° and enhance continuity by up to 18%. The kinematic results show that the proposed method achieves an optimal balance between accuracy and reliability.

  8. Distributed collaborative response surface method for mechanical dynamic assembly reliability design

    NASA Astrophysics Data System (ADS)

    Bai, Guangchen; Fei, Chengwei

    2013-11-01

    Because of the randomness of the many factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to be carried out from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on quadratic response surface functions and verified by the assembly relationship reliability analysis of aeroengine high-pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Comparison of the DCRSM with the traditional response surface method (RSM) and the Monte Carlo method (MCM) shows that the DCRSM is not only able to accomplish computational tasks that are infeasible for the other methods when the number of simulations exceeds 100,000, but its computational precision is basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, the computational efficiency of the DCRSM is about 188 times that of the MCM and 55 times that of the RSM for 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. The proposed research thus provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.
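    The core idea of a response surface method can be sketched in one dimension: fit a cheap quadratic surrogate to a handful of evaluations of an expensive limit-state function, then run the Monte Carlo simulation on the surrogate instead of the expensive model. The limit-state function, design points, and input distribution below are illustrative stand-ins, not the BTRRC model from the paper:

```python
import random

def fit_quadratic(y_m, y_0, y_p):
    """Exact quadratic c0 + c1*x + c2*x^2 through responses at design points x = -1, 0, +1."""
    c0 = y_0
    c1 = (y_p - y_m) / 2.0
    c2 = (y_p + y_m) / 2.0 - y_0
    return lambda x: c0 + c1 * x + c2 * x * x

def g_expensive(x):
    """Stand-in for an expensive limit-state function; failure when g(x) < 0."""
    return 3.0 - x - 0.1 * x * x

# Three evaluations of the expensive model build the cheap surrogate...
surrogate = fit_quadratic(g_expensive(-1.0), g_expensive(0.0), g_expensive(1.0))

# ...and Monte Carlo sampling then runs on the surrogate, not the expensive model.
random.seed(42)
n = 100_000
failures = sum(surrogate(random.gauss(0.0, 1.0)) < 0.0 for _ in range(n))
pf = failures / n
print(f"estimated failure probability: {pf:.4f}")
```

    The distributed collaborative variant in the paper builds such surrogates per discipline/component and combines them, which is what makes very large simulation counts affordable.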

  9. Application of the differential decay-curve method to γ-γ fast-timing lifetime measurements

    NASA Astrophysics Data System (ADS)

    Petkov, P.; Régis, J.-M.; Dewald, A.; Kisyov, S.

    2016-10-01

    A new procedure for the analysis of delayed-coincidence lifetime experiments, focused on the fast-timing case, is proposed following the approach of the differential decay-curve method. Examples of application of the procedure to experimental data reveal its reliability for lifetimes even in the sub-nanosecond range. The procedure is expected to improve both precision/reliability and the treatment of systematic errors and scarce data, as well as to provide an option for cross-checks with the results obtained by other analysis methods.

  10. Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability

    NASA Technical Reports Server (NTRS)

    Kolodziej, P.; Rasky, D. J.

    2002-01-01

    In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard, and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions where reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system-level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results from the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability on the acreage surface of reusable launch vehicles.

  11. Validity and Reliability of Visual Analog Scaling for Assessment of Hypernasality and Audible Nasal Emission in Children With Repaired Cleft Palate.

    PubMed

    Baylis, Adriane; Chapman, Kathy; Whitehill, Tara L; The Americleft Speech Group

    2015-11-01

    To investigate the validity and reliability of multiple listener judgments of hypernasality and audible nasal emission, in children with repaired cleft palate, using visual analog scaling (VAS) and equal-appearing interval (EAI) scaling. Prospective comparative study of multiple listener ratings of hypernasality and audible nasal emission. Multisite institutional. Five trained and experienced speech-language pathologist listeners from the Americleft Speech Project. Average VAS and EAI ratings of hypernasality and audible nasal emission/turbulence for 12 video-recorded speech samples from the Americleft Speech Project. Intrarater and interrater reliability was computed, as well as linear and polynomial models of best fit. Intrarater and interrater reliability was acceptable for both rating methods; however, reliability was higher for VAS as compared to EAI ratings. When VAS ratings were plotted against EAI ratings, results revealed a stronger curvilinear relationship. The results of this study provide additional evidence that alternate rating methods such as VAS may offer improved validity and reliability over EAI ratings of speech. VAS should be considered a viable method for rating hypernasality and nasal emission in speech in children with repaired cleft palate.

  12. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
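    The proportional covariate model described above scales a baseline failure-rate function by a covariate function built from condition-monitoring features, λ(t) = φ(z(t))·λ₀(t), with reliability R(t) = exp(−∫₀ᵗ λ). A hedged sketch with made-up baseline, link, and feature trajectories (not the paper's wavelet-packet vibration features):

```python
import math

def reliability_curve(baseline_rate, covariate_fn, feature, t_grid):
    """R(t) = exp(-cumulative hazard), with hazard
    lambda(t) = covariate_fn(feature(t)) * baseline_rate(t); trapezoidal integration."""
    lam = [covariate_fn(feature(t)) * baseline_rate(t) for t in t_grid]
    R, cum = [1.0], 0.0
    for i in range(1, len(t_grid)):
        cum += 0.5 * (lam[i - 1] + lam[i]) * (t_grid[i] - t_grid[i - 1])
        R.append(math.exp(-cum))
    return R

# Illustrative choices: linearly rising baseline failure rate, exponential covariate
# link, and a vibration-severity feature that grows with tool wear.
baseline = lambda t: 0.001 * t
link = lambda z: math.exp(0.8 * z)
feature = lambda t: 0.05 * t

grid = [i * 0.5 for i in range(101)]  # 0 .. 50 hours of cutting time
R = reliability_curve(baseline, link, feature, grid)
print(f"R(25 h) = {R[50]:.3f}, R(50 h) = {R[100]:.3f}")
```

    In the paper's setting, the covariate function is calibrated from a small historical sample, and the monitored features then update the failure rate of the specific tool being assessed rather than a population average.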

  13. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false call (POF) while keeping the flaw sizes in the set as small as possible.
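    The binomial point-estimate logic can be made concrete: for an n-flaw demonstration that tolerates a fixed number of misses, the probability of passing (PPD) is a binomial tail in the true POD. The sketch below uses the common 29-of-29 criterion, which corresponds to demonstrating 90% POD at roughly 95% confidence; it illustrates the standard binomial reasoning, not NASA's qualification procedure itself:

```python
from math import comb

def prob_pass_demo(true_pod, n=29, max_misses=0):
    """Probability of passing an n-flaw demonstration that tolerates up to
    `max_misses` missed detections, under a binomial model with independent trials."""
    return sum(comb(n, k) * (1.0 - true_pod) ** k * true_pod ** (n - k)
               for k in range(max_misses + 1))

# The 29/29 criterion demonstrates 90% POD at ~95% confidence: if the true POD
# were only 0.90, the demonstration would pass less than 5% of the time.
print(f"PPD at POD 0.90:  {prob_pass_demo(0.90):.4f}")
print(f"PPD at POD 0.98:  {prob_pass_demo(0.98):.4f}")
print(f"PPD at POD 0.995: {prob_pass_demo(0.995):.4f}")
```

    The optimization trade-off in the abstract is visible here: even a rather good inspection (true POD 0.98) fails a 29/29 demonstration a substantial fraction of the time, which motivates tuning n and the pass criterion for acceptable PPD and POF.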

  14. Clinical methods to quantify trunk mobility in an elite male surfing population.

    PubMed

    Furness, James; Climstein, Mike; Sheppard, Jeremy M; Abbott, Allan; Hing, Wayne

    2016-05-01

    Thoracic mobility in the sagittal and horizontal planes is a key requirement in the sport of surfing; however, to date the normal values of these movements have not been quantified in a surfing population. To develop a reliable method to quantify thoracic mobility in the sagittal plane, to assess the reliability of an existing thoracic rotation method, and to quantify thoracic mobility in an elite male surfing population. Clinical measurement, reliability, and comparative study. A total of 30 subjects were used to determine the reliability component; 15 elite surfers were used as part of a comparative analysis with age- and gender-matched controls. Intraclass correlation coefficient values ranged between 0.95 and 0.99 (95% CI 0.89-0.99) for both thoracic methods. The elite surfing group had significantly (p ≤ 0.05) greater rotation than the comparative group (mean rotation 63.57° versus 40.80°, respectively). This study has illustrated reliable methods to assess the thoracic spine in the sagittal plane and thoracic rotation. It has also quantified ROM in a surfing cohort, identifying thoracic rotation as a key movement. This information may provide clinicians, coaches, and athletic trainers with important information regarding the maintenance of adequate thoracic rotation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…

  16. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment

    PubMed Central

    Seo, Aria; Kim, Yeichang

    2017-01-01

    As the sharing economy market grows, the number of users is also increasing, but many problems arise in terms of reliability between providers and users in the provision of services. Existing methods provide sharing economy systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a sharing economy environment to solve these existing problems. In order to implement a system that can measure and control users' situations in a sharing economy environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economy environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709

  17. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.

    PubMed

    Seo, Aria; Jeong, Junho; Kim, Yeichang

    2017-08-13

    As the sharing economy market grows, the number of users is also increasing, but many problems arise in terms of reliability between providers and users in the provision of services. Existing methods provide sharing economy systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a sharing economy environment to solve these existing problems. In order to implement a system that can measure and control users' situations in a sharing economy environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economy environment is implemented through analysis of the factors to consider when constructing a CPS.

  18. The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.

    2010-01-01

    HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions with their corresponding failures in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, equipment design, and the need for automation.

  19. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computational efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on fuzzy neural network regression (FR), called DCFRM, is proposed, integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure, and strain failure) was investigated with the proposed method, considering fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that the DCFRM reshapes the probabilistic analysis of multi-failure structures and improves computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and method of mechanical reliability design.

  20. An alternative to the balance error scoring system: using a low-cost balance board to improve the validity/reliability of sports-related concussion balance testing.

    PubMed

    Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J

    2014-05-01

    Recent guidelines advocate sports medicine professionals to use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Criterion validity testing relative to a gold standard and 7 day test-retest reliability. University biomechanics laboratory. Thirty healthy young adults. Balance ability was assessed on 2 days separated by 1 week using (1) a gold standard measure (ie, scientific grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center of pressure path length and BESS scores were determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
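    Test-retest intraclass correlations of the kind reported above are commonly computed as ICC(2,1) (two-way random effects, absolute agreement, single measurement); the specific ICC form used in this study is not stated, and the scores below are made up. A minimal sketch:

```python
def icc_2_1(sessions):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `sessions` holds k lists (one per session/rater), each with one score per subject."""
    k, n = len(sessions), len(sessions[0])
    grand = sum(map(sum, sessions)) / (k * n)
    row_means = [sum(s[i] for s in sessions) / k for i in range(n)]  # per subject
    col_means = [sum(s) / n for s in sessions]                       # per session
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)     # between subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)     # between sessions
    mse = sum((sessions[j][i] - row_means[i] - col_means[j] + grand) ** 2
              for j in range(k) for i in range(n)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical balance scores for six subjects on two test days, one week apart.
day1 = [12.0, 15.5, 9.8, 20.1, 17.3, 11.2]
day2 = [12.4, 15.1, 10.3, 19.6, 17.8, 11.0]
print(f"test-retest ICC = {icc_2_1([day1, day2]):.3f}")
```

    Absolute-agreement ICC penalizes a systematic shift between sessions (via the between-session mean square), which is the appropriate choice when the two test days are meant to be interchangeable.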

  1. Use of the smartphone for end vertebra selection in scoliosis.

    PubMed

    Pepe, Murad; Kocadal, Onur; Iyigun, Abdullah; Gunes, Zafer; Aksahin, Ertugrul; Aktekin, Cem Nuri

    2017-03-01

    The aim of our study was to develop a smartphone-aided end vertebra selection method and to investigate its effectiveness in Cobb angle measurement. Twenty-nine adolescent idiopathic scoliosis patients' pre-operative posteroanterior scoliosis radiographs were used for end vertebra selection and Cobb angle measurement by the standard method and the smartphone-aided method. Measurements were performed by 7 examiners. The intraclass correlation coefficient was used to analyze selection and measurement reliability. Summary statistics of variance calculations were used to provide 95% prediction limits for the error in Cobb angle measurements. A paired 2-tailed t test was used to analyze end vertebra selection differences. The mean absolute Cobb angle difference was 3.6° for the manual method and 1.9° for the smartphone-aided method. Both intraobserver and interobserver reliability were found to be excellent in the manual and smartphone sets for Cobb angle measurement, and likewise for end vertebra selection. However, the reliability values for the manual set were lower than those for the smartphone set, and two observers selected significantly different end vertebrae in their repeated selections with the manual method. The smartphone-aided method for end vertebra selection and Cobb angle measurement showed excellent reliability. We can expect a reduction in measurement error rates with the widespread use of this method in clinical practice. Level III, diagnostic study. Copyright © 2016 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.

  2. How to: identify non-tuberculous Mycobacterium species using MALDI-TOF mass spectrometry.

    PubMed

    Alcaide, F; Amlerová, J; Bou, G; Ceyssens, P J; Coll, P; Corcoran, D; Fangous, M-S; González-Álvarez, I; Gorton, R; Greub, G; Hery-Arnaud, G; Hrábak, J; Ingebretsen, A; Lucey, B; Mareković, I; Mediavilla-Gradolph, C; Monté, M R; O'Connor, J; O'Mahony, J; Opota, O; O'Reilly, B; Orth-Höller, D; Oviaño, M; Palacios, J J; Palop, B; Pranada, A B; Quiroga, L; Rodríguez-Temporal, D; Ruiz-Serrano, M J; Tudó, G; Van den Bossche, A; van Ingen, J; Rodriguez-Sanchez, B

    2018-06-01

    The implementation of MALDI-TOF MS for microorganism identification has changed the routine of the microbiology laboratories as we knew it. Most microorganisms can now be reliably identified within minutes using this inexpensive, user-friendly methodology. However, its application in the identification of mycobacteria isolates has been hampered by the structure of their cell wall. Improvements in the sample processing method and in the available database have proved key factors for the rapid and reliable identification of non-tuberculous mycobacteria isolates using MALDI-TOF MS. The main objective is to provide information about the procedures for the identification of non-tuberculous isolates using MALDI-TOF MS and to review different sample processing methods, available databases, and the interpretation of the results. Results from relevant studies on the use of the available MALDI-TOF MS instruments, the implementation of innovative sample processing methods, or the implementation of improved databases are discussed. Insight about the methodology required for reliable identification of non-tuberculous mycobacteria and its implementation in the microbiology laboratory routine is provided. Microbiology laboratories where MALDI-TOF MS is available can benefit from its capacity to identify most clinically interesting non-tuberculous mycobacteria in a rapid, reliable, and inexpensive manner. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  3. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS/BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations, such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  4. A method for recording verbal behavior in free-play settings.

    PubMed

    Nordquist, V M

    1971-01-01

    The present study attempted to test the reliability of a new method of recording verbal behavior in a free-play preschool setting. Six children, three normal and three speech impaired, served as subjects. Videotaped records of verbal behavior were scored by two experimentally naive observers. The results suggest that the system provides a means of obtaining reliable records of both normal and impaired speech, even when the subjects exhibit nonverbal behaviors (such as hyperactivity) that interfere with direct observation techniques.

  5. A proposed method to investigate reliability throughout a questionnaire

    PubMed Central

    2011-01-01

    Background Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in the questionnaire. This slope was proposed as an awareness measure, assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though the assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842

  6. Experimental application of OMA solutions on the model of industrial structure

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mironovs, D.

    2017-10-01

    It is very important, and sometimes even vital, to maintain the reliability of industrial structures. High-quality control during production and structural health monitoring (SHM) in service provide reliable functioning of large, massive and remote structures, such as wind generators, pipelines, power line posts, etc. This paper introduces a complex of technological and methodical solutions for SHM and diagnostics of industrial structures, including those actuated by periodic forces. The solutions were verified on a scaled wind generator model with an integrated system of piezo-film deformation sensors. Simultaneous and multi-patch Operational Modal Analysis (OMA) approaches were implemented as methodical means for structural diagnostics and monitoring. Specially designed data processing algorithms provide objective evaluation of structural state modification.

  7. A Valid and Reliable Instrument for Cognitive Complexity Rating Assignment of Chemistry Exam Items

    ERIC Educational Resources Information Center

    Knaus, Karen; Murphy, Kristen; Blecking, Anja; Holme, Thomas

    2011-01-01

    The design and use of a valid and reliable instrument for the assignment of cognitive complexity ratings to chemistry exam items is described in this paper. Use of such an instrument provides a simple method to quantify the cognitive demands of chemistry exam items. Instrument validity was established in two different ways: statistically…

  8. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.

  9. Photovoltaic power system reliability considerations

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.

    1980-01-01

    An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.

  10. Examining the reliability of ADAS-Cog change scores.

    PubMed

    Grochowalski, Joseph H; Liu, Ying; Siedlecki, Karen L

    2016-09-01

    The purpose of this study was to estimate and examine ways to improve the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability. Two methods to increase change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores from the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability, even with reweighting and adding additional subtests. The ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.

  11. Real-Time GNSS-Based Attitude Determination in the Measurement Domain

    PubMed Central

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-01-01

    A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance. PMID:28165434

  12. Development of a hybrid pollution index for heavy metals in marine and estuarine sediments.

    PubMed

    Brady, James P; Ayoko, Godwin A; Martens, Wayde N; Goonetilleke, Ashantha

    2015-05-01

    Heavy metal pollution of sediments is a growing concern in most parts of the world, and numerous studies focussed on identifying contaminated sediments by using a range of digestion methods and pollution indices to estimate sediment contamination have been described in the literature. The current work provides a critical review of the more commonly used sediment digestion methods and identifies that weak acid digestion is more likely to provide guidance on elements that are likely to be bioavailable than other traditional methods of digestion. This work also reviews common pollution indices and identifies the Nemerow Pollution Index as the most appropriate method for establishing overall sediment quality. Consequently, a modified Pollution Index that can lead to a more reliable understanding of whole sediment quality is proposed. This modified pollution index is then tested against a number of existing studies and demonstrated to give a reliable and rapid estimate of sediment contamination and quality.

  13. State recovery and lockstep execution restart in a system with multiprocessor pairing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread for high-reliability connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution when one has been detected by the selective pairing facility.

  14. Motivational Interviewing Skills in Health Care Encounters (MISHCE): Development and psychometric testing of an assessment tool.

    PubMed

    Petrova, Tatjana; Kavookjian, Jan; Madson, Michael B; Dagley, John; Shannon, David; McDonough, Sharon K

    2015-01-01

    Motivational interviewing (MI) has demonstrated a significant impact as an intervention strategy for addiction management, change in lifestyle behaviors, and adherence to prescribed medication and other treatments. Key elements to studying MI include training professionals who will use it, assessing skills acquisition in trainees, and using a validated skills assessment tool. The purpose of this research project was to develop a psychometrically valid and reliable tool to assess MI skills competence in health care provider trainees. The goal was to develop an assessment tool that would evaluate the acquisition and use of specific MI skills and principles, as well as the quality of the patient-provider therapeutic alliance, in brief health care encounters. To address this purpose, specific steps were followed, beginning with a literature review. This review contributed to the development of relevant conceptual and operational definitions, the selection of a scaling technique and response format, and methods for analyzing validity and reliability. Internal consistency reliability was established on 88 video-recorded interactions. Inter-rater and test-retest reliability were established using 18 interactions randomly selected from the 88. The assessment tool Motivational Interviewing Skills for Health Care Encounters (MISHCE) and a manual for use of the tool were developed. Validity and reliability of MISHCE were examined. Face and content validity were supported with well-defined conceptual and operational definitions and feedback from an expert panel. Reliability was established through internal consistency, inter-rater reliability, and test-retest reliability. The overall internal consistency reliability (Cronbach's alpha) for all fifteen items was 0.75. MISHCE demonstrated good inter-rater reliability and good to excellent test-retest reliability.
MISHCE assesses the health provider's level of knowledge and skills in brief disease management encounters. MISHCE also evaluates quality of the patient-provider therapeutic alliance, i.e., the "flow" of the interaction. Copyright © 2015 Elsevier Inc. All rights reserved.
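    The internal-consistency statistic reported above (Cronbach's alpha of 0.75 over fifteen items) can be computed directly from an encounters-by-items rating matrix. The sketch below uses synthetic data, not MISHCE ratings; the dimensions (88 encounters, 15 items) merely mirror the abstract.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from an encounters x items matrix of ratings."""
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(2)
skill = rng.normal(size=(88, 1))                      # latent skill per encounter
items = skill + rng.normal(scale=1.0, size=(88, 15))  # 15 noisy item ratings
print(round(cronbach_alpha(items), 2))
```

    Alpha rises toward 1 as items covary more strongly; identical items give exactly 1.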

  15. Reliability design and verification for launch-vehicle propulsion systems - Report of an AIAA Workshop, Washington, DC, May 16, 17, 1989

    NASA Astrophysics Data System (ADS)

    Launch vehicle propulsion system reliability considerations during the design and verification processes are discussed. The tools available for predicting and minimizing anomalies or failure modes are described and objectives for validating advanced launch system propulsion reliability are listed. Methods for ensuring vehicle/propulsion system interface reliability are examined and improvements in the propulsion system development process are suggested to improve reliability in launch operations. Also, possible approaches to streamline the specification and procurement process are given. It is suggested that government and industry should define reliability program requirements and manage production and operations activities in a manner that provides control over reliability drivers. Also, it is recommended that sufficient funds should be invested in design, development, test, and evaluation processes to ensure that reliability is not inappropriately subordinated to other management considerations.

  16. A proposed method to investigate reliability throughout a questionnaire.

    PubMed

    Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H

    2011-10-05

    Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in the questionnaire. This slope was proposed as an awareness measure, assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though the assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales.
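    A minimal sketch of the screening idea, under simplified assumptions of my own (no cluster-analysis step, a sliding window of items, a one-way ICC): simulate answers whose random error grows through the questionnaire, estimate an ICC per item position, and take the slope of ICC against position as the awareness measure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_items = 200, 30
trait = rng.normal(size=(n_subj, 1))
noise_sd = np.linspace(0.5, 2.0, n_items)     # random error grows with item position
data = trait + rng.normal(size=(n_subj, n_items)) * noise_sd

def icc_oneway(y):
    """One-way random-effects ICC(1) from a subjects x items matrix."""
    n, k = y.shape
    msb = k * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (n - 1)       # between subjects
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

window = 5
positions = np.arange(n_items - window + 1)
iccs = np.array([icc_oneway(data[:, p:p + window]) for p in positions])
slope = np.polyfit(positions, iccs, 1)[0]
print(round(slope, 4))   # negative slope: reliability declines through the form
```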

  17. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, a weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience in terms of providing the possibility of failure of bottom events. It applies fault-tree analysis, the α-cut of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
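    The weakest-t-norm arithmetic at the core of the method can be illustrated on ordinary triangular fuzzy numbers; the sketch below is a simplification (membership triangles only, not full intuitionistic pairs), the product rule is the usual approximation for positive numbers, and the failure possibilities are invented.

```python
def tw_add(a, b):
    """Weakest-t-norm (drastic t-norm) sum of triangular fuzzy numbers (l, m, r):
    the modal values add, but the spreads combine by max rather than adding,
    so fuzziness does not accumulate through the fault tree."""
    (al, am, ar), (bl, bm, br) = a, b
    m = am + bm
    return (m - max(am - al, bm - bl), m, m + max(ar - am, br - bm))

def tw_mul(a, b):
    """Approximate weakest-t-norm product for positive triangular fuzzy numbers."""
    (al, am, ar), (bl, bm, br) = a, b
    m = am * bm
    left = max(am * (bm - bl), bm * (am - al))
    right = max(am * (br - bm), bm * (ar - am))
    return (m - left, m, m + right)

# Invented failure possibilities of two bottom events.
qa = (0.02, 0.05, 0.08)
qb = (0.01, 0.04, 0.09)
print(tw_add(qa, qb))
```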

  18. Reliability Analysis of Retaining Walls Subjected to Blast Loading by Finite Element Approach

    NASA Astrophysics Data System (ADS)

    GuhaRay, Anasua; Mondal, Stuti; Mohiuddin, Hisham Hasan

    2018-02-01

    Conventional design methods adopt factors of safety as per practice and experience, which are deterministic in nature. The limit state method, though not completely deterministic, does not take into account the effect of design parameters that are inherently variable, such as the cohesion and angle of internal friction of soil. Reliability analysis provides a means to incorporate these variations into the analysis and hence results in a more realistic design. Several studies have been carried out on the reliability of reinforced concrete walls and masonry walls under explosions, and reliability analysis of retaining structures against various kinds of failure has been done. However, very few research works are available on reliability analysis of retaining walls subjected to blast loading. Thus, the present paper considers the effect of variation of geotechnical parameters when a retaining wall is subjected to blast loading. It is found that the variation of geotechnical random variables does not have a significant effect on the stability of retaining walls subjected to blast loading.

  19. Methods for assessing reliability and validity for a measurement tool: a case study and critique using the WHO haemoglobin colour scale.

    PubMed

    White, Sarah A; van den Broek, Nynke R

    2004-05-30

    Before introducing a new measurement tool it is necessary to evaluate its performance. Several statistical methods have been developed, or used, to evaluate the reliability and validity of a new assessment method in such circumstances. In this paper we review some commonly used methods. Data from a study that was conducted to evaluate the usefulness of a specific measurement tool (the WHO Colour Scale) are then used to illustrate the application of these methods. The WHO Colour Scale was developed under the auspices of the WHO to provide a simple, portable and reliable method of detecting anaemia. This Colour Scale is a discrete interval scale, whereas the actual haemoglobin values it is used to estimate lie on a continuous interval scale and can be measured accurately using electrical laboratory equipment. The methods we consider are: linear regression; correlation coefficients; paired t-tests; plotting differences against mean values and deriving limits of agreement; kappa and weighted kappa statistics; sensitivity and specificity; an intraclass correlation coefficient; and the repeatability coefficient. We note that although the definition and properties of each of these methods are well established, inappropriate methods continue to be used in the medical literature for assessing reliability and validity, as evidenced in the context of the evaluation of the WHO Colour Scale. Copyright 2004 John Wiley & Sons, Ltd.
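    One of the listed methods, plotting differences against mean values and deriving 95% limits of agreement (the Bland-Altman approach), is easily sketched. The haemoglobin values below are simulated, not the study's data; the sample size and error level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lab_hb = rng.uniform(6.0, 15.0, 60)             # reference haemoglobin, g/dL
scale_hb = lab_hb + rng.normal(0.0, 1.0, 60)    # simulated colour-scale estimates

diff = scale_hb - lab_hb
mean_pair = (scale_hb + lab_hb) / 2             # x-axis of a Bland-Altman plot
bias = diff.mean()                              # mean difference between methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)      # 95% limits of agreement
print(round(bias, 2), tuple(round(v, 2) for v in loa))
```

    If the limits of agreement are clinically acceptable, the two methods may be used interchangeably; correlation alone cannot establish that.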

  20. What is the best method for assessing lower limb force-velocity relationship?

    PubMed

    Giroux, C; Rabita, G; Chollet, D; Guilhem, G

    2015-02-01

    This study determined the concurrent validity and reliability of force, velocity and power measurements provided by accelerometry, a linear position transducer and Samozino's method during loaded squat jumps. 17 subjects performed squat jumps on 2 separate occasions in 7 loading conditions (0-60% of the maximal concentric load). Force, velocity and power patterns were averaged over the push-off phase using accelerometry, the linear position transducer and a method based on key position measurements during the squat jump, and compared to force plate measurements. Concurrent validity analyses indicated very good agreement with the reference method (CV=6.4-14.5%). Comparison of the force, velocity and power patterns confirmed the agreement, with slight differences for high-velocity movements. The validity of the measurements was equivalent for all tested methods (r=0.87-0.98). Bland-Altman plots showed lower agreement for velocity and power compared with force. Mean force, velocity and power were reliable for all methods (ICC=0.84-0.99), especially for Samozino's method (CV=2.7-8.6%). Our findings show that these methods are valid and reliable in different loading conditions and permit between-session comparisons and characterization of training-induced effects. While the linear position transducer and accelerometer allow for examining the whole time-course of the kinetic patterns, Samozino's method benefits from better reliability and ease of processing. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Evaluation of hydrate-screening methods.

    PubMed

    Cui, Yong; Yao, Erica

    2008-07-01

    The purpose of this work is to evaluate the effectiveness and reliability of several common hydrate-screening techniques, and to provide guidelines for designing hydrate-screening programs for new drug candidates. Ten hydrate-forming compounds were selected as model compounds and six hydrate-screening approaches were applied to these compounds in an effort to generate their hydrate forms. The results prove that no screening approach is universally effective in finding hydrates for small organic compounds. Rather, a combination of different methods should be used to improve screening reliability. Among the approaches tested, the dynamic water vapor sorption/desorption isotherm (DVI) method and storage under high humidity (HH) yielded 60-70% success ratios, the lowest among all techniques studied. The risk of false negatives arises in particular for nonhygroscopic compounds. On the other hand, both slurry in water (Slurry) and temperature cycling of aqueous suspension (TCS) showed high success rates (90%) with some exceptions. The mixed solvent systems (MSS) procedure also achieved high success rates (90%), and was found to be more suitable for water-insoluble compounds. For water-soluble compounds, MSS may not be the best approach because recrystallization is difficult in solutions with high water activity. Finally, vapor diffusion (VD) yielded a reasonably high success ratio in finding hydrates (80%). However, this method suffers from experimental difficulty and unreliable results for either highly water-soluble or water-insoluble compounds. This study indicates that a reliable hydrate-screening strategy should take into consideration the solubility and hygroscopicity of the compounds studied. A combination of the Slurry or TCS method with the MSS procedure could provide a screening strategy with reasonable reliability.

  2. Solving Nonlinear Fractional Differential Equation by Generalized Mittag-Leffler Function Method

    NASA Astrophysics Data System (ADS)

    Arafa, A. A. M.; Rida, S. Z.; Mohammadein, A. A.; Ali, H. M.

    2013-06-01

    In this paper, we use the Mittag-Leffler function method for solving some nonlinear fractional differential equations. A new solution is constructed in power series. The fractional derivatives are described in the Caputo sense. To illustrate the reliability of the method, some examples are provided.
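    The generalized Mittag-Leffler function E_{α,β}(z) = Σ_{k≥0} z^k/Γ(αk+β) underlying the method can be evaluated by truncating the series; below is a minimal sketch (the truncation length is an arbitrary choice, adequate for small |z|), checked against two classical special cases.

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=60):
    """Truncated series for E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against classical special cases:
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12)          # E_{1,1}(z) = exp(z)
print(abs(mittag_leffler(1.0, 2.0) - math.cosh(1.0)) < 1e-12)  # E_{2,1}(z) = cosh(sqrt(z))
```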

  3. Traditional and nontraditional internships in government

    NASA Technical Reports Server (NTRS)

    Stohrer, Freda F.; Pinelli, Thomas E.

    1980-01-01

    Traditional and nontraditional methods for training technical writers-editors within the federal government are discussed. It is concluded that cooperative education that combines work experience with classroom instruction provides an excellent method for locating and training competent and reliable young professionals.

  4. The Noninvasive Measurement of X-Ray Tube Potential.

    NASA Astrophysics Data System (ADS)

    Ranallo, Frank Nunzio

    In this thesis I briefly describe the design of clinical x-ray imaging systems and also the various methods of measuring x-ray tube potential, both invasive and noninvasive. I also discuss the meaning and usage of the quantities tube potential (kV) and peak tube potential (kVp) with reference to x-ray systems used in medical imaging. I propose that there exist several quantities which describe different important aspects of the tube potential as a function of time. These quantities are measurable and can be well defined. I have developed a list of definitions of these quantities along with suggested names and symbols. I describe the development and physical principles of a superior noninvasive method of tube potential measurement along with the instrumentation used to implement this method. This thesis research resulted in the development of several commercial kVp test devices (or "kVp Meters") for which the actual measurement procedure is simple, rapid, and reliable compared to other methods, invasive or noninvasive. These kVp test devices provide measurements with a high level of accuracy and reliability over a wide range of test conditions. They provide results which are more reliable and clinically meaningful than many other, more primary and invasive methods. The errors inherent in these new kVp test devices were investigated and methods to minimize them are discussed.

  5. The reliability and validity of measurements of human dental casts made by an intra-oral 3D scanner, with conventional hand-held digital callipers as the comparison measure.

    PubMed

    Rajshekar, Mithun; Julian, Roberta; Williams, Anne-Marie; Tennant, Marc; Forrest, Alex; Walsh, Laurence J; Wilson, Gary; Blizzard, Leigh

    2017-09-01

    Intra-oral 3D scanning of dentitions has the potential to provide a fast, accurate and non-invasive method of recording dental information. The aim of this study was to assess the reliability of measurements of human dental casts made using a portable intra-oral 3D scanner appropriate for field use. Two examiners each measured 84 tooth and 26 arch features of 50 sets of upper and lower human dental casts, first using digital hand-held callipers and second using the measuring tool provided with the Zfx IntraScan intraoral 3D scanner applied to the virtual dental casts. The measurements were repeated at least one week later. Reliability and validity were quantified concurrently by calculation of intra-class correlation coefficients (ICC) and standard errors of measurement (SEM). The measurements of the 110 landmark features of human dental casts made using the intra-oral 3D scanner were virtually indistinguishable from measurements of the same features made using conventional hand-held callipers. The difference of means as a percentage of the average of the measurements by each method ranged between 0.030% and 1.134%. The inter-method SEMs ranged between 0.037% and 0.535%, and the inter-method ICCs ranged between 0.904 and 0.999, for both the upper and the lower arches. The inter-rater SEMs were one-half and the intra-method/rater SEMs were one-third of the inter-method values. This study demonstrates that the Zfx IntraScan intra-oral 3D scanner with its virtual on-screen measuring tool is a reliable and valid method for measuring the key features of dental casts. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Candida bloodstream infection: a clinical microbiology laboratory perspective.

    PubMed

    Pongrácz, Júlia; Kristóf, Katalin

    2014-09-01

    The incidence of Candida bloodstream infection (BSI) has been on the rise in several countries worldwide. Species distribution is changing; an increase in the percentage of non-albicans species, mainly fluconazole non-susceptible C. glabrata, has been reported. Existing microbiology diagnostic methods lack sensitivity, and new methods need to be developed or further evaluated for routine application. Although reliable, standardized methods for antifungal susceptibility testing are available, the determination of clinical breakpoints remains challenging. Correct species identification is important and provides information on the intrinsic susceptibility profile of the isolate. Currently, acquired resistance in clinical Candida isolates is rare, but reports indicate that it could become an issue in the future. The role of the clinical microbiology laboratory is to isolate and correctly identify the infective agent and provide relevant and reliable susceptibility data as soon as possible to guide antifungal therapy.

  7. Consistency of clinical biomechanical measures between three different institutions: implications for multi-center biomechanical and epidemiological research.

    PubMed

    Myer, Gregory D; Wordeman, Samuel C; Sugimoto, Dai; Bates, Nathaniel A; Roewer, Benjamin D; Medina McKeon, Jennifer M; DiCesare, Christopher A; Di Stasi, Stephanie L; Barber Foss, Kim D; Thomas, Staci M; Hewett, Timothy E

    2014-05-01

    Multi-center collaborations provide a powerful alternative to overcome the inherent limitations of single-center investigations. Specifically, multi-center projects can support large-scale prospective, longitudinal studies that investigate relatively uncommon outcomes, such as anterior cruciate ligament injury. This project was conceived to assess within- and between-center reliability of an affordable, clinical nomogram utilizing two-dimensional video methods to screen for risk of knee injury. The authors hypothesized that the two-dimensional screening methods would provide good-to-excellent reliability within and between institutions for assessment of frontal and sagittal plane biomechanics. Nineteen female high school athletes participated. Two-dimensional video kinematics of the lower extremity during a drop vertical jump task were collected on all 19 study participants at each of the three facilities. Within-center and between-center reliability were assessed with intra- and inter-class correlation coefficients. Within-center reliability of the clinical nomogram variables was consistently excellent, but between-center reliability was fair-to-good. The within-center intra-class correlation coefficient for all nomogram variables combined was 0.98, while the combined between-center inter-class correlation coefficient was 0.63. Injury risk screening protocols were reliable within and repeatable between centers. These results demonstrate the feasibility of multi-site biomechanical studies and establish a framework for further dissemination of injury risk screening algorithms. Specifically, multi-center studies may allow for further validation and optimization of two-dimensional video screening tools. Level of evidence: 2b.

  8. 77 FR 39895 - New Analytic Methods and Sampling Procedures for the United States National Residue Program for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...

  9. Japanese Adaptation of the Stroke and Aphasia Quality of Life Scale-39 (SAQOL-39): Comparative Study among Different Types of Aphasia.

    PubMed

    Kamiya, Akane; Kamiya, Kentaro; Tatsumi, Hiroshi; Suzuki, Makihiko; Horiguchi, Satoshi

    2015-11-01

    We have developed a Japanese version of the Stroke and Aphasia Quality of Life Scale-39 (SAQOL-39), designated as SAQOL-39-J, and used psychometric methods to examine its acceptability and reliability. The acceptability and reliability of SAQOL-39-J, which was developed from the English version using a standard translation and back-translation method, were examined in 54 aphasia patients using standard psychometric methods. The acceptability and reliability of SAQOL-39-J were then compared among patients with different types of aphasia. SAQOL-39-J showed good acceptability, internal consistency (Cronbach's α score = .90), and test-retest reliability (intraclass correlation coefficient = .97). Broca's aphasia patients showed the lowest total scores and communication scores on SAQOL-39-J. The Japanese version of SAQOL-39, SAQOL-39-J, provides acceptable and reliable data in Japanese stroke patients with aphasia. Among different types of aphasia, Broca's aphasia patients had the lowest total and communication SAQOL-39-J scores. Further studies are needed to assess the effectiveness of health care interventions on health-related quality of life in this population. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
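    The internal-consistency figure quoted in this record is Cronbach's α. A minimal sketch of the statistic, computed on made-up item scores rather than the SAQOL-39-J data, might look like:

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents x k_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical item scores; perfectly consistent items give alpha = 1
    demo = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
    alpha = cronbach_alpha(demo)
    ```

    Real questionnaire data sit below 1; the .90 reported above indicates high but not degenerate internal consistency.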

  10. Tackling reliability and construct validity: the systematic development of a qualitative protocol for skill and incident analysis.

    PubMed

    Savage, Trevor Nicholas; McIntosh, Andrew Stuart

    2017-03-01

    It is important to understand factors contributing to and directly causing sports injuries to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis and biomechanical theoretical models and epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback, and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable with other studies that have reported inter-rater reliability of qualitative analysis of human movement (52-90% agreement; ICC between 0.6 and 0.9).

  11. Measurement methods to assess diastasis of the rectus abdominis muscle (DRAM): A systematic review of their measurement properties and meta-analytic reliability generalisation.

    PubMed

    van de Water, A T M; Benjamin, D R

    2016-02-01

    Systematic literature review. Diastasis of the rectus abdominis muscle (DRAM) has been linked with low back pain, abdominal and pelvic dysfunction. Measurement is used either to screen for or to monitor DRAM width. Determining which methods are suitable for screening and monitoring DRAM is of clinical value. To identify the best methods to screen for DRAM presence and monitor DRAM width. AMED, Embase, Medline, PubMed and CINAHL databases were searched for measurement property studies of DRAM measurement methods. Population characteristics, measurement methods/procedures and measurement information were extracted from included studies. Quality of all studies was evaluated using 'quality rating criteria'. When possible, reliability generalisation was conducted to provide combined reliability estimations. Thirteen studies evaluated measurement properties of the 'finger width'-method, tape measure, calipers, ultrasound, CT and MRI. Ultrasound was most evaluated. Methodological quality of these studies varied widely. Pearson's correlations of r = 0.66-0.79 were found between calipers and ultrasound measurements. Calipers and ultrasound had Intraclass Correlation Coefficients (ICC) of 0.78-0.97 for test-retest, inter- and intra-rater reliability. The 'finger width'-method had weighted Kappa's of 0.73-0.77 for test-retest reliability, but moderate agreement (63%; weighted Kappa = 0.53) between raters. Comparing calipers and ultrasound, low measurement error was found (above the umbilicus), and the methods had good agreement (83%; weighted Kappa = 0.66) for discriminative purposes. The available information supports ultrasound and calipers as adequate methods to assess DRAM. For the other methods, only limited measurement information of low to moderate quality is available, and further evaluation of their measurement properties is required. Copyright © 2015 Elsevier Ltd. All rights reserved.
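    The agreement statistics quoted in this record include weighted Kappa. A minimal sketch of a linearly weighted Cohen's kappa, on hypothetical severity ratings from two raters (the review's exact category scheme and weights are assumptions), might look like:

    ```python
    import numpy as np

    def weighted_kappa(r1, r2, n_cat):
        """Linearly weighted Cohen's kappa for two raters' category assignments:
        kappa = 1 - (weighted observed disagreement) / (weighted chance disagreement)."""
        obs = np.zeros((n_cat, n_cat))
        for a, b in zip(r1, r2):
            obs[a, b] += 1
        obs /= obs.sum()
        exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement from marginals
        i, j = np.indices((n_cat, n_cat))
        w = np.abs(i - j) / (n_cat - 1)                   # linear disagreement weights
        return 1 - (w * obs).sum() / (w * exp).sum()

    # Hypothetical categories (0=none, 1=mild, 2=severe) from two raters
    rater1 = [0, 1, 2, 1, 0, 2]
    rater2 = [0, 1, 2, 1, 0, 2]
    kappa = weighted_kappa(rater1, rater2, 3)
    ```

    Perfect agreement, as in this toy input, gives kappa = 1; the 0.53-0.77 range above reflects moderate-to-good real-world agreement.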

  12. Differential reliability : probabilistic engineering applied to wood members in bending-tension

    Treesearch

    Stanley K. Suddarth; Frank E. Woeste; William L. Galligan

    1978-01-01

    Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...

  13. A Method for Measuring the Hardness of the Surface Layer on Hot Forging Dies Using a Nanoindenter

    NASA Astrophysics Data System (ADS)

    Mencin, P.; van Tyne, C. J.; Levy, B. S.

    2009-11-01

    The properties and characteristics of the surface layer of forging dies are critical for understanding and controlling wear. However, the surface layer is very thin, and appropriate property measurements are difficult to obtain. The objective of the present study is to determine if nanoindenter testing provides a reliable method, which could be used to measure the surface hardness in forging die steels. To test the reliability of nanoindenter testing, nanoindenter values for two quenched and tempered steels (FX and H13) are compared to microhardness and macrohardness values. These steels were heat treated for various times to produce specimens with different values of hardness. The heat-treated specimens were tested using three different instruments—a Rockwell hardness tester for macrohardness, a Vickers hardness tester for microhardness, and a nanoindenter tester for fine scale evaluation of hardness. The results of this study indicate that nanoindenter values obtained using a Nanoindenter XP Machine with a Berkovich indenter reliably correlate with Rockwell C macrohardness values, and with Vickers HV microhardness values. Consequently, nanoindenter testing can provide reliable results for analyzing the surface layer of hot forging dies.

  14. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  15. Critically re-evaluating a common technique

    PubMed Central

    Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B.

    2016-01-01

    Objectives: (1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Methods: Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Results: Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%–90%); specificity was 71%, CI 56%–85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41–0.81); interrater reliability was lower (κ 0.53, CI 0.35–0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI −13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. Conclusions: This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. Classification of evidence: This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. PMID:26701380

  16. Battery Materials Synthesis | Transportation Research | NREL

    Science.gov Websites

    ... research has achieved greater battery stability through both conventional and innovative methods. The lab's ... provided innovative and cost-effective methods to mitigate lifespan and reliability concerns. Atomic Layer ... into an in-line, roll-to-roll format that can be integrated with manufacturing methods. Electrodes ...

  17. Three-dimensional implicit lambda methods

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Dadone, A.

    1983-01-01

    This paper derives the three-dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them numerically. Three model problems, characterized by subsonic, supersonic and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.

  18. 77 FR 40866 - Applications for New Awards; Innovative Approaches to Literacy Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-11

    ... supported by the methods that have been employed. The term includes, appropriate to the research being... observational methods that provide reliable data; (iv) making claims of causal relationships only in random...; and (vii) using research designs and methods appropriate to the research question posed...

  19. 75 FR 13515 - Office of Innovation and Improvement (OII); Overview Information; Ready-to-Learn Television...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... on rigorous scientifically based research methods to assess the effectiveness of a particular... activities and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw... or observational methods that provide reliable and valid data across evaluators and observers, across...

  20. A method for recording verbal behavior in free-play settings

    PubMed Central

    Nordquist, Vey M.

    1971-01-01

    The present study attempted to test the reliability of a new method of recording verbal behavior in a free-play preschool setting. Six children, three normal and three speech impaired, served as subjects. Videotaped records of verbal behavior were scored by two experimentally naive observers. The results suggest that the system provides a means of obtaining reliable records of both normal and impaired speech, even when the subjects exhibit nonverbal behaviors (such as hyperactivity) that interfere with direct observation techniques. PMID:16795310

  1. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    PubMed

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.

  2. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps

    PubMed Central

    Varikuti, Deepthi P.; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T.; Eickhoff, Simon B.

    2016-01-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that grey matter masking improved the reliability of connectivity estimates, whereas de-noising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources. PMID:27550015

  3. Radiographic measurement reliability of lumbar lordosis in ankylosing spondylitis.

    PubMed

    Lee, Jung Sub; Goh, Tae Sik; Park, Shi Hwan; Lee, Hong Seok; Suh, Kuen Tak

    2013-04-01

    Intraobserver and interobserver reliabilities of several different methods to measure lumbar lordosis have been reported. However, these have not been studied so far in patients with ankylosing spondylitis (AS). We evaluated the inter- and intraobserver reliabilities of six specific measures of global lumbar lordosis in patients with AS. Ninety-one consecutive patients with AS who met the most recently modified New York criteria were enrolled and underwent anteroposterior and lateral radiographs of the whole spine. The radiographs were divided into non-ankylosis (no bony bridge in the lumbar spine), incomplete ankylosis (lumbar spines partially connected by bony bridge) and complete ankylosis groups to evaluate the reliability of the Cobb L1-S1, Cobb L1-L5, centroid, posterior tangent L1-S1, posterior tangent L1-L5, and TRALL methods. The radiographs comprised 39 non-ankylosis, 27 incomplete ankylosis and 25 complete ankylosis cases. Intra- and inter-class correlation coefficients (ICCs) of all six methods were generally high. The ICCs were all ≥0.77 (excellent) for the six radiographic methods in the combined group. However, a comparison of the ICCs, 95% confidence intervals and mean absolute differences (MAD) between groups with varying degrees of ankylosis showed that the reliability of the lordosis measurements decreased in proportion to the severity of ankylosis. The Cobb L1-S1, Cobb L1-L5 and posterior tangent L1-S1 methods demonstrated higher ICCs for both inter- and intraobserver comparisons, whereas the other methods showed lower ICCs in all groups. The intraobserver MAD was similar for the Cobb L1-S1 and Cobb L1-L5 methods (2.7°-4.3°), but the other methods showed higher intraobserver MAD. Only the Cobb L1-L5 method showed a low interobserver MAD in all groups. These results are the first to provide a reliability analysis of different global lumbar lordosis measurement methods in AS. The findings in this study demonstrate that the Cobb L1-L5 method is reliable for measuring global lumbar lordosis in AS.

  4. A stochastic simulation method for the assessment of resistive random access memory retention reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw

    This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO2 device with a double-layer ZnO/ZrO2 one, and obtains results which are in good agreement with experimental data.
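    The record gives no implementation details, but the Metropolis acceptance rule it builds on can be sketched on a toy quadratic energy standing in for the Gibbs free energy (all names, parameters and the energy function here are illustrative, not the paper's):

    ```python
    import math
    import random

    def metropolis(energy, x0, n_steps=2000, step=0.5, kT=0.1, seed=42):
        """Metropolis Monte Carlo: propose a random move, always accept it if the
        energy drops, otherwise accept with probability exp(-dE / kT)."""
        rng = random.Random(seed)
        x, e = x0, energy(x0)
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            e_new = energy(x_new)
            if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
                x, e = x_new, e_new
        return x, e

    # Toy quadratic "free energy"; the sampler drifts toward the minimum at x = 0
    x_final, e_final = metropolis(lambda x: x * x, x0=5.0)
    ```

    Because states are compared directly by their energies, no time evolution is simulated, which is the efficiency the abstract points to.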

  5. Reading PDB: perception of molecules from 3D atomic coordinates.

    PubMed

    Urbaczek, Sascha; Kolodzik, Adrian; Groth, Inken; Heuser, Stefan; Rarey, Matthias

    2013-01-28

    The analysis of small molecule crystal structures is a common way to gather valuable information for drug development. The necessary structural data is usually provided in specific file formats containing only element identities and three-dimensional atomic coordinates as reliable chemical information. Consequently, the automated perception of molecular structures from atomic coordinates has become a standard task in cheminformatics. The molecules generated by such methods must be both chemically valid and reasonable to provide a reliable basis for subsequent calculations. This can be a difficult task since the provided coordinates may deviate from ideal molecular geometries due to experimental uncertainties or low resolution. Additionally, the quality of the input data often differs significantly thus making it difficult to distinguish between actual structural features and mere geometric distortions. We present a method for the generation of molecular structures from atomic coordinates based on the recently published NAOMI model. By making use of this consistent chemical description, our method is able to generate reliable results even with input data of low quality. Molecules from 363 Protein Data Bank (PDB) entries could be perceived with a success rate of 98%, a result which could not be achieved with previously described methods. The robustness of our approach has been assessed by processing all small molecules from the PDB and comparing them to reference structures. The complete data set can be processed in less than 3 min, thus showing that our approach is suitable for large scale applications.

  6. Next-generation sequencing is a robust strategy for the high-throughput detection of zygosity in transgenic maize.

    PubMed

    Fritsch, Leonie; Fischer, Rainer; Wambach, Christoph; Dudek, Max; Schillberg, Stefan; Schröper, Florian

    2015-08-01

    Simple and reliable, high-throughput techniques to detect the zygosity of transgenic events in plants are valuable for biotechnology and plant breeding companies seeking robust genotyping data for the assessment of new lines and the monitoring of breeding programs. We show that next-generation sequencing (NGS) applied to short PCR products spanning the transgene integration site provides accurate zygosity data that are more robust and reliable than those generated by PCR-based methods. The NGS reads covered the 5' border of the transgenic events (incorporating part of the transgene and the flanking genomic DNA), or the genomic sequences flanking the unfilled transgene integration site at the wild-type locus. We compared the NGS method to competitive real-time PCR with transgene-specific and wild-type-specific primer/probe pairs, one pair matching the 5' genomic flanking sequence and 5' part of the transgene and the other matching the unfilled transgene integration site. Although both NGS and real-time PCR provided useful zygosity data, the NGS technique was favorable because it needed fewer optimization steps. It also provided statistically more-reliable evidence for the presence of each allele because each product was often covered by more than 100 reads. The NGS method is also more suitable for the genotyping of large panels of plants because up to 80 million reads can be produced in one sequencing run. Our novel method is therefore ideal for the rapid and accurate genotyping of large numbers of samples.

  7. Establishing survey validity and reliability for American Indians through "think aloud" and test-retest methods.

    PubMed

    Hauge, Cindy Horst; Jacobs-Knight, Jacque; Jensen, Jamie L; Burgess, Katherine M; Puumala, Susan E; Wilton, Georgiana; Hanson, Jessica D

    2015-06-01

    The purpose of this study was to use a mixed-methods approach to determine the validity and reliability of measurements used within an alcohol-exposed pregnancy prevention program for American Indian women. To develop validity, content experts provided input into the survey measures, and a "think aloud" methodology was conducted with 23 American Indian women. After revising the measurements based on this input, a test-retest was conducted with 79 American Indian women who were randomized to complete either the original measurements or the new, modified measurements. The test-retest revealed that some of the questions performed better for the modified version, whereas others appeared to be more reliable for the original version. The mixed-methods approach was a useful methodology for gathering feedback on survey measurements from American Indian participants and in indicating specific survey questions that needed to be modified for this population. © The Author(s) 2015.

  8. Methodology to improve design of accelerated life tests in civil engineering projects.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. We input these energies into planning an accelerated life test, a Multi Environment Over Stress Test. The test objective is to discover weak links and interactions among the system and the energies to which it is exposed, and design them out. As an example, the methods are applied to the pipe in subsea pipeline. However, they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods.

  9. A Reliability Estimation in Modeling Watershed Runoff With Uncertainties

    NASA Astrophysics Data System (ADS)

    Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.

    1990-10-01

    The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
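    The framework described above propagates input uncertainty into a probability distribution of the output, e.g. a failure probability. A minimal Monte Carlo sketch of that idea, with hypothetical normal capacity R and demand S (not the HEC-1 example), can be checked against the closed-form normal-case result:

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical normal capacity (R) and demand (S); failure means R < S
    mu_r, sd_r, mu_s, sd_s = 10.0, 1.0, 7.0, 1.0
    n = 200_000
    r = rng.normal(mu_r, sd_r, n)
    s = rng.normal(mu_s, sd_s, n)
    pf_mc = np.mean(r < s)  # Monte Carlo estimate of the failure probability

    # Exact result for independent normals: reliability index
    # beta = (mu_r - mu_s) / sqrt(sd_r^2 + sd_s^2), P_f = Phi(-beta)
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    pf_exact = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
    ```

    First-order second-moment techniques approximate the same beta analytically when sampling the full model, as here, would be too expensive.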

  10. WEIGHT OF EVIDENCE IN ECOLOGICAL ASSESSMENT

    EPA Science Inventory

    This document provides guidance on methods for weighing ecological evidence using a standard framework consisting of three steps: assemble evidence, weigh evidence and weigh the body of evidence. Use of the methods will improve the consistency and reliability of WoE-based asse...

  11. A hyperspectral image optimizing method based on sub-pixel MTF analysis

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie

    2015-04-01

    Hyperspectral imaging is used to collect tens or hundreds of images continuously divided across the electromagnetic spectrum so that details under different wavelengths can be represented. A popular hyperspectral imaging method uses a tunable optical band-pass filter placed in front of the focal plane to acquire images at different wavelengths. To alleviate the influence of chromatic aberration in some segments of a hyperspectral series, this paper presents a hyperspectral optimizing method that uses the sub-pixel MTF to evaluate image blurring quality. The method acquires the edge feature in the target window by means of the line spread function (LSF) to calculate the reliable position of the edge feature; the evaluation grid in each line is then interpolated from the real pixel values based on their relative position to the optimal edge, and the sub-pixel MTF is used to analyze the image in the frequency domain, by which the MTF calculation dimension is increased. The sub-pixel MTF evaluation is reliable, since no image rotation or pixel value estimation is needed and no artificial information is introduced. Theoretical analysis shows that the proposed method is reliable and efficient when evaluating common images with edges of small tilt angle in real scenes. It also provides a direction for subsequent hyperspectral image blurring evaluation and real-time focal plane adjustment in related imaging systems.

  12. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

    All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable and yields a consistent prediction for the Fe-Porphyrin complex.

  13. A Methodological Critique of the ProPublica Surgeon Scorecard

    PubMed Central

    Friedberg, Mark W.; Pronovost, Peter J.; Shahian, David M.; Safran, Dana Gelb; Bilimoria, Karl Y.; Elliott, Marc N.; Damberg, Cheryl L.; Dimick, Justin B.; Zaslavsky, Alan M.

    2016-01-01

    Abstract On July 14, 2015, ProPublica published its Surgeon Scorecard, which displays “Adjusted Complication Rates” for individual, named surgeons for eight surgical procedures performed in hospitals. Public reports of provider performance have the potential to improve the quality of health care that patients receive. A valid performance report can drive quality improvement and usefully inform patients' choices of providers. However, performance reports with poor validity and reliability are potentially damaging to all involved. This article critiques the methods underlying the Scorecard and identifies opportunities for improvement. Until these opportunities are addressed, the authors advise users of the Scorecard—most notably, patients who might be choosing their surgeons—not to consider the Scorecard a valid or reliable predictor of the health outcomes any individual surgeon is likely to provide. The authors hope that this methodological critique will contribute to the development of more-valid and more-reliable performance reports in the future. PMID:28083411

  14. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or bounding the Bayes error generally yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated on artificial data, a difficult four-class problem involving underwater acoustic data, and two benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also appear quite reliable for the real-life data sets for which the true Bayes rates are unknown.
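    In the spirit of the third, label-only method described above, a simple disagreement statistic can be computed from ensemble votes. This is an illustrative proxy, not the article's exact estimator:

    ```python
    from collections import Counter

    def disagreement_rate(predictions):
        """Fraction of per-classifier votes that differ from the plurality
        label, averaged over samples. `predictions` holds one tuple of
        ensemble-member labels per sample."""
        total, disagreements = 0, 0
        for votes in predictions:
            plurality, _ = Counter(votes).most_common(1)[0]
            disagreements += sum(1 for v in votes if v != plurality)
            total += len(votes)
        return disagreements / total

    # Three classifiers voting on four samples (hypothetical labels).
    votes = [("a", "a", "a"), ("a", "b", "a"), ("b", "b", "b"), ("a", "b", "b")]
    rate = disagreement_rate(votes)  # 2 disagreeing votes out of 12
    ```

    Low disagreement among independent classifiers suggests the ensemble is operating near the Bayes rate; rising disagreement signals irreducible class overlap.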

  15. Developing safety performance functions incorporating reliability-based risk measures.

    PubMed

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek

    2011-11-01

    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
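    The probability of non-compliance P(nc) for the sight-distance limit state (available supply minus stopping demand) can be sketched with a crude Monte Carlo estimate. The paper itself uses FORM; the distributions and parameter values below are illustrative assumptions only:

    ```python
    import random

    random.seed(42)

    def stopping_distance(v, t_pr, decel):
        """Demand: perception-reaction travel plus braking distance (m),
        for speed v (m/s), reaction time t_pr (s), deceleration decel (m/s^2)."""
        return v * t_pr + v * v / (2.0 * decel)

    def prob_noncompliance(supply, n=100_000):
        """Monte Carlo estimate of P(nc) = P(demand > supply), with
        assumed (illustrative) normal distributions for the design inputs."""
        fails = 0
        for _ in range(n):
            v = random.gauss(25.0, 2.0)     # operating speed, m/s
            t_pr = random.gauss(1.5, 0.3)   # perception-reaction time, s
            decel = random.gauss(3.4, 0.4)  # deceleration rate, m/s^2
            if stopping_distance(v, t_pr, decel) > supply:
                fails += 1
        return fails / n

    p_nc = prob_noncompliance(supply=140.0)
    ```

    A P(nc) value estimated this way (or via FORM) can then enter a safety performance function as an explanatory risk covariate, as the paper proposes.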

  16. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
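    A minimal sketch of the UGF idea referenced above: each component's multi-state behavior is a map from performance level to probability, and structure operators combine the maps. The two-state services and the demand threshold below are hypothetical:

    ```python
    from collections import defaultdict
    from itertools import product

    def compose(u1, u2, op):
        """Combine two UGFs (dicts mapping performance -> probability)
        with a structure operator `op` (min for series, sum for parallel)."""
        out = defaultdict(float)
        for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
            out[op(g1, g2)] += p1 * p2
        return dict(out)

    def reliability(u, demand):
        """P(system performance >= demand)."""
        return sum(p for g, p in u.items() if g >= demand)

    # Two hypothetical services, each with a working and a failed state.
    s1 = {100: 0.9, 0: 0.1}
    s2 = {80: 0.95, 0: 0.05}
    series = compose(s1, s2, min)       # throughput limited by the slower stage
    r = reliability(series, demand=50)  # = 0.9 * 0.95
    ```

    A genetic algorithm, as in the paper, would then search over redundancy structures (which components to replicate, combined with `sum` in parallel) to maximize such a reliability measure under cost constraints.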

  17. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    In a company that produces refined sugar, the production floor has not reached the target level of critical machine availability because the machines often suffer damage (breakdowns). This results in sudden losses of production time and production opportunities. The problem can be addressed with the Reliability Engineering method, in which a statistical approach to historical failure data is used to identify the pattern of the distribution. The method can provide values for the reliability, failure rate, and availability level of a machine during the scheduled maintenance interval. Distribution tests on the time-between-failures (MTTF) data show that the flexible hose component follows a lognormal distribution while the teflon cone lifting component follows a Weibull distribution, and distribution tests on the mean time to repair (MTTR) data show that the flexible hose component follows an exponential distribution while the teflon cone lifting component follows a Weibull distribution. For the flexible hose component on a replacement schedule of every 720 hours, the obtained reliability is 0.2451 and the availability is 0.9960, while for the critical teflon cone lifting component on a replacement schedule of every 1944 hours, the obtained reliability is 0.4083 and the availability is 0.9927.
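    The quantities reported above (reliability at a replacement interval and steady-state availability) follow from standard formulas. A sketch for the Weibull case, with hypothetical fitted parameters rather than the paper's values:

    ```python
    import math

    def weibull_reliability(t, beta, eta):
        """R(t) = exp(-(t/eta)^beta) for a Weibull time-to-failure model
        with shape beta and scale eta."""
        return math.exp(-((t / eta) ** beta))

    def availability(mttf, mttr):
        """Steady-state availability = MTTF / (MTTF + MTTR)."""
        return mttf / (mttf + mttr)

    # Hypothetical fitted parameters (illustrative, not from the paper):
    r = weibull_reliability(t=720.0, beta=1.8, eta=1200.0)  # survival at 720 h
    a = availability(mttf=1100.0, mttr=4.0)                 # short repairs -> high A
    ```

    The same structure applies to the lognormal and exponential fits reported in the abstract, with the corresponding survival functions substituted for `weibull_reliability`.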

  18. Technical Notes on the Multifactor Method of Elementary School Closing.

    ERIC Educational Resources Information Center

    Puleo, Vincent T.

    This report provides preliminary technical information on a method for analyzing the factors involved in the closing of elementary schools. Included is a presentation of data and a brief discussion bearing on descriptive statistics, reliability, and validity. An intercorrelation matrix is also examined. The method employs 9 factors that have a…

  19. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  20. Estimation of Environment-Related Properties of Chemicals for Design of Sustainable Processes: Development of Group-Contribution+ (GC+) Property Models and Uncertainty Analysis

    EPA Science Inventory

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncert...

  1. Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis

    PubMed Central

    Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan

    2016-01-01

    Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
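    The D-S fusion step described above relies on Dempster's rule of combination. A minimal sketch for two basic probability assignments follows; the sensor masses are illustrative and the Z-number modeling step is omitted:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two basic probability
        assignments, each a dict mapping frozenset(hypotheses) -> mass."""
        combined, conflict = {}, 0.0
        for (a, pa), (b, pb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb  # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: sources are incompatible")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Two sensors reporting on fault hypotheses F1 and F2 (illustrative masses).
    F1, F2 = frozenset({"F1"}), frozenset({"F2"})
    BOTH = frozenset({"F1", "F2"})
    m1 = {F1: 0.7, BOTH: 0.3}
    m2 = {F1: 0.6, F2: 0.2, BOTH: 0.2}
    fused = dempster_combine(m1, m2)
    ```

    In the paper's scheme, the reliability component B of each Z-number would first discount the sensor's mass function before rules like this fuse the evidence.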

  2. Mission Reliability Estimation for Repairable Robot Teams

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. 
This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.

  3. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  4. Who Needs Replication?

    ERIC Educational Resources Information Center

    Porte, Graeme

    2013-01-01

    In this paper, the editor of a recent Cambridge University Press book on research methods discusses replicating previous key studies to throw more light on their reliability and generalizability. Replication research is presented as an accepted method of validating previous research by providing comparability between the original and replicated…

  5. An algebraic equation solution process formulated in anticipation of banded linear equations.

    DOT National Transportation Integrated Search

    1971-01-01

    A general method for the solution of large, sparsely banded, positive-definite, coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...

  6. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
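    One common SSRT calculation method of the kind compared above is the integration method: the go-RT distribution is integrated up to the observed response rate on stop trials, and the mean stop-signal delay (SSD) is subtracted. A sketch with hypothetical session data (the study evaluates several such estimation variants):

    ```python
    def ssrt_integration(go_rts, ssds, p_respond):
        """Integration-method SSRT estimate: take the go RT at the
        p(respond|signal) quantile and subtract the mean SSD."""
        rts = sorted(go_rts)
        idx = min(int(p_respond * len(rts)), len(rts) - 1)  # nth fastest go RT
        mean_ssd = sum(ssds) / len(ssds)
        return rts[idx] - mean_ssd

    # Hypothetical session (ms): go RTs, SSDs, 48% response rate on stop trials.
    go_rts = [380, 420, 455, 470, 500, 510, 540, 560, 600, 650]
    ssds = [150, 200, 250, 300]
    ssrt = ssrt_integration(go_rts, ssds, p_respond=0.48)
    ```

    Averaging such per-session estimates across all available sessions, with lenient outlier exclusion, is the approach the pooled analysis found to yield reliable SSRTs.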

  7. Adapting Human Reliability Analysis from Nuclear Power to Oil and Gas Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids

    2015-09-01

    Human reliability analysis (HRA), as currently used in risk assessments, largely derives its methods and guidance from application in the nuclear energy domain. While there are many similarities between nuclear energy and other safety-critical domains such as oil and gas, there remain clear differences. This paper provides an overview of the HRA state of the practice in nuclear energy and then describes areas where refinements to the methods may be necessary to capture the operational context of oil and gas. Many key distinctions important to nuclear energy HRA, such as Level 1 vs. Level 2 analysis, may prove insignificant for oil and gas applications. On the other hand, existing HRA methods may not be sensitive enough to factors like the extensive use of digital controls in oil and gas. This paper provides an overview of these considerations to assist in the adaptation of existing nuclear-centered HRA methods to the petroleum sector.

  8. Inner experience in the scanner: can high fidelity apprehensions of inner experience be integrated with fMRI?

    PubMed Central

    Kühn, Simone; Fernyhough, Charles; Alderson-Day, Benjamin; Hurlburt, Russell T.

    2014-01-01

    To provide full accounts of human experience and behavior, research in cognitive neuroscience must be linked to inner experience, but introspective reports of inner experience have often been found to be unreliable. The present case study aimed at providing proof of principle that introspection using one method, descriptive experience sampling (DES), can be reliably integrated with fMRI. A participant was trained in the DES method, followed by nine sessions of sampling within an MRI scanner. During moments where the DES interview revealed ongoing inner speaking, fMRI data reliably showed activation in classic speech processing areas including left inferior frontal gyrus. Further, the fMRI data validated the participant’s DES observations of the experiential distinction between inner speaking and innerly hearing her own voice. These results highlight the precision and validity of the DES method as a technique of exploring inner experience and the utility of combining such methods with fMRI. PMID:25538649

  9. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo-two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
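    The modulo-two addition at the heart of the proposal is ordinary XOR. The sketch below shows only that principle; the key handling is simplified, and cycling a short IV over a longer message like this is not secure in practice:

    ```python
    import os

    def xor_stream(data: bytes, key: bytes) -> bytes:
        """Modulo-two (XOR) addition of a keystream against the message.
        Applying the same operation twice recovers the plaintext."""
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    # Per-transmission initialization vector acting as the shared secret.
    iv = os.urandom(16)
    message = b"confidential ring traffic"
    ciphertext = xor_stream(message, iv)
    recovered = xor_stream(ciphertext, iv)  # XOR is its own inverse
    ```

    The self-inverse property is what lets the receiving station, which alone shares the per-transmission IV, decrypt with the same operation the transmitter used.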

  10. Validity and inter-observer reliability of subjective hand-arm vibration assessments.

    PubMed

    Coenen, Pieter; Formanoy, Margriet; Douwes, Marjolein; Bosch, Tim; de Kraker, Heleen

    2014-07-01

    Exposure to mechanical vibrations at work (e.g., due to handling powered tools) is a potential occupational risk, as it may cause upper extremity complaints. However, reliable and valid assessment methods for vibration exposure at work are lacking. Measuring hand-arm vibration objectively is often difficult and expensive, while the often-used information provided by manufacturers lacks detail. Therefore, a subjective hand-arm vibration assessment method was tested for validity and inter-observer reliability. In an experimental protocol, sixteen tasks handling powered tools were executed by two workers. Hand-arm vibration was assessed subjectively by 16 observers according to the proposed subjective assessment method. As a gold-standard reference, hand-arm vibration was measured objectively using a vibration measurement device. Weighted κ values were calculated to assess validity, and intra-class correlation coefficients (ICCs) were calculated to assess inter-observer reliability. The inter-observer reliability of the subjective assessments, depicting the agreement among observers, can be expressed by an ICC of 0.708 (0.511-0.873). The validity of the subjective assessments compared to the gold-standard reference can be expressed by a weighted κ of 0.535 (0.285-0.785). Moreover, the percentage of exact agreement between the subjective assessment and the objective measurement was relatively low (i.e., 52% of all tasks). This study shows that subjectively assessed hand-arm vibrations are fairly reliable among observers and moderately valid. This assessment method is a first attempt to use subjective risk assessments of hand-arm vibration. Although this assessment method could benefit from future improvement, it can be of use in future studies and in field-based ergonomic assessments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
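    The weighted κ used above for agreement on ordinal exposure ratings can be computed directly. A sketch with linear disagreement weights and hypothetical ratings:

    ```python
    def weighted_kappa(ratings_a, ratings_b, categories):
        """Linearly weighted Cohen's kappa for two raters over ordered
        categories: kappa_w = 1 - sum(w*O) / sum(w*E), with disagreement
        weights w_ij = |i - j| / (k - 1)."""
        k = len(categories)
        index = {c: i for i, c in enumerate(categories)}
        n = len(ratings_a)
        obs = [[0.0] * k for _ in range(k)]  # observed proportions
        for a, b in zip(ratings_a, ratings_b):
            obs[index[a]][index[b]] += 1.0 / n
        row = [sum(obs[i][j] for j in range(k)) for i in range(k)]
        col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
        num = den = 0.0
        for i in range(k):
            for j in range(k):
                w = abs(i - j) / (k - 1)
                num += w * obs[i][j]        # observed weighted disagreement
                den += w * row[i] * col[j]  # chance-expected disagreement
        return 1.0 - num / den

    # Two observers rating vibration exposure on a 3-level ordinal scale.
    a = ["low", "low", "med", "high", "med", "low", "high", "med"]
    b = ["low", "med", "med", "high", "med", "low", "med", "med"]
    kappa = weighted_kappa(a, b, ["low", "med", "high"])
    ```

    Linear weights penalize a low-vs-high disagreement twice as heavily as an adjacent-category one, which suits ordinal exposure scales like the one in this study.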

  11. Intrajudge and Interjudge Reliability of the Stuttering Severity Instrument-Fourth Edition.

    PubMed

    Davidow, Jason H; Scott, Kathleen A

    2017-11-08

    The Stuttering Severity Instrument (SSI) is a tool used to measure the severity of stuttering. Previous versions of the instrument have known limitations (e.g., Lewis, 1995). The present study examined the intra- and interjudge reliability of the newest version, the Stuttering Severity Instrument-Fourth Edition (SSI-4) (Riley, 2009). Twelve judges who were trained on the SSI-4 protocol participated. Judges collected SSI-4 data while viewing 4 videos of adults who stutter at Time 1 and 4 weeks later at Time 2. Data were analyzed for intra- and interjudge reliability of the SSI-4 subscores (for Frequency, Duration, and Physical Concomitants), total score, and final severity rating. Intra- and interjudge reliability across the subscores and total score concurred with the manual's reported reliability when reliability was calculated using the methods described in the manual. New calculations of judge agreement produced different values from those in the manual-for the 3 subscores, total score, and final severity rating-and provided data absent from the manual. Clinicians and researchers who use the SSI-4 should carefully consider the limitations of the instrument. Investigation into the multitasking demands of the instrument may provide information on whether separating the collection of data for specific variables will improve intra- and interjudge reliability of those variables.

  12. Probabilistic fatigue methodology for six nines reliability

    NASA Technical Reports Server (NTRS)

    Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf

    1990-01-01

    Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin which was established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are in the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods of defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.

  13. FY12 End of Year Report for NEPP DDR2 Reliability

    NASA Technical Reports Server (NTRS)

    Guertin, Steven M.

    2013-01-01

    This document reports the status of the NASA Electronic Parts and Packaging (NEPP) Double Data Rate 2 (DDR2) Reliability effort for FY2012. The task expanded the focus of evaluating reliability effects targeted for device examination. FY11 work highlighted the need to test many more parts and to examine more operating conditions, in order to provide useful recommendations for NASA users of these devices. This year's efforts focused on development of test capabilities, particularly focusing on those that can be used to determine overall lot quality and identify outlier devices, and test methods that can be employed on components for flight use. Flight acceptance of components potentially includes considerable time for up-screening (though this time may not currently be used for much reliability testing). Manufacturers are much more knowledgeable about the relevant reliability mechanisms for each of their devices. We are not in a position to know what the appropriate reliability tests are for any given device, so although reliability testing could be focused for a given device, we are forced to perform a large campaign of reliability tests to identify devices with degraded reliability. With the available up-screening time for NASA parts, it is possible to run many device performance studies. This includes verification of basic datasheet characteristics. Furthermore, it is possible to perform significant pattern sensitivity studies. By doing these studies we can establish higher reliability of flight components. In order to develop these approaches, it is necessary to develop test capability that can identify reliability outliers. To do this we must test many devices to ensure outliers are in the sample, and we must develop characterization capability to measure many different parameters. For FY12 we increased capability for reliability characterization and sample size. 
We increased sample size this year by moving from loose devices to dual inline memory modules (DIMMs) with an approximate reduction of 20 to 50 times in terms of per device under test (DUT) cost. By increasing sample size we have improved our ability to characterize devices that may be considered reliability outliers. This report provides an update on the effort to improve DDR2 testing capability. Although focused on DDR2, the methods being used can be extended to DDR and DDR3 with relative ease.

  14. Online registration of monthly sports participation after anterior cruciate ligament injury: a reliability and validity study

    PubMed Central

    Grindem, Hege; Eitzen, Ingrid; Snyder-Mackler, Lynn; Risberg, May Arna

    2013-01-01

    Background Current methods measuring sports activity after anterior cruciate ligament (ACL) injury are commonly restricted to the most knee-demanding sport, and do not consider participation in multiple sports. We therefore developed an online activity survey to prospectively record monthly participation in all major sports relevant to our patient-group. Objective To assess the reliability, content validity, and concurrent validity of the survey, and evaluate if it provided more complete data on sports participation than a routine activity questionnaire. Methods One hundred and forty-five consecutively included ACL-injured patients were eligible for the reliability study. The retest of the online activity survey was performed two days after the test response had been recorded. A subsample of 88 ACL-reconstructed patients were included in the validity study. The ACL-reconstructed patients completed the online activity survey from the first to the twelfth postoperative month, and a routine activity questionnaire 6 and 12 months postoperatively. Results The online activity survey was highly reliable (κ ranging from 0.81 to 1). It contained all the common sports reported on the routine activity questionnaire. There was substantial agreement between the two methods on return to preinjury main sport (κ = 0.71 and 0.74 at 6 and 12 months postoperatively). The online activity survey revealed that a significantly higher number of patients reported to participate in running, cycling and strength training, and patients reported to participate in a greater number of sports. Conclusion The online activity survey is a highly reliable way of recording detailed changes in sports participation after ACL injury. The findings of this study support the content and concurrent validity of the survey, and suggest that the online activity survey can provide more complete data on sports participation than a routine activity questionnaire. PMID:23645830

  15. The Effect of Incorrect Reliability Information on Expectations, Perceptions, and Use of Automation.

    PubMed

    Barg-Walkow, Laura H; Rogers, Wendy A

    2016-03-01

    We examined how providing artificially high or low statements about automation reliability affected expectations, perceptions, and use of automation over time. One common method of introducing automation is providing explicit statements about the automation's capabilities. Research is needed to understand how expectations from such introductions affect perceptions and use of automation. Explicit-statement introductions were manipulated to set higher-than (90%), same-as (75%), or lower-than (60%) levels of expectations in a dual-task scenario with 75% reliable automation. Two experiments were conducted to assess expectations, perceptions, compliance, reliance, and task performance over (a) 2 days and (b) 4 days. The baseline assessments showed initial expectations of automation reliability matched introduced levels of expectation. For the duration of each experiment, the lower-than groups' perceptions were lower than the actual automation reliability. However, the higher-than groups' perceptions were no different from actual automation reliability after Day 1 in either study. There were few differences between groups for automation use, which generally stayed the same or increased with experience using the system. Introductory statements describing artificially low automation reliability have a long-lasting impact on perceptions about automation performance. Statements including incorrect automation reliability do not appear to affect use of automation. Introductions should be designed according to desired outcomes for expectations, perceptions, and use of the automation. Low expectations have long-lasting effects. © 2015, Human Factors and Ergonomics Society.

  16. Reliability and validity in measurement of true humeral retroversion by a three-dimensional cylinder fitting method.

    PubMed

    Saka, Masayuki; Yamauchi, Hiroki; Hoshi, Kenji; Yoshioka, Toru; Hamada, Hidetoshi; Gamada, Kazuyoshi

    2015-05-01

    Humeral retroversion is defined as the orientation of the humeral head relative to the distal humerus. Because none of the previous methods used to measure humeral retroversion strictly follow this definition, values obtained by these techniques vary and may be biased by morphologic variations of the humerus. The purpose of this study was 2-fold: to validate a method to define the axis of the distal humerus with a virtual cylinder and to establish the reliability of 3-dimensional (3D) measurement of humeral retroversion by this cylinder fitting method. Humeral retroversion in 14 baseball players (28 humeri) was measured by the 3D cylinder fitting method. The root mean square error was calculated to compare values obtained by a single tester and by 2 different testers using the embedded coordinate system. To establish the reliability, intraclass correlation coefficient (ICC) and precision (standard error of measurement [SEM]) were calculated. The root mean square errors for the humeral coordinate system were <1.0 mm/1.0° for comparison of all translations/rotations obtained by a single tester and <1.0 mm/2.0° for comparison obtained by 2 different testers. Assessment of reliability and precision of the 3D measurement of retroversion yielded an intratester ICC of 0.99 (SEM, 1.0°) and intertester ICC of 0.96 (SEM, 2.8°). The error in measurements obtained by a distal humerus cylinder fitting method was small enough not to affect retroversion measurement. The 3D measurement of retroversion by this method provides excellent intratester and intertester reliability. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
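    The ICC and SEM reported above can be reproduced from a subjects-by-raters table of measurements. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measures) with SEM = SD·√(1 − ICC); it is not the study's code, and the two testers' angles are hypothetical.

```python
# ICC(2,1) and standard error of measurement (SEM) from a subjects x raters
# table, computed from the two-way ANOVA mean squares.

from statistics import pstdev

def icc_2_1(rows):
    """rows: one list per subject, one measurement per rater."""
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(r[j] for r in rows) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    sse = sum(
        (rows[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    mse = sse / ((n - 1) * (k - 1))                                 # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical retroversion angles (degrees) from two testers.
rows = [[30.0, 31.0], [25.5, 25.0], [40.2, 41.0],
        [33.1, 32.5], [28.4, 29.0], [36.7, 36.0]]
icc = icc_2_1(rows)
sem = pstdev([x for r in rows for x in r]) * (1 - icc) ** 0.5
print(round(icc, 3), round(sem, 2))
```

    With closely agreeing testers and a wide between-subject spread, the ICC approaches 1 and the SEM stays small, mirroring the pattern reported in the abstract.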

  17. FLiGS Score: A New Method of Outcome Assessment for Lip Carcinoma–Treated Patients

    PubMed Central

    Grassi, Rita; Toia, Francesca; Di Rosa, Luigi; Cordova, Adriana

    2015-01-01

    Background: Lip cancer and its treatment have considerable functional and cosmetic effects with resultant nutritional and physical detriments. As we continue to investigate new treatment regimens, we are simultaneously required to assess postoperative outcomes to design interventions that lessen the adverse impact of this disease process. We wish to introduce the Functional Lip Glasgow Scale (FLiGS) score as a new method of outcome assessment to measure the effect of lip cancer and its treatment on patients’ daily functioning. Methods: Fifty patients affected by lip squamous cell carcinoma were recruited between 2009 and 2013. Patients were asked to fill in the FLiGS questionnaire before surgery and 1 month, 6 months, and 1 year after surgery. The subscores were used to calculate a total FLiGS score of global oral disability. Statistical analysis was performed to test validity and reliability. Results: FLiGS scores improved significantly from preoperative to 12-month postoperative values (P < 0.001). Statistical evidence of validity was provided through Spearman correlation coefficients (rs), which were >0.30 for all surveys (P < 0.001). FLiGS score reliability was shown through examination of internal consistency and test-retest reliability. Conclusions: The FLiGS score is a simple way of assessing functional impairment related to lip cancer before and after surgery; it is sensitive, valid, reliable, and clinically relevant: it provides useful information to orient the physician in postoperative management and in the rehabilitation program. PMID:26034652

  18. Sediment transport in forested head water catchments - Calibration and validation of a soil erosion and landscape evolution model

    NASA Astrophysics Data System (ADS)

    Hancock, G. R.; Webb, A. A.; Turner, L.

    2017-11-01

    Sediment transport and soil erosion can be determined by a variety of field and modelling approaches. Computer-based soil erosion and landscape evolution models (LEMs) offer the potential to be reliable assessment and prediction tools. An advantage of such models is that they provide both erosion and deposition patterns as well as total catchment sediment output. However, before use, like all models, they require calibration and validation. In recent years LEMs have been used for a variety of both natural and disturbed landscape assessments. However, these models have not been evaluated for their reliability in steep forested catchments. Here, the SIBERIA LEM is calibrated and evaluated for its reliability for two steep forested catchments in south-eastern Australia. The model is independently calibrated using two methods: firstly, hydrology and sediment transport parameters are inferred from catchment geomorphology and soil properties; secondly, they are inferred from catchment sediment transport and discharge data. The results demonstrate that both calibration methods provide similar parameters and reliable modelled sediment transport output. A sensitivity study of the input parameters demonstrates the model's sensitivity to correct parameterisation and also how the model could be used to assess potential timber harvesting as well as the removal of vegetation by fire.

  19. Separating Common from Unique Variance Within Emotional Distress: An Examination of Reliability and Relations to Worry.

    PubMed

    Marshall, Andrew J; Evanovich, Emma K; David, Sarah Jo; Mumma, Gregory H

    2018-01-17

    High comorbidity rates among emotional disorders have led researchers to examine transdiagnostic factors that may contribute to shared psychopathology. Bifactor models provide a unique method for examining transdiagnostic variables by modelling the common and unique factors within measures. Previous findings suggest that the bifactor model of the Depression Anxiety and Stress Scale (DASS) may provide a method for examining transdiagnostic factors within emotional disorders. This study aimed to replicate the bifactor model of the DASS, a multidimensional measure of psychological distress, within a US adult sample and provide initial estimates of the reliability of the general and domain-specific factors. Furthermore, this study hypothesized that Worry, a theorized transdiagnostic variable, would show stronger relations to general emotional distress than domain-specific subscales. Confirmatory factor analysis was used to evaluate the bifactor model structure of the DASS in 456 US adult participants (279 females and 177 males, mean age 35.9 years) recruited online. The DASS bifactor model fitted well (CFI = 0.98; RMSEA = 0.05). The General Emotional Distress factor accounted for most of the reliable variance in item scores. Domain-specific subscales accounted for modest portions of reliable variance in items after accounting for the general scale. Finally, structural equation modelling indicated that Worry was strongly predicted by the General Emotional Distress factor. The DASS bifactor model is generalizable to a US community sample and General Emotional Distress, but not domain-specific factors, strongly predict the transdiagnostic variable Worry.

  20. 40 CFR 6.204 - Categorical exclusions and extraordinary circumstances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... procedures for sustainable or “green” procurement) and contracting actions necessary to support the normal... enclosed building, provided that reliable and scientifically-sound methods are used to appropriately...

  1. EVALUATION OF STATIONARY SOURCE PARTICULATE MEASUREMENT METHODS. VOLUME II. OIL-FIRED STEAM GENERATORS

    EPA Science Inventory

    An experimental study was conducted to determine the reliability of the Method 5 procedure for providing particulate emission data from an oil-fired steam generator. The study was concerned with determining whether any 'false' particulate resulted from the collection process of f...

  2. Compendium of Mechanical Limit-States

    NASA Technical Reports Server (NTRS)

    Kowal, Michael

    1996-01-01

    A compendium was compiled and is described to provide a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The different limit-state relationships can be used to analyze the reliability of a candidate mechanical system. In determining the limit-states to be included in the compendium, a comprehensive listing of the possible failure modes that could affect mechanical systems reliability was generated. Previous literature defining mechanical modes of failure was studied, and cited failure modes were included. From this, classifications for failure modes were derived and are described in some detail.

  3. Sugarbeet root maggot resistance from a red globe-shaped beet (PI 179180)

    USDA-ARS?s Scientific Manuscript database

    Sugarbeet root maggot (Tetanops myopaeformis) is a major insect pest of sugarbeet (Beta vulgaris) in many North American production areas. Chemical insecticides have been the primary control method. Host-plant resistance that provides consistent reliable control would provide both an economical and ...

  4. Reliability enhancement of APR + diverse protection system regarding common cause failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Y. G.; Kim, Y. M.; Yim, H. S.

    2012-07-01

    The Advanced Power Reactor Plus (APR+) nuclear power plant design has been developed on the basis of the APR1400 (Advanced Power Reactor 1400 MWe) to further enhance safety and economics. For the mitigation of Anticipated Transients Without Scram (ATWS) as well as Common Cause Failures (CCF) within the Plant Protection System (PPS) and the Emergency Safety Feature - Component Control System (ESF-CCS), several design improvement features have been implemented for the Diverse Protection System (DPS) of the APR+ plant. As compared to the APR1400 DPS design, the APR+ DPS has been designed to provide the Safety Injection Actuation Signal (SIAS) considering a large break LOCA accident concurrent with the CCF. Additionally, several design improvement features, such as a channel structure with redundant processing modules, and changes of system communication methods and auto-system test methods, are introduced to enhance the functional reliability of the DPS. Therefore, it is expected that the APR+ DPS can provide enhanced safety and reliability regarding possible CCF in the safety-grade I&C systems as well as the DPS itself. (authors)

  5. Temporal Lobe and “Default” Hemodynamic Brain Modes Discriminate Between Schizophrenia and Bipolar Disorder

    PubMed Central

    Calhoun, Vince D.; Maciejewski, Paul K.; Pearlson, Godfrey D.; Kiehl, Kent A.

    2009-01-01

    Schizophrenia and bipolar disorder are currently diagnosed on the basis of psychiatric symptoms and longitudinal course. The determination of a reliable, biologically-based diagnostic indicator of these diseases (a biomarker) could provide the groundwork for developing more rigorous tools for differential diagnosis and treatment assignment. Recently, methods have been used to identify distinct sets of brain regions or “spatial modes” exhibiting temporally coherent brain activity. Using functional magnetic resonance imaging (fMRI) data and a multivariate analysis method, independent component analysis, we combined the temporal lobe and the default modes to discriminate subjects with bipolar disorder, chronic schizophrenia, and healthy controls. Temporal lobe and default mode networks were reliably identified in all participants. Classification results on an independent set of individuals revealed an average sensitivity and specificity of 90 and 95%, respectively. The use of coherent brain networks such as the temporal lobe and default mode networks may provide a more reliable measure of disease state than task-correlated fMRI activity. A combination of two such hemodynamic brain networks shows promise as a biomarker for schizophrenia and bipolar disorder. PMID:17894392

  6. Temporal lobe and "default" hemodynamic brain modes discriminate between schizophrenia and bipolar disorder.

    PubMed

    Calhoun, Vince D; Maciejewski, Paul K; Pearlson, Godfrey D; Kiehl, Kent A

    2008-11-01

    Schizophrenia and bipolar disorder are currently diagnosed on the basis of psychiatric symptoms and longitudinal course. The determination of a reliable, biologically-based diagnostic indicator of these diseases (a biomarker) could provide the groundwork for developing more rigorous tools for differential diagnosis and treatment assignment. Recently, methods have been used to identify distinct sets of brain regions or "spatial modes" exhibiting temporally coherent brain activity. Using functional magnetic resonance imaging (fMRI) data and a multivariate analysis method, independent component analysis, we combined the temporal lobe and the default modes to discriminate subjects with bipolar disorder, chronic schizophrenia, and healthy controls. Temporal lobe and default mode networks were reliably identified in all participants. Classification results on an independent set of individuals revealed an average sensitivity and specificity of 90 and 95%, respectively. The use of coherent brain networks such as the temporal lobe and default mode networks may provide a more reliable measure of disease state than task-correlated fMRI activity. A combination of two such hemodynamic brain networks shows promise as a biomarker for schizophrenia and bipolar disorder.

  7. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    PubMed

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with Musculoskeletal Disorders (MSDs). Several methods have been developed to assess computer-work risk factors related to MSDs. This review aims to give an overview of the current pen-and-paper-based observational techniques available for assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors, reliability and validity of pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; these proved reliable and were rated as moderate to good. For validity, only four of the seven methods had been tested, and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and office environment, at office workstations and computer work. Although proper validation of exposure assessment techniques is the most important factor in developing a tool, some existing observational methods have not been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  8. Monitoring visitor use in backcountry and wilderness: a review of methods

    Treesearch

    Steven J. Hollenhorst; Steven A. Whisman; Alan W. Ewert

    1992-01-01

    Obtaining accurate and usable visitor counts in backcountry and wilderness settings continues to be problematic for resource managers because use of these areas is dispersed and costs can be prohibitively high. An overview of the available methods for obtaining reliable data on recreation use levels is provided. Monitoring methods were compared and selection criteria...

  9. A rapid method to assess grape rust mites on leaves and observations from case studies in western Oregon vineyards

    USDA-ARS?s Scientific Manuscript database

    A rapid method for extracting eriophyoid mites was adapted from previous studies to provide growers and IPM consultants with a practical, efficient, and reliable tool to monitor for rust mites in vineyards. The rinse in bag (RIB) method allows quick extraction of mites from collected plant parts (sh...

  10. Multilevel metallization method for fabricating a metal oxide semiconductor device

    NASA Technical Reports Server (NTRS)

    Hollis, B. R., Jr.; Feltner, W. R.; Bouldin, D. L.; Routh, D. E. (Inventor)

    1978-01-01

    An improved method is described of constructing a metal oxide semiconductor device having multiple layers of metal deposited by dc magnetron sputtering at low dc voltages and low substrate temperatures. The method provides multilevel interconnections and cross over between individual circuit elements in integrated circuits without significantly reducing the reliability or seriously affecting the yield.

  11. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
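    As a loose illustration of automated spectral identification with a confidence score, the sketch below matches an unknown spectrum against a small reference library by cosine similarity and reports the best match with its score. This is a deliberately simplified stand-in: the paper's actual method uses PCA and ICA, and every spectrum and pigment name below is hypothetical.

```python
# Nearest-reference spectral matching by cosine similarity, with the
# similarity score acting as a crude reliability factor for the match.

from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length spectra."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hypothetical reference library of normalized Raman intensities.
library = {
    "pigment_A": [0.1, 0.9, 0.2, 0.0, 0.4],
    "pigment_B": [0.8, 0.1, 0.1, 0.7, 0.0],
}

def identify(spectrum):
    """Return the best-matching reference and its similarity score."""
    scores = {name: cosine(spectrum, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

name, score = identify([0.12, 0.85, 0.25, 0.05, 0.38])
print(name, round(score, 3))
```

    In the real system the score would be derived from the PCA/ICA decomposition rather than raw similarity, but the idea of returning an identification together with a reliability factor is the same.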

  12. The sterilization of endodontic hand files.

    PubMed

    Hurtt, C A; Rossman, L E

    1996-06-01

    Several different methods of file sterilization were analyzed to determine the best method of providing complete file sterility, including the metal shaft and plastic handle. Six test groups of 15 files were studied using Bacillus stearothermophilus as the test organism. Groups were "sterilized" by glutaraldehyde immersion, steam autoclaving, and various techniques of salt sterilization. Only proper steam autoclaving reliably produced completely sterile instruments. Salt sterilization and glutaraldehyde solutions may not be adequate sterilization methods for endodontic hand files and should not be relied on to provide completely sterile instruments.

  13. High sensitivity leak detection method and apparatus

    DOEpatents

    Myneni, Ganapatic R.

    1994-01-01

    An improved leak detection method is provided that utilizes the cyclic adsorption and desorption of accumulated helium on a non-porous metallic surface. The method provides reliable leak detection at superfluid helium temperatures. The zero drift that is associated with residual gas analyzers in common leak detectors is virtually eliminated by utilizing a time integration technique. The sensitivity of the apparatus of this disclosure is capable of detecting leaks as small as 1 × 10⁻¹⁸ atm cc sec⁻¹.

  14. High sensitivity leak detection method and apparatus

    DOEpatents

    Myneni, G.R.

    1994-09-06

    An improved leak detection method is provided that utilizes the cyclic adsorption and desorption of accumulated helium on a non-porous metallic surface. The method provides reliable leak detection at superfluid helium temperatures. The zero drift that is associated with residual gas analyzers in common leak detectors is virtually eliminated by utilizing a time integration technique. The sensitivity of the apparatus of this disclosure is capable of detecting leaks as small as 1 × 10⁻¹⁸ atm cc sec⁻¹. 2 figs.

  15. Use of Internal Consistency Coefficients for Estimating Reliability of Experimental Tasks Scores

    PubMed Central

    Green, Samuel B.; Yang, Yanyun; Alt, Mary; Brinkley, Shara; Gray, Shelley; Hogan, Tiffany; Cowan, Nelson

    2017-01-01

    Reliabilities of scores for experimental tasks are likely to differ from one study to another to the extent that the task stimuli change, the number of trials varies, the type of individuals taking the task changes, the administration conditions are altered, or the focal task variable differs. Given reliabilities vary as a function of the design of these tasks and the characteristics of the individuals taking them, making inferences about the reliability of scores in an ongoing study based on reliability estimates from prior studies is precarious. Thus, it would be advantageous to estimate reliability based on data from the ongoing study. We argue that internal consistency estimates of reliability are underutilized for experimental task data and in many applications could provide this information using a single administration of a task. We discuss different methods for computing internal consistency estimates with a generalized coefficient alpha and the conditions under which these estimates are accurate. We illustrate use of these coefficients using data for three different tasks. PMID:26546100
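    The classic internal consistency estimate the abstract builds on, coefficient (Cronbach's) alpha, can be computed from a single administration of a task. Below is a minimal sketch in plain Python, not the authors' generalized coefficient; the item scores are hypothetical.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var_sum = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical scores: three items, six respondents.
items = [
    [2, 4, 3, 5, 1, 4],  # item 1
    [3, 4, 2, 5, 2, 3],  # item 2
    [2, 5, 3, 4, 1, 4],  # item 3
]
print(round(cronbach_alpha(items), 2))  # → 0.92
```

    Because the estimate comes from the ongoing study's own data, it reflects that study's stimuli, trial count, and sample, which is exactly the advantage the authors argue for.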

  16. An Evaluation Method of Equipment Reliability Configuration Management

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan

    2018-01-01

    At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capabilities of an equipment development company. Reliability is not only designed in, but also achieved through management. This paper evaluates reliability management capabilities using a reliability configuration capability maturity model (RCM-CMM) evaluation method.

  17. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  18. To the question about the states of workability for automatic control systems with complicated structure

    NASA Astrophysics Data System (ADS)

    Kuznetsov, P. A.; Kovalev, I. V.; Losev, V. V.; Kalinin, A. O.; Murygin, A. V.

    2016-04-01

    The article discusses the reliability of automated control systems and analyzes an approach to classifying system health states. This may be the traditional binary approach, operating with the concept of "serviceability", or another way of estimating the system state. The article presents one such option, providing a selective, component-by-component evaluation of the reliability of the entire system. Descriptions of various automatic control systems and their elements are introduced from the point of view of health and risk, together with a mathematical method for determining an object's transition from state to state; the states differ from each other in how the objective function is realized. The interplay of elements in different states is explored, as is the aggregate state of elements connected in series or in parallel. Tables of the various logic states are given, along with the principles for calculating them under series and parallel connection. Through simulation, the proposed approach is illustrated by finding the probability of the system reaching a given state for parallel- and serially-connected elements with differing probabilities of moving from state to state. In general, the material of this article will be useful for analyzing the reliability of automated control systems and for engineering highly reliable systems. The proposed mechanism for determining the state of the system provides more detailed information about it and allows a selective approach to the reliability of the system as a whole. Such detailed results when assessing the reliability of automated control systems allow the engineer to make an informed decision when designing means of improving reliability.
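    The standard reliability formulas for series and parallel connections that underlie this kind of analysis can be sketched in a few lines. This is a generic textbook calculation, not the article's simulation; the component reliabilities below are hypothetical.

```python
# Reliability of series and parallel component arrangements.
# A series system works only if every component works; a parallel
# (redundant) system fails only if every component fails.

from functools import reduce

def series_reliability(ps):
    """Product of component reliabilities."""
    return reduce(lambda acc, p: acc * p, ps, 1.0)

def parallel_reliability(ps):
    """One minus the product of component unreliabilities."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

# Hypothetical per-component probabilities of being in a working state.
ps = [0.95, 0.90, 0.99]
print(round(series_reliability(ps), 5))    # series is weaker than any part
print(round(parallel_reliability(ps), 5))  # redundancy is stronger than any part
```

    The contrast between the two numbers is the core argument for redundancy in highly reliable control systems: the same components yield a far higher aggregate reliability when connected in parallel.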

  19. The Alzheimer's Disease Knowledge Scale: Development and Psychometric Properties

    ERIC Educational Resources Information Center

    Carpenter, Brian D.; Balsis, Steve; Otilingam, Poorni G.; Hanson, Priya K.; Gatz, Margaret

    2009-01-01

    Purpose: This study provides preliminary evidence for the acceptability, reliability, and validity of the new Alzheimer's Disease Knowledge Scale (ADKS), a content and psychometric update to the Alzheimer's Disease Knowledge Test. Design and Methods: Traditional scale development methods were used to generate items and evaluate their psychometric…

  20. [Methods for measuring skin aging].

    PubMed

    Zieger, M; Kaatz, M

    2016-02-01

    Aging affects human skin and is becoming increasingly important with regard to medical, social and aesthetic issues. Detection of intrinsic and extrinsic components of skin aging requires reliable measurement methods. Modern techniques, e.g., based on direct imaging, spectroscopy or skin physiological measurements, provide a broad spectrum of parameters for different applications.

  1. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  2. Infrared beak treatment method compared with conventional hot blade amputation in laying hens

    USDA-ARS?s Scientific Manuscript database

    Infrared lasers have been widely used for noninvasive surgical applications in human medicine and their results are reliable, predictable and reproducible. Infrared lasers have recently been designed with the expressed purpose of providing a less painful, more precise beak trimming method compared w...

  3. Development of a method for measuring femoral torsion using real-time ultrasound.

    PubMed

    Hafiz, Eliza; Hiller, Claire E; Nicholson, Leslie L; Nightingale, E Jean; Clarke, Jillian L; Grimaldi, Alison; Eisenhuth, John P; Refshauge, Kathryn M

    2014-07-01

    Excessive femoral torsion has been associated with various musculoskeletal and neurological problems. To explore this relationship, it is essential to be able to measure femoral torsion accurately in the clinic. Computerized tomography (CT) and magnetic resonance imaging (MRI) are thought to provide the most accurate measurements, but CT involves significant radiation exposure and MRI is expensive. The aim of this study was to design a method for measuring femoral torsion in the clinic, and to determine the reliability of this method. Details of the design process, including construction of a jig, the protocol developed, and the reliability of the method, are presented. The protocol developed used ultrasound to image a ridge on the greater trochanter, and a customized jig placed on the femoral condyles as reference points. An inclinometer attached to the customized jig allowed quantification of the degree of femoral torsion. Measurements taken with this protocol had excellent intra- and inter-rater reliability (ICC2,1 = 0.98 and 0.97, respectively). This method also permitted measurement of femoral torsion with a high degree of accuracy. It is applicable to the research setting and, with minor adjustments, will be applicable to the clinical setting.

  4. NDE detectability of fatigue type cracks in high strength alloys

    NASA Technical Reports Server (NTRS)

    Christner, B. K.; Rummel, W. D.

    1983-01-01

    Specimens suitable for investigating the reliability of production nondestructive evaluation (NDE) in detecting tightly closed fatigue cracks in high strength alloys, representative of the materials used in spacecraft engine/booster construction, were produced. Inconel 718 was selected as representative of nickel-base alloys and Haynes 188 as representative of cobalt-base alloys used in this application. Cleaning procedures were developed to ensure the reusability of the test specimens, and a flaw detection reliability assessment of the fluorescent penetrant inspection method was performed using the specimens produced, both to characterize their use for future reliability assessments and to provide additional NDE flaw detection reliability data for high strength alloys. Statistical analysis of the fluorescent penetrant inspection data was performed to determine the detection reliabilities for each inspection at a 90% probability/95% confidence level.

  5. The accuracy of ultrasound for measurement of mobile- bearing motion.

    PubMed

    Aigner, Christian; Radl, Roman; Pechmann, Michael; Rehak, Peter; Stacher, Rudolf; Windhager, Reinhard

    2004-04-01

    After anterior cruciate ligament-sacrificing total knee replacement, mobile bearings sometimes have paradoxic movement, but the implications of such movement on function, wear, and implant survival are not known. To study this potential problem, accurate, reliable, and widely available inexpensive tools for in vivo mobile-bearing motion analyses are needed. We developed a method using 8-MHz ultrasound to analyze mobile-bearing motion and ascertained accuracy, precision, and reliability compared with plain and standard digital radiographs. The anterior rim of the mobile bearing was the target for all methods. The radiographs were taken in a horizontal plane at neutral rotation and incremental external and internal rotations. Five investigators examined four positions of the mobile bearing with all three methods. The accuracy and precision were: ultrasound, 0.7 mm and 0.2 mm; digital radiograph, 0.4 mm and 0.2 mm; and plain radiographs, 0.7 mm and 0.3 mm. The interrater and intrarater reliability ranged between 0.3 to 0.4 mm and 0.1 to 0.2 mm, respectively. The difference between the methods was not significant at neutral rotation, but ultrasound was significantly more accurate at rotations of one degree or greater. Ultrasound at 8 MHz provides accuracy and reliability suitable for evaluation of in vivo meniscal bearing motion. Whether this method or others are sufficiently accurate to detect motion leading to abnormal wear is not known.

  6. An empirical study of flight control software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands under varying levels of additive input and feedback noise. The testing, which used the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the error burst analyses. The pitch axis problem provides data for use in constructing a model for predicting the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.

  7. NERF - A Computer Program for the Numerical Evaluation of Reliability Functions - Reliability Modelling, Numerical Methods and Program Documentation,

    DTIC Science & Technology

    1983-09-01

    gives the adaptive procedure the desirable property of providing a self indication of possible failure. Let In(ab) denote a numerical estimate of I(ab...operator's response to the prompt stored in A. This response is checked and INTEST set true if 'YES', 'Y' or 'T' has been entered. INTEST is set false

  8. Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method

    NASA Astrophysics Data System (ADS)

    Adam, Gh.; Adam, S.

    2001-04-01

    The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised to the theoretical 100% rate of success under error estimate sharpening, provided a number of natural validating conditions are imposed. The self-validating scheme for the local error estimates, which is easy to implement and adds little supplementary computing effort, considerably strengthens the correctness of the decisions within automatic adaptive quadrature.
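
    The Gauss-Kronrod idea of a paired-rule local error estimate can be illustrated with a much lower-order pair than the (G7, K15) rules this paper concerns: evaluate two rules of different accuracy and take their difference as a conservative estimate of the error in the better result. A sketch under that simplification (2- and 3-point Gauss-Legendre rules standing in for the actual Kronrod extension):

```python
import math

def gauss2(f, a, b):
    # 2-point Gauss-Legendre rule on [a, b]
    m, h = (a + b) / 2, (b - a) / 2
    x = 1 / math.sqrt(3)
    return h * (f(m - h * x) + f(m + h * x))

def gauss3(f, a, b):
    # 3-point Gauss-Legendre rule on [a, b]
    m, h = (a + b) / 2, (b - a) / 2
    x = math.sqrt(3 / 5)
    return h * ((8 / 9) * f(m) + (5 / 9) * (f(m - h * x) + f(m + h * x)))

def integrate(f, a, b):
    lo, hi = gauss2(f, a, b), gauss3(f, a, b)
    # the difference of the paired rules serves as a conservative
    # estimate of the error in the higher-order result
    return hi, abs(hi - lo)

value, err_est = integrate(math.sin, 0.0, math.pi)   # exact integral is 2
```

    The paper's contribution is precisely about when such an estimate can be trusted: the validating conditions it adds are what push the estimate's rate of success toward 100%.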

  9. Synchronous Control Method and Realization of Automated Pharmacy Elevator

    NASA Astrophysics Data System (ADS)

    Liu, Xiang-Quan

    Firstly, the control method for the elevator's synchronous motion is provided and a synchronous control structure for dual servo motors based on PMAC is established. Secondly, the synchronous control program of the elevator is implemented using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors were adjusted. The experiment proves that the control method has high stability and reliability.

  10. Climate Change Impacts at Department of Defense Installations

    DTIC Science & Technology

    2017-06-16

    locations. The ease of use of this method and its flexibility have led to a wide variety of applications for assessing impacts of climate change 4...versions of these statistical methods to provide the basis for regional climate assessments for various states, regions, and government agencies...averaging (REA) method proposed by Giorgi and Mearns (2002). This method assigns reliability classifications for the multi-model ensemble simulation by

  11. Contamination-Free Manufacturing: Tool Component Qualification, Verification and Correlation with Wafers

    NASA Astrophysics Data System (ADS)

    Tan, Samantha H.; Chen, Ning; Liu, Shi; Wang, Kefei

    2003-09-01

    As part of the semiconductor industry "contamination-free manufacturing" effort, significant emphasis has been placed on reducing potential sources of contamination from process equipment and process equipment components. Process tools contain process chambers and components that are exposed to the process environment or process chemistry and in some cases are in direct contact with production wafers. Any contamination from these sources must be controlled or eliminated in order to maintain high process yields, device performance, and device reliability. This paper discusses new nondestructive analytical methods for quantitative measurement of the cleanliness of metal, quartz, polysilicon and ceramic components that are used in process equipment tools. The goal of these new procedures is to measure the effectiveness of cleaning procedures and to verify whether a tool component part is sufficiently clean for installation and subsequent routine use in the manufacturing line. These procedures provide a reliable "qualification method" for tool component certification and also provide a routine quality control method for reliable operation of cleaning facilities. Cost advantages to wafer manufacturing include higher yields due to improved process cleanliness and elimination of yield loss and downtime resulting from the installation of "bad" components in process tools. We also discuss a representative example of wafer contamination having been linked to a specific process tool component.

  12. Optical detection of metastatic cancer cells using a scanned laser pico-projection system

    NASA Astrophysics Data System (ADS)

    Huang, Chih-Ling; Chiu, Wen-Tai; Lo, Yu-Lung; Chuang, Chin-Ho; Chen, Yu-Bin; Chang, Shu-Jing; Ke, Tung-Ting; Cheng, Hung-Chi; Wu, Hua-Lin

    2015-03-01

    Metastasis is responsible for 90% of all cancer-related deaths in humans. As a result, reliable techniques for detecting metastatic cells are urgently required. Although various techniques have been proposed for metastasis detection, they are generally capable of detecting metastatic cells only once migration has already occurred. Accordingly, the present study proposes an optical method for physical characterization of metastatic cancer cells using a scanned laser pico-projection system (SLPP). The validity of the proposed method is demonstrated using five pairs of cancer cell lines and two pairs of non-cancer cell lines treated by IPTG induction in order to mimic normal cells with an overexpression of an oncogene. The results show that for all of the considered cell lines, the SLPP speckle contrast of the high-metastatic cells is significantly higher than that of the low-metastatic cells. As a result, the speckle contrast measurement provides a reliable means of distinguishing quantitatively between low- and high-metastatic cells of the same origin. Compared to existing metastasis detection methods, the proposed SLPP approach has many advantages, including a higher throughput, a lower cost, a larger sample size and a more reliable diagnostic performance. As a result, it provides a highly promising solution for physical characterization of metastatic cancer cells in vitro.
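
    The speckle contrast used to separate the cell lines is conventionally defined as the ratio of the standard deviation to the mean of the recorded intensity values. A minimal sketch of that statistic on synthetic numbers (the SLPP imaging pipeline itself is outside this scope):

```python
from statistics import mean, pstdev

def speckle_contrast(intensities):
    """Speckle contrast C = sigma / mu of the pixel intensities."""
    return pstdev(intensities) / mean(intensities)

# synthetic example: a more strongly scattering sample yields a wider
# intensity spread around the same mean, hence a higher contrast
smooth = [0.9, 1.1, 0.9, 1.1]
rough = [0.5, 1.5, 0.5, 1.5]
```

    With these toy values the "rough" trace has contrast 0.5 against 0.1 for the "smooth" one, mirroring the high- versus low-metastatic separation reported above.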

  13. Probabilistic fracture finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-01-01

    The Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine first and second moment of random parameters, the second moment reliability method can be easily combined with PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed from experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
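
    For the common limit state g = R − S with independent resistance R and load effect S, the second-moment method mentioned here reduces to a reliability index computed from first and second moments alone. A sketch under normality assumptions, with illustrative numbers not taken from the paper:

```python
import math

def reliability_index(mu_r, sd_r, mu_s, sd_s):
    # beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2) for g = R - S
    return (mu_r - mu_s) / math.hypot(sd_r, sd_s)

def prob_failure(beta):
    # Pf = Phi(-beta), evaluated via the complementary error function
    return 0.5 * math.erfc(beta / math.sqrt(2))

# illustrative moments: resistance N(500, 40), load effect N(350, 30)
beta = reliability_index(mu_r=500.0, sd_r=40.0, mu_s=350.0, sd_s=30.0)
```

    With these moments beta = 3, giving Pf ≈ 1.35 × 10⁻³; in the paper the moments themselves come from the PFEM rather than being assumed.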

  14. Probabilistic fracture finite elements

    NASA Astrophysics Data System (ADS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-05-01

    The Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine first and second moment of random parameters, the second moment reliability method can be easily combined with PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed from experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.

  15. Combination of uncertainty theories and decision-aiding methods for natural risk management in a context of imperfect information

    NASA Astrophysics Data System (ADS)

    Tacnet, Jean-Marc; Dupouy, Guillaume; Carladous, Simon; Dezert, Jean; Batton-Hubert, Mireille

    2017-04-01

    In mountain areas, natural phenomena such as snow avalanches, debris-flows and rock-falls put people and objects at risk, with sometimes dramatic consequences. Risk is classically considered as a combination of hazard, the combination of the intensity and frequency of the phenomenon, and vulnerability, which corresponds to the consequences of the phenomenon on exposed people and material assets. Risk management consists in identifying the risk level as well as choosing the best strategies for risk prevention, i.e. mitigation. In the context of natural phenomena in mountainous areas, technical and scientific knowledge is often lacking. Risk management decisions are therefore based on imperfect information. This information comes from more or less reliable sources ranging from historical data, expert assessments, numerical simulations etc. Finally, risk management decisions are the result of complex knowledge management and reasoning processes. Tracing the information and propagating information quality from data acquisition to decisions are therefore important steps in the decision-making process. One major goal today is therefore to assist decision-making while considering the availability, quality and reliability of information content and sources. A global integrated framework is proposed to improve the risk management process in a context of information imperfection provided by more or less reliable sources: uncertainty as well as imprecision, inconsistency and incompleteness are considered. Several methods are used and associated in an original way: sequential decision context description, development of specific multi-criteria decision-making methods, imperfection propagation in numerical modeling and information fusion. This framework not only assists in decision-making but also traces the process and evaluates the impact of information quality on decision-making. We focus on and present two main developments. 
The first relates to uncertainty and imprecision propagation in numerical modeling, using both the classical Monte-Carlo probabilistic approach and a so-called hybrid approach based on possibility theory. The second deals with new multi-criteria decision-making methods which consider information imperfection, source reliability, importance and conflict, using fuzzy sets as well as possibility and belief function theories. The implemented methods consider information imperfection propagation and information fusion in total aggregation methods such as AHP (Saaty, 1980), in partial aggregation methods such as the Electre outranking method (see Soft Electre Tri), and in decisions in certain but also risky or uncertain contexts (see the new COWA-ER and FOWA-ER: Cautious and Fuzzy Ordered Weighted Averaging-Evidential Reasoning). For example, the ER-MCDA methodology considers expert assessment as a multi-criteria decision process based on imperfect information provided by more or less heterogeneous, reliable and conflicting sources: it mixes AHP, fuzzy sets theory, possibility theory and belief function theory using the DSmT (Dezert-Smarandache Theory) framework, which provides powerful fusion rules.

  16. ImageJ: A Free, Easy, and Reliable Method to Measure Leg Ulcers Using Digital Pictures.

    PubMed

    Aragón-Sánchez, Javier; Quintana-Marrero, Yurena; Aragón-Hernández, Cristina; Hernández-Herero, María José

    2017-12-01

    Wound measurement to document the healing course of chronic leg ulcers has an important role in the management of these patients. Digital cameras in smartphones are readily available and easy to use, and taking pictures of wounds is becoming routine in specialized departments. Analyzing digital pictures with appropriate software provides clinicians a quick, clean, and easy-to-use tool for measuring wound area. A set of 25 digital pictures of plain foot and leg ulcers was the basis of this study. Photographs were taken placing a ruler next to the wound in parallel with the healthy skin with the iPhone 6S (Apple Inc, Cupertino, CA), which has a camera of 12 megapixels, using the flash. The digital photographs were visualized with ImageJ 1.45s freeware (National Institutes of Health, Rockville, MD; http://imagej.net/ImageJ). Wound area measurement was carried out by 4 raters: head of the department, wound care nurse, physician, and medical student. We assessed intra- and interrater reliability using the intraclass correlation coefficient. To determine intraobserver reliability, 2 of the raters repeated the measurement of the set 1 week after the first reading. The interrater model displayed an intraclass correlation coefficient of 0.99 with 95% confidence interval of 0.999 to 1.000, showing excellent reliability. The intrarater model of both examiners showed excellent reliability. In conclusion, analyzing digital images of leg ulcers with ImageJ estimates wound area with excellent reliability. This method provides a free, rapid, and accurate way to measure wounds and could routinely be used to document wound healing in daily clinical practice.
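
    The ImageJ workflow amounts to calibrating pixel size against the ruler in the photograph and counting the pixels inside the traced wound outline. A toy sketch of that arithmetic (the tracing itself is done interactively in ImageJ; the mask and calibration below are invented):

```python
def wound_area_cm2(mask, pixels_per_cm):
    """mask: 2-D list of 0/1 flags marking pixels inside the traced
    wound outline; pixels_per_cm comes from the ruler calibration."""
    n_pixels = sum(sum(row) for row in mask)
    return n_pixels / pixels_per_cm ** 2

# a filled 20 x 20 pixel square at 20 px/cm covers exactly 1 cm^2
square = [[1] * 20 for _ in range(20)]
```

    Because the ruler lies in the wound plane, one calibration converts every traced region in the same photograph; tilting the camera out of that plane is the main source of systematic error.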

  17. Digital assessment of the fetal alcohol syndrome facial phenotype: reliability and agreement study.

    PubMed

    Tsang, Tracey W; Laing-Aiken, Zoe; Latimer, Jane; Fitzpatrick, James; Oscar, June; Carter, Maureen; Elliott, Elizabeth J

    2017-01-01

    To examine the three facial features of fetal alcohol syndrome (FAS) in a cohort of Australian Aboriginal children from two-dimensional digital facial photographs to: (1) assess intrarater and inter-rater reliability; (2) identify the racial norms with the best fit for this population; and (3) assess agreement with clinician direct measures. Photographs and clinical data for 106 Aboriginal children (aged 7.4-9.6 years) were sourced from the Lililwan Project. Fifty-eight per cent had a confirmed prenatal alcohol exposure and 13 (12%) met the Canadian 2005 criteria for FAS/partial FAS. Photographs were analysed using the FAS Facial Photographic Analysis Software to generate the mean PFL, three-point ABC-Score, five-point lip and philtrum ranks, and four-point face rank in accordance with the 4-Digit Diagnostic Code. Intrarater and inter-rater reliability of digital ratings was examined in two assessors. Caucasian or African American racial norms for PFL and lip thickness were assessed for best fit, and agreement between digital and direct measurement methods was assessed. Reliability of digital measures was substantial within (kappa: 0.70-1.00) and between assessors (kappa: 0.64-0.89). Clinician and digital ratings showed moderate agreement (kappa: 0.47-0.58). Caucasian PFL norms and the African American Lip-Philtrum Guide 2 provided the best fit for this cohort. In an Aboriginal cohort with a high rate of FAS, assessment of facial dysmorphology using digital methods showed substantial inter- and intrarater reliability. Digital measurement of features has high reliability and, until data are available from a larger population of Aboriginal children, the African American Lip-Philtrum Guide 2 and Caucasian (Strömland) PFL norms provide the best fit for Australian Aboriginal children.

  18. Children and their parents assessing the doctor-patient interaction: a rating system for doctors' communication skills.

    PubMed

    Crossley, Jim; Eiser, Christine; Davies, Helena A

    2005-08-01

    Only a patient and his or her family can judge many of the most important aspects of the doctor-patient interaction. This study evaluates the feasibility and reliability of children and their families assessing the quality of paediatricians' interactions using a rating instrument developed specifically for this purpose. A reliability analysis using generalisability theory on the ratings from 352 doctor-patient interactions across different speciality clinics. Ratings were normally distributed. They were highest for 'overall' performance, and lowest for giving time to discuss the families' agenda. An appropriate sample of adults' ratings provided a reliable score (G = 0.7 with 15 raters), but children's ratings were too idiosyncratic to be reproducible (G = 0.36 with 15 raters). CONCLUSIONS AND FURTHER WORK: Accompanying adults can provide reliable ratings of doctors' interactions with children. Because an adult is usually present at the consultation their ratings provide a highly feasible and authentic approach. Sampling doctors' interactions from different clinics and with patients of both genders provides a universal picture of performance. The method is ideal to measure performance for in-training assessment or revalidation. Further work is in progress to evaluate the educational impact of feeding ratings back to the doctors being assessed, and their use in a range of clinical contexts.
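
    In a simple one-facet design, generalisability coefficients like those quoted here (G = 0.7 for 15 adult raters, G = 0.36 for 15 child raters) scale with rater number in the same way as the Spearman-Brown prophecy. The sketch below makes that one-facet assumption, which may be coarser than the paper's actual G-study:

```python
def project_g(g_single, n_raters):
    # Spearman-Brown projection of a single-rater coefficient to n raters
    return n_raters * g_single / (1 + (n_raters - 1) * g_single)

def invert_g(g, n_raters):
    # recover the implied single-rater coefficient from a reported G
    return g / (n_raters - g * (n_raters - 1))

adult_single = invert_g(0.7, 15)    # implied single-rater reliability
child_single = invert_g(0.36, 15)
```

    Inverting the reported values suggests a single adult rater carries several times more signal than a single child rater, which is why a 15-adult sample reaches a usable G while 15 child ratings remain too idiosyncratic.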

  19. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of non-linear LSE parameter estimate reliability. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
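
    The RoN idea, refitting synthetic data generated from the fitted curve plus resampled noise and reporting the spread of the refitted parameters, can be sketched for a one-parameter exponential decay. Everything below (the model y = exp(-t/tau), the brute-force grid fitter, the Gaussian noise model) is our illustrative assumption, not the authors' implementation:

```python
import math, random

def fit_tau(ts, ys, tau_grid):
    # brute-force least-square-error fit of y = exp(-t / tau)
    def sse(tau):
        return sum((y - math.exp(-t / tau)) ** 2 for t, y in zip(ts, ys))
    return min(tau_grid, key=sse)

def ron_precision(ts, ys, tau_grid, n_resim=100, seed=0):
    """Resimulation-of-noise style spread estimate for the LSE fit:
    fit once, estimate the noise level from the residuals, then refit
    many resimulated noisy realizations of the fitted curve."""
    rng = random.Random(seed)
    tau_hat = fit_tau(ts, ys, tau_grid)
    resid = [y - math.exp(-t / tau_hat) for t, y in zip(ts, ys)]
    sigma = math.sqrt(sum(r * r for r in resid) / len(resid))
    refits = []
    for _ in range(n_resim):
        fake = [math.exp(-t / tau_hat) + rng.gauss(0.0, sigma) for t in ts]
        refits.append(fit_tau(ts, fake, tau_grid))
    m = sum(refits) / len(refits)
    spread = math.sqrt(sum((r - m) ** 2 for r in refits) / len(refits))
    return tau_hat, spread
```

    The returned spread plays the role of the RoN precision estimate: it is computed from a single measured realization, yet approximates the scatter that repeated experiments would show.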

  20. Prognostics-based qualification of high-power white LEDs using Lévy process approach

    NASA Astrophysics Data System (ADS)

    Yung, Kam-Chuen; Sun, Bo; Jiang, Xiaopeng

    2017-01-01

    Due to their versatility in a variety of applications and the growing market demand, high-power white light-emitting diodes (LEDs) have attracted considerable attention. Reliability qualification testing is an essential part of the product development process to ensure the reliability of a new LED product before its release. However, the widely used IES-TM-21 method does not provide comprehensive reliability information. For more accurate and effective qualification, this paper presents a novel method based on prognostics techniques. Prognostics is an engineering technology predicting the future reliability or determining the remaining useful lifetime (RUL) of a product by assessing the extent of deviation or degradation from its expected normal operating conditions. A Lévy subordinator of a mixed Gamma and compound Poisson process is used to describe the actual degradation process of LEDs characterized by random sporadic small jumps of degradation degree, and the reliability function is derived for qualification with different distribution forms of jump sizes. The IES LM-80 test results reported by different LED vendors are used to develop and validate the qualification methodology. This study will be helpful for LED manufacturers to reduce the total test time and cost required to qualify the reliability of an LED product.

  1. The quadrant method measuring four points is as reliable and accurate as the quadrant method in the evaluation after anatomical double-bundle ACL reconstruction.

    PubMed

    Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro

    2017-11-20

    The quadrant method was described by Bernard et al. and has been widely used for postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research was to further develop the quadrant method by measuring four points, which we named the four-point quadrant method, and to compare it with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's quadrant method were defined as point 1 (highest), point 2 (deepest), point 3 (lowest), and point 4 (shallowest) in the femoral tunnel position. The value of depth and height at each point was measured. In this four-point quadrant method, the antero-medial (AM) tunnel is (depth1, height2) and the postero-lateral (PL) tunnel is (depth3, height4). The 3D-CT images were evaluated independently by 2 orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of the intra-class correlation coefficient (ICC). Also, the accuracy of the method was evaluated against the quadrant method. Intra-observer reliability was almost perfect for both AM and PL tunnels (ICC > 0.81). Inter-observer reliability of the AM tunnel was substantial (ICC > 0.61) and that of the PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deep, 0.58% high and the PL tunnel position was 0.01% shallow, 0.13% low compared to the quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture and can provide measurements that can be compared with various reconstruction methods. 
The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for evaluating femoral tunnel position after ACL reconstruction. Case series, Level IV.

  2. Identification of a practical and reliable method for the evaluation of litter moisture in turkey production.

    PubMed

    Vinco, L J; Giacomelli, S; Campana, L; Chiari, M; Vitale, N; Lombardi, G; Veldkamp, T; Hocking, P M

    2018-02-01

    1. An experiment was conducted to compare 5 different methods for the evaluation of litter moisture. 2. For litter collection and assessment, 55 farms were selected; one shed from each farm was inspected and 9 points were identified within each shed. 3. For each device used for the evaluation of litter moisture, the mean and standard deviation of wetness measures per collection point were assessed. 4. The reliability and overall consistency between the 5 instruments used to measure wetness were high (α = 0.72). 5. Measurements at three of the 9 collection points were sufficient to provide a reliable assessment of litter moisture throughout the shed. 6. Based on the direct correlation between litter moisture and footpad lesions, litter moisture measurement can be used as a resource-based on-farm animal welfare indicator. 7. Among the 5 methods analysed, visual scoring is the most simple and practical, and therefore the best candidate to be used on-farm for animal welfare assessment.
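
    The overall consistency figure quoted (α = 0.72) is Cronbach's alpha across the five instruments. Given per-instrument scores aligned by collection point, it can be computed directly; a minimal sketch with made-up numbers (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(instruments):
    """instruments: one list of scores per device, aligned by
    collection point (same length, same order)."""
    k = len(instruments)
    totals = [sum(scores) for scores in zip(*instruments)]
    item_var = sum(pvariance(scores) for scores in instruments)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# toy example: two devices scoring the same three collection points;
# near-identical scores push alpha toward 1
dev_a = [1.0, 2.0, 3.0]
dev_b = [1.2, 2.1, 2.9]
alpha = cronbach_alpha([dev_a, dev_b])
```

    Alpha near 1 indicates the devices rank the collection points consistently; values near or below 0 indicate the instruments disagree.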

  3. Methodology to Improve Design of Accelerated Life Tests in Civil Engineering Projects

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing, an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. We input these energies into planning an accelerated life test, a Multi Environment Over Stress Test. The test objective is to discover weak links and interactions among the system and the energies to which it is exposed, and design them out. As an example, the methods are applied to pipe in a subsea pipeline. However, they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods. PMID:25111800

  4. Ballistic Puncture Self-Healing Polymeric Materials

    NASA Technical Reports Server (NTRS)

    Gordon, Keith L.; Siochi, Emilie J.; Yost, William T.; Bogert, Phil B.; Howell, Patricia A.; Cramer, K. Elliott; Burke, Eric R.

    2017-01-01

    Space exploration launch costs on the order of $10,000 per pound provide an incentive to seek ways to reduce structural mass while maintaining structural function to assure safety and reliability. Damage-tolerant structural systems provide a route to avoiding weight penalty while enhancing vehicle safety and reliability. Self-healing polymers capable of spontaneous puncture repair show promise to mitigate potentially catastrophic damage from events such as micrometeoroid penetration. Effective self-repair requires these materials to quickly heal following projectile penetration while retaining some structural function during the healing processes. Although there are materials known to possess this capability, they are typically not considered for structural applications. Current efforts use inexpensive experimental methods to inflict damage, after which analytical procedures are identified to verify that function is restored. Two candidate self-healing polymer materials for structural engineering systems are used to test these experimental methods.

  5. On the Discovery of Evolving Truth

    PubMed Central

    Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei

    2015-01-01

    In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
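
    The core loop of numeric truth discovery alternates between estimating truths as reliability-weighted averages of the sources' claims and re-scoring each source by its distance from those truths. A compact batch sketch of that alternation (a generic scheme with an inverse-MSE weighting we chose for illustration, not the paper's incremental algorithm):

```python
def truth_discovery(claims, n_iter=20):
    """claims: {source: {object: numeric_value}}."""
    weights = {s: 1.0 for s in claims}
    objects = {o for values in claims.values() for o in values}
    truths = {}
    for _ in range(n_iter):
        # truth step: weighted average of the claims on each object
        for o in objects:
            num = sum(weights[s] * v[o] for s, v in claims.items() if o in v)
            den = sum(weights[s] for s, v in claims.items() if o in v)
            truths[o] = num / den
        # reliability step: weight each source by inverse mean squared error
        for s, v in claims.items():
            mse = sum((v[o] - truths[o]) ** 2 for o in v) / len(v)
            weights[s] = 1.0 / (mse + 1e-9)
    return truths, weights

claims = {
    "s1": {"a": 10.0, "b": 20.0},
    "s2": {"a": 10.1, "b": 19.9},
    "s3": {"a": 15.0, "b": 25.0},   # consistently off: unreliable source
}
truths, weights = truth_discovery(claims)
```

    After a few iterations the outlying source's weight collapses and the truth estimates settle near the consensus of the two consistent sources; the paper's contribution is updating such truths and weights incrementally as new data arrive, rather than rerunning the batch loop.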

  6. Development Of Methodologies Using PhabrOmeter For Fabric Drape Evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Chengwei

    Evaluation of fabric drape is important for the textile industry as it reveals the aesthetics and functionality of cloth and apparel. Although many fabric drape measuring methods have been developed over several decades, they are falling behind the industry's need for fast product development. To meet this requirement, it is necessary to develop an effective and reliable method to evaluate fabric drape. The purpose of the present study was to determine whether the PhabrOmeter can be applied to fabric drape evaluation. The PhabrOmeter is a fabric sensory performance evaluating instrument developed to provide fast and reliable quality testing results. This study also sought to determine the relationship between fabric drape and other fabric attributes. In addition, a series of conventional methods including AATCC, ASTM and ISO standards were used to characterize the fabric samples. All data were compared and analyzed with the linear correlation method. The results indicate that the PhabrOmeter is a reliable and effective instrument for fabric drape evaluation. Effects including fabric structure and testing direction were also examined for their impact on fabric drape.

  7. Bridge reliability assessment based on the PDF of long-term monitored extreme strains

    NASA Astrophysics Data System (ADS)

    Jiao, Meiju; Sun, Limin

    2011-04-01

    Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention in civil engineering. Based on principles of probability and statistics, a reliability approach provides a rational basis for analyzing the randomness in loads and their effects on structures. This paper presents a novel approach that combines SHM data with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system. In this study, the reliability of the steel girder of the cable-stayed bridge is denoted directly by failure probability rather than by the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the structural responses, a formulation of failure probability was deduced. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations, based on the monitoring data, was evaluated and verified. The Donghai Bridge was then taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results were discussed. Finally, the sensitivity and accuracy of the approach were discussed and compared with the first-order reliability method (FORM).

  8. Fair and Just Culture, Team Behavior, and Leadership Engagement: The Tools to Achieve High Reliability

    PubMed Central

    Frankel, Allan S; Leonard, Michael W; Denham, Charles R

    2006-01-01

    Background Disparate health care provider attitudes about autonomy, teamwork, and administrative operations have added to the complexity of health care delivery and are a central factor in medicine's unacceptably high rate of errors. Other industries have improved their reliability by applying innovative concepts to interpersonal relationships and administrative hierarchical structures (Chandler 1962). In the last 10 years the science of patient safety has become more sophisticated, with practical concepts identified and tested to improve the safety and reliability of care. Objective Three initiatives stand out as worthy regarding interpersonal relationships and the application of provider concerns to shape operational change: The development and implementation of Fair and Just Culture principles, the broad use of Teamwork Training and Communication, and tools like WalkRounds that promote the alignment of leadership and frontline provider perspectives through effective use of adverse event data and provider comments. Methods Fair and Just Culture, Teamwork Training, and WalkRounds are described, and implementation examples provided. The argument is made that they must be systematically and consistently implemented in an integrated fashion. Conclusions There are excellent examples of institutions applying Just Culture principles, Teamwork Training, and Leadership WalkRounds—but to date, they have not been comprehensively instituted in health care organizations in a cohesive and interdependent manner. To achieve reliability, organizations need to begin thinking about the relationship between these efforts and linking them conceptually. PMID:16898986

  9. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision-making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses applications in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies include reliability-based life-limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, the impact of ET foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  10. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped-constant delay circuits generate outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high-resolution data. A termination STOP pulse is input to generate second high-resolution data. Internal counters count at the low-frequency internal clock rate between the START and STOP pulses. The first and second high-resolution data are logically combined to directly provide high-resolution data to one counter and to correct the count in the low-resolution counter, yielding a high-resolution time interval measurement.
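The coarse-counter-plus-delay-line scheme described here (often called Nutt interpolation) can be sketched numerically: each event's residue to the next clock edge is quantized by the delay-line tap spacing, and the coarse count is corrected by the two fine measurements. The clock period and tap delay below are illustrative values (in nanoseconds), not the patent's:

```python
def measure_interval(start_time, stop_time, clk_period=10.0, tap_delay=0.5):
    """Coarse counter plus delay-line interpolation (a numerical sketch)."""
    def fine(t):
        # time from the event to the next rising clock edge,
        # quantized to the delay-line tap spacing
        residue = (-t) % clk_period
        return round(residue / tap_delay) * tap_delay
    f_start, f_stop = fine(start_time), fine(stop_time)
    # coarse count: whole clock periods between the latching clock edges
    n = round(((stop_time + f_stop) - (start_time + f_start)) / clk_period)
    # combine: coarse count corrected by the two fine measurements
    return n * clk_period + f_start - f_stop
```

The resolution is set by the tap delay rather than the clock period, which is the point of the design: the clock can run at a frequency where conventional components behave reliably.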

  11. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
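The Monte Carlo arm of such a reliability analysis reduces to estimating P_f = P(g(X) <= 0) by sampling a limit-state function g. The capacity/demand limit state and the distributions below are illustrative stand-ins, not the paper's deployment-force model:

```python
import random

def mc_failure_probability(limit_state, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P_f = P(g(X) <= 0), where g is a
    limit-state function and `sample` draws one random input vector."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0)
    return fails / n

# Hypothetical limit state: deployment-force capacity minus demand,
# both normally distributed (made-up means/SDs for illustration).
def sample_inputs(rng):
    capacity = rng.gauss(10.0, 1.0)
    demand = rng.gauss(6.0, 1.0)
    return capacity, demand

pf = mc_failure_probability(lambda x: x[0] - x[1], sample_inputs)
```

FORM and SORM replace the sampling with an analytical approximation around the most probable failure point, which is why they are cheaper but distribution-shape sensitive.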

  12. Distributed processing of a GPS receiver network for a regional ionosphere map

    NASA Astrophysics Data System (ADS)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field collected measurements were performed.
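The fusion step, combining several local VID estimates into one value at a grid point, can be sketched with inverse-distance weighting; this is a deliberate simplification standing in for the Kriging estimator the abstract describes (Kriging additionally uses a fitted spatial covariance model):

```python
def fuse_vid(estimates, positions, grid_point, power=2, eps=1e-6):
    """Combine local VID estimates at an ionospheric grid point using
    inverse-distance weights (simplified stand-in for Kriging)."""
    weights = []
    for (x, y) in positions:
        d2 = (x - grid_point[0]) ** 2 + (y - grid_point[1]) ** 2
        weights.append(1.0 / (d2 ** (power / 2) + eps))  # nearer pierce points count more
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, estimates)) / total
```

Because each grid point only needs a handful of nearby estimates, the combination stays cheap even as the receiver network grows, which is consistent with the distributed-processing argument made above.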

  13. Assessment of Lower Limb Muscle Strength and Power Using Hand-Held and Fixed Dynamometry: A Reliability and Validity Study

    PubMed Central

    Perraton, Luke G.; Bower, Kelly J.; Adair, Brooke; Pua, Yong-Hao; Williams, Gavin P.; McGaw, Rebekah

    2015-01-01

    Introduction Hand-held dynamometry (HHD) has never previously been used to examine isometric muscle power. Rate of force development (RFD) is often used for muscle power assessment, however no consensus currently exists on the most appropriate method of calculation. The aim of this study was to examine the reliability of different algorithms for RFD calculation and to examine the intra-rater, inter-rater, and inter-device reliability of HHD as well as the concurrent validity of HHD for the assessment of isometric lower limb muscle strength and power. Methods 30 healthy young adults (age: 23±5yrs, male: 15) were assessed on two sessions. Isometric muscle strength and power were measured using peak force and RFD respectively using two HHDs (Lafayette Model-01165 and Hoggan microFET2) and a criterion-reference KinCom dynamometer. Statistical analysis of reliability and validity comprised intraclass correlation coefficients (ICC), Pearson correlations, concordance correlations, standard error of measurement, and minimal detectable change. Results Comparison of RFD methods revealed that a peak 200ms moving window algorithm provided optimal reliability results. Intra-rater, inter-rater, and inter-device reliability analysis of peak force and RFD revealed mostly good to excellent reliability (coefficients ≥ 0.70) for all muscle groups. Concurrent validity analysis showed moderate to excellent relationships between HHD and fixed dynamometry for the hip and knee (ICCs ≥ 0.70) for both peak force and RFD, with mostly poor to good results shown for the ankle muscles (ICCs = 0.31–0.79). Conclusions Hand-held dynamometry has good to excellent reliability and validity for most measures of isometric lower limb strength and power in a healthy population, particularly for proximal muscle groups. To aid implementation we have created freely available software to extract these variables from data stored on the Lafayette device. 
Future research should examine the reliability and validity of these variables in clinical populations. PMID:26509265
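The winning RFD algorithm, a peak 200 ms moving-window slope, can be computed directly from a sampled force-time curve. This is a minimal sketch; filtering and contraction-onset detection are omitted:

```python
def peak_rfd(force, fs, window_ms=200):
    """Peak rate of force development (N/s): the best average slope over
    any window of window_ms in a force-time series sampled at fs Hz."""
    w = max(1, int(round(fs * window_ms / 1000)))  # window length in samples
    dt = w / fs                                    # window length in seconds
    best = float("-inf")
    for i in range(len(force) - w):
        best = max(best, (force[i + w] - force[i]) / dt)
    return best
```

A moving window is less sensitive to where the analysis is anchored than fixed 0-to-t epochs, which is one plausible reason it showed the best reliability here.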

  14. Stability-Aware Geographic Routing in Energy Harvesting Wireless Sensor Networks

    PubMed Central

    Hieu, Tran Dinh; Dung, Le The; Kim, Byung-Seo

    2016-01-01

    A new generation of wireless sensor networks that harvest energy from environmental sources such as solar, vibration, and thermoelectric to power sensor nodes is emerging to solve the problem of energy limitation. Based on the photo-voltaic model, this research proposes stability-aware geographic routing for reliable data transmissions in energy-harvesting wireless sensor networks (EH-WSNs), providing a reliable route selection method and potentially achieving an unlimited network lifetime. Specifically, the influence of link quality, represented by the estimated packet reception rate, on network performance is investigated. Simulation results show that the proposed method outperforms an energy-harvesting-aware method in terms of energy consumption, the average number of hops, and the packet delivery ratio. PMID:27187414

  15. Methods for quantification of soil-transmitted helminths in environmental media: current techniques and recent advances

    PubMed Central

    Collender, Philip A.; Kirby, Amy E.; Addiss, David G.; Freeman, Matthew C.; Remais, Justin V.

    2015-01-01

    Limiting the environmental transmission of soil-transmitted helminths (STH), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost effective methods to detect and quantify STH in the environment. We review the state of the art of STH quantification in soil, biosolids, water, produce, and vegetation with respect to four major methodological issues: environmental sampling; recovery of STH from environmental matrices; quantification of recovered STH; and viability assessment of STH ova. We conclude that methods for sampling and recovering STH require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. PMID:26440788

  16. Measurement of Surface Interfacial Tension as a Function of Temperature Using Pendant Drop Images

    NASA Astrophysics Data System (ADS)

    Yakhshi-Tafti, Ehsan; Kumar, Ranganathan; Cho, Hyoung J.

    2011-10-01

    Accurate and reliable measurements of surface tension at the interface of immiscible phases are crucial to understanding the various physico-chemical reactions taking place between them. Based on the pendant drop method, an optical (graphical)-numerical procedure was developed to determine surface tension and its dependence on the surrounding temperature. For modeling and experimental verification, chemically inert and thermally stable perfluorocarbon (PFC) oil and water were used. Starting from a geometrical force balance, governing equations were derived to provide non-dimensional parameters that were later used to extract values for surface tension. A comparative study verified the accuracy and reliability of the proposed method.

  17. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data-flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to prove an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.

  18. A New Enzyme-linked Sorbent Assay (ELSA) to Quantify Syncytiotrophoblast Extracellular Vesicles in Biological Fluids.

    PubMed

    Göhner, Claudia; Weber, Maja; Tannetta, Dionne S; Groten, Tanja; Plösch, Torsten; Faas, Marijke M; Scherjon, Sicco A; Schleußner, Ekkehard; Markert, Udo R; Fitzgerald, Justine S

    2015-06-01

    The pregnancy-associated disease preeclampsia is related to the release of syncytiotrophoblast extracellular vesicles (STBEV) by the placenta. To improve functional research on STBEV, reliable and specific methods are needed to quantify them. However, only a few quantification methods are available and accepted, though imperfect. For this purpose, we aimed to provide an enzyme-linked sorbent assay (ELSA) to quantify STBEV in fluid samples based on their microvesicle characteristics and placental origin. Ex vivo placenta perfusion provided standards and samples for the STBEV quantification. STBEV were captured by binding of extracellular phosphatidylserine to immobilized annexin V. The membranous human placental alkaline phosphatase on the STBEV surface catalyzed a colorimetric detection reaction. The described ELSA is a rapid and simple method to quantify STBEV in diverse liquid samples, such as blood or perfusion suspension. The reliability of the ELSA was proven by comparison with nanoparticle tracking analysis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Creep-rupture reliability analysis

    NASA Technical Reports Server (NTRS)

    Peralta-Duran, A.; Wirsching, P. H.

    1984-01-01

    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time temperature parameters (TTP) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and on the Manson-Haferd parameters, but it can be applied to any of the TTP's. A method is developed for evaluating material dependent constants for TTP's. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long term behavior. Uncertainty in predicting long term behavior from short term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state of the art reliability methods to the design of components under creep.

  20. Managing Complex IT Security Processes with Value Based Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    2009-01-01

    Current trends indicate that IT security measures will need to greatly expand to counter the increasingly sophisticated, well-funded and/or economically motivated threat space. Traditional risk management approaches provide an effective method for guiding courses of action for assessment and mitigation investments. However, such approaches, no matter how popular, demand very detailed knowledge about the IT security domain and the enterprise/cyber architectural context. Typically, the critical nature and/or high stakes require careful consideration and adaptation of a balanced approach that provides reliable and consistent methods for rating vulnerabilities. As reported in earlier works, the Cyberspace Security Econometrics System provides a comprehensive measure of the reliability, security, and safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. This paper advocates a dependability measure that acknowledges the aggregate structure of complex system specifications, and accounts for variations by stakeholder, by specification components, and by verification and validation impact.

  1. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

    In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, compared with the fingerprint-based method, the UILoc system can build a fingerprint database automatically without any site survey and the database will be applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc will provide the basic location estimation through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module, a weighted fusion algorithm combined with a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that the UILoc can provide accurate positioning, the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.

  2. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    PubMed

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. 
A physician leader who is interested in catalyzing performance improvement through focused, effective physician performance improvement is well advised to consider the value of incorporating reliability adjustments into their performance measurement system.
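The reliability adjustment the case study describes can be sketched in a simple empirical-Bayes shrinkage form, using the provider-group mean as the shrinkage target. This is a standard form, not necessarily the paper's exact model, and the variance components are assumed known here:

```python
def reliability_adjust(rate, n, group_mean, between_var, within_var):
    """Shrink an observed physician rate toward the group mean.
    reliability = signal variance / (signal + noise variance); low-volume
    physicians (small n, noisy rates) are pulled further toward the mean."""
    noise = within_var / n                       # sampling variance falls with case volume
    reliability = between_var / (between_var + noise)
    return reliability * rate + (1 - reliability) * group_mean
```

With few cases the adjusted estimate sits near the group mean, so a high raw rate alone no longer triggers a "signal"; with large volume the estimate tracks the observed rate, which is the behavior the abstract attributes to reliability adjustment.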

  3. Design and validation of instruments to measure knowledge.

    PubMed

    Elliott, T E; Regal, R R; Elliott, B A; Renier, C M

    2001-01-01

    Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existent instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.

  4. Improving applied roughness measurement of involute helical gears

    NASA Astrophysics Data System (ADS)

    Koulin, G.; Zhang, J.; Frazer, R. C.; Wilson, S. J.; Shaw, B. A.

    2017-12-01

    With improving gear design and manufacturing technology, improvement in metrology is necessary to provide reliable feedback to the designer and manufacturer. A recommended gear roughness measurement method is applied to a micropitting contact fatigue test gear. The development of wear and micropitting is reliably characterised at the sub-micron roughness level. Changes to the features of the localised surface texture are revealed and are related to key gear meshing positions. The application of the recommended methodology is shown to provide informative feedback to the gear designer in reference to the fundamental gear coordinate system, which is used in gear performance simulations such as tooth contact analysis.

  5. Engineering Design Handbook. Development Guide for Reliability. Part Two. Design for Reliability

    DTIC Science & Technology

    1976-01-01

    Component failure rates, however, have been recorded by many sources as a function of use and environment. Some of these sources are listed in Refs. 13-17. ...other systems capable of creating an explosive reaction. The second category is fairly obvious and includes many variations on methods for providing... about them. 4. Ability to detect signals (including patterns) in high-noise environments. 5. Ability to store large amounts of information for long...

  6. Japanese version of the Dermatology Life Quality Index: validity and reliability in patients with acne.

    PubMed

    Takahashi, Natsuko; Suzukamo, Yoshimi; Nakamura, Motonobu; Miyachi, Yoshiki; Green, Joseph; Ohya, Yukihiro; Finlay, Andrew Y; Fukuhara, Shunichi

    2006-08-03

    Patient-reported quality of life is strongly affected by some dermatologic conditions. We developed a Japanese version of the Dermatology Life Quality Index (DLQI-J) and used psychometric methods to examine its validity and reliability. The Japanese version of the DLQI was created from the original (English) version, using a standard method. The DLQI-J was then completed by 197 people, to examine its validity and reliability. Some participants completed the DLQI-J a second time, 3 days later, to examine the reproducibility of their responses. In addition to the DLQI-J, the participants completed parts of the SF-36 and gave data on their demographic and clinical characteristics. Their physicians provided information on the location and clinical severity of the skin disease. The participants reported no difficulties in answering the DLQI-J items. Their mean age was 24.8 years, 77.2% were female, and 78.7% had acne vulgaris. The mean DLQI score was 3.99 (SD: 3.99). The responses were found to be reproducible and stable. Results of principal-component and factor analysis suggested that this scale measured one construct. The correlations of DLQI-J scores with sex or age were very poor, but those with SF-36 scores and with clinical severity were high. The DLQI-J provides valid and reliable data despite having only a small number of items.

  7. Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.

    2018-05-01

    The sealing performance of an aircraft electromechanical system has a great influence on flight safety, and the reliability of its typical seal structures has been analyzed by researchers. In this paper, we take a reciprocating seal structure as the research object for studying structural reliability. Based on the finite element numerical simulation method, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is established, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model obtained with the expected feasibility function (EFF) learning mechanism is used to describe the failure probability of the seal ring, and thereby to evaluate the reliability of the sealing structure. This article proposes a new numerical approach to the reliability analysis of sealing structures and also provides a theoretical basis for their optimal design.

  8. The Effect of Spoilers on the Enjoyment of Short Stories

    ERIC Educational Resources Information Center

    Levine, William H.; Betzner, Michelle; Autry, Kevin S.

    2016-01-01

    Recent research has provided evidence that the information provided before a story--a spoiler--may increase the enjoyment of that story, perhaps by increasing the processing fluency experienced during reading. In one experiment, we tested the reliability of these findings by closely replicating existing methods and the generality of these findings…

  9. Making literature reviews more reliable through application of lessons from systematic reviews.

    PubMed

    Haddaway, N R; Woodcock, P; Macura, B; Collins, A

    2015-12-01

    Review articles can provide valuable summaries of the ever-increasing volume of primary research in conservation biology. Where findings may influence important resource-allocation decisions in policy or practice, there is a need for a high degree of reliability when reviewing evidence. However, traditional literature reviews are susceptible to a number of biases during the identification, selection, and synthesis of included studies (e.g., publication bias, selection bias, and vote counting). Systematic reviews, pioneered in medicine and translated into conservation in 2006, address these issues through a strict methodology that aims to maximize transparency, objectivity, and repeatability. Systematic reviews will always be the gold standard for reliable synthesis of evidence. However, traditional literature reviews remain popular and will continue to be valuable where systematic reviews are not feasible. Where traditional reviews are used, lessons can be taken from systematic reviews and applied to traditional reviews in order to increase their reliability. Certain key aspects of systematic review methods that can be used in a context-specific manner in traditional reviews include focusing on mitigating bias; increasing transparency, consistency, and objectivity, and critically appraising the evidence and avoiding vote counting. In situations where conducting a full systematic review is not feasible, the proposed approach to reviewing evidence in a more systematic way can substantially improve the reliability of review findings, providing a time- and resource-efficient means of maximizing the value of traditional reviews. These methods are aimed particularly at those conducting literature reviews where systematic review is not feasible, for example, for graduate students, single reviewers, or small organizations. © 2015 Society for Conservation Biology.

  10. Temporal similarity perfusion mapping: A standardized and model-free method for detecting perfusion deficits in stroke

    PubMed Central

    Song, Sunbin; Luby, Marie; Edwardson, Matthew A.; Brown, Tyler; Shah, Shreyansh; Cox, Robert W.; Saad, Ziad S.; Reynolds, Richard C.; Glen, Daniel R.; Cohen, Leonardo G.; Latour, Lawrence L.

    2017-01-01

Introduction: Interpretation of the extent of perfusion deficits in stroke MRI is highly dependent on the method used for analyzing the perfusion-weighted signal intensity time-series after gadolinium injection. In this study, we introduce a new model-free standardized method of temporal similarity perfusion (TSP) mapping for perfusion deficit detection and test its ability and reliability in acute ischemia. Materials and methods: Forty patients with an ischemic stroke or transient ischemic attack were included. Two blinded readers compared real-time generated interactive maps and automatically generated TSP maps to traditional TTP/MTT maps for presence of perfusion deficits. Lesion volumes were compared for volumetric inter-rater reliability, spatial concordance between perfusion deficits and healthy tissue, and contrast-to-noise ratio (CNR). Results: Perfusion deficits were correctly detected in all patients with acute ischemia. Inter-rater reliability was higher for TSP than for TTP/MTT maps, and lesion volumes depicted on TSP and TTP/MTT were highly correlated (r(18) = 0.73, p<0.0003); the effective CNR was greater for TSP than for TTP (352.3 vs 283.5, t(19) = 2.6, p<0.03) and MTT (228.3, t(19) = 2.8, p<0.03). Discussion: TSP maps provide a reliable and robust model-free method for accurate perfusion deficit detection and improve lesion delineation compared to traditional methods. This simple method is also computationally faster and more easily automated than model-based methods. It can potentially improve the speed and accuracy of perfusion deficit detection for acute stroke treatment and clinical trial inclusion decision-making. PMID:28973000
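The volumetric agreement reported above is summarized by a Pearson correlation coefficient between lesion volumes from the two map types. As a minimal sketch of that quantity (the lesion volumes below are invented for illustration, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# invented lesion volumes (mL) measured on two map types
tsp_volumes = [12.0, 30.5, 8.2, 55.0]
ttp_volumes = [14.1, 28.0, 9.9, 60.3]
print(round(pearson_r(tsp_volumes, ttp_volumes), 3))
```

A value near 1 indicates that the two map types rank and scale the lesions consistently, which is what the reported r(18) = 0.73 is quantifying.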

  11. The ratio method: A new tool to study one-neutron halo nuclei

    DOE PAGES

    Capel, Pierre; Johnson, R. C.; Nunes, F. M.

    2013-10-02

    Recently a new observable to study halo nuclei was introduced, based on the ratio between breakup and elastic angular cross sections. This new observable is shown by the analysis of specific reactions to be independent of the reaction mechanism and to provide nuclear-structure information of the projectile. Here we explore the details of this ratio method, including the sensitivity to binding energy and angular momentum of the projectile. We also study the reliability of the method with breakup energy. Lastly, we provide guidelines and specific examples for experimentalists who wish to apply this method.

  12. Can real time location system technology (RTLS) provide useful estimates of time use by nursing personnel?

    PubMed

    Jones, Terry L; Schlegel, Cara

    2014-02-01

Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to ensure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real time location systems (RTLS), a pilot study was conducted to assess efficacy (method agreement) of RTLS time use; inter-rater reliability of RTLS time-use estimates; and associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted. © 2013 Wiley Periodicals, Inc.

  13. Development of a PCR-based assay for rapid and reliable identification of pathogenic Fusaria.

    PubMed

    Mishra, Prashant K; Fox, Roland T V; Culham, Alastair

    2003-01-28

Identification of Fusarium species has always been difficult due to confusing phenotypic classification systems. We have developed a fluorescence-based polymerase chain reaction assay that allows rapid and reliable identification of five toxigenic and pathogenic Fusarium species: Fusarium avenaceum, F. culmorum, F. equiseti, F. oxysporum, and F. sambucinum. The method is based on PCR amplification of species-specific DNA fragments using fluorescent oligonucleotide primers, which were designed based on sequence divergence within the internal transcribed spacer region of nuclear ribosomal DNA. Besides providing accurate, reliable, and quick diagnosis of these Fusaria, the method reduces the potential for exposure to carcinogenic chemicals by substituting fluorescent dyes for ethidium bromide, and it obviates the need for gel electrophoresis.

  14. Identification of Extracellular Segments by Mass Spectrometry Improves Topology Prediction of Transmembrane Proteins.

    PubMed

    Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E

    2017-02-13

Transmembrane proteins play a crucial role in signaling, ion transport, nutrient uptake, and in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions, and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.

  15. Interrater Reliability and Discriminative Validity of the Structural Elements of the Ayres Sensory Integration® Fidelity Measure©

    PubMed Central

    Roley, Susanne Smith; Mailloux, Zoe; Parham, L. Diane; Koomar, Jane; Schaaf, Roseann C.; Van Jaarsveld, Annamarie; Cohn, Ellen

    2014-01-01

    This study examined the reliability and validity of the structural section of the Ayres Sensory Integration® Fidelity Measure© (ASIFM), which provides a method for monitoring the extent to which an intervention was implemented as conceptualized in studies of occupational therapy using sensory integration intervention methods (OT–SI). We examined the structural elements of the measure, including content of assessment reports, availability of specific equipment and adequate space, safety monitoring, and integration of communication with parents and other team members, such as collaborative goal setting with parents or family and teacher education, into the intervention program. Analysis of self-report ratings by 259 occupational therapists from 185 different facilities indicated that the structural section of the ASIFM has acceptable interrater reliability (r ≥ .82) and significantly differentiates between settings in which therapists reportedly do and do not practice OT–SI (p < .001). PMID:25184462

  16. Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.

    PubMed

    Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie

    2010-07-01

Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy to use, easy to learn, and error-resistant EHR systems to the users. Our objective was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task into Mental (Internal) or Physical (External) operators. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. The results show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, they would spend about 22 min (independent of system response time) for data entry, of which 11 min are spent on more effortful mental operators. The inter-rater reliability analysis performed for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically reveals and identifies the following findings related to the performance of AHLTA: (1) large number of average total steps to complete common tasks, (2) high average execution time, and (3) large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks. © 2010 Elsevier Ireland Ltd. All rights reserved.
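The KLM step of the analysis above assigns a standard time to each low-level operator and sums them over a task. A minimal sketch using the classic Card, Moran, and Newell operator times (the operator sequence below is invented, not one of the study's 14 tasks):

```python
# Standard KLM operator times in seconds (Card, Moran & Newell):
# K = keystroke, P = point with mouse, H = home hands on device,
# M = mental preparation, B = mouse button press
OPERATOR_TIMES = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35, "B": 0.1}

def klm_execution_time(sequence):
    """Estimate execution time by summing standard operator times,
    e.g. 'MHPKKK' = think, reach for mouse, point, type three keys."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

t = klm_execution_time("MHPKKK")
print(round(t, 2))  # 1.35 + 0.4 + 1.1 + 3*0.2 = 3.45
```

Summing such estimates across all steps of a task is how a total execution time (like the ~22 min reported) can be derived without timing real users.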

  17. Noncontact spirometry with a webcam

    NASA Astrophysics Data System (ADS)

    Liu, Chenbin; Yang, Yuting; Tsow, Francis; Shao, Dangdang; Tao, Nongjian

    2017-05-01

    We present an imaging-based method for noncontact spirometry. The method tracks the subtle respiratory-induced shoulder movement of a subject, builds a calibration curve, and determines the flow-volume spirometry curve and vital respiratory parameters, including forced expiratory volume in the first second, forced vital capacity, and peak expiratory flow rate. We validate the accuracy of the method by comparing the data with those simultaneously recorded with a gold standard reference method and examine the reliability of the noncontact spirometry with a pilot study including 16 subjects. This work demonstrates that the noncontact method can provide accurate and reliable spirometry tests with a webcam. Compared to the traditional spirometers, the present noncontact spirometry does not require using a spirometer, breathing into a mouthpiece, or wearing a nose clip, thus making spirometry test more easily accessible for the growing population of asthma and chronic obstructive pulmonary diseases.

  18. Noncontact spirometry with a webcam.

    PubMed

    Liu, Chenbin; Yang, Yuting; Tsow, Francis; Shao, Dangdang; Tao, Nongjian

    2017-05-01

    We present an imaging-based method for noncontact spirometry. The method tracks the subtle respiratory-induced shoulder movement of a subject, builds a calibration curve, and determines the flow-volume spirometry curve and vital respiratory parameters, including forced expiratory volume in the first second, forced vital capacity, and peak expiratory flow rate. We validate the accuracy of the method by comparing the data with those simultaneously recorded with a gold standard reference method and examine the reliability of the noncontact spirometry with a pilot study including 16 subjects. This work demonstrates that the noncontact method can provide accurate and reliable spirometry tests with a webcam. Compared to the traditional spirometers, the present noncontact spirometry does not require using a spirometer, breathing into a mouthpiece, or wearing a nose clip, thus making spirometry test more easily accessible for the growing population of asthma and chronic obstructive pulmonary diseases.

  19. Comprehensive classification test of scapular dyskinesis: A reliability study.

    PubMed

    Huang, Tsun-Shun; Huang, Han-Yi; Wang, Tyng-Guey; Tsai, Yung-Shen; Lin, Jiu-Jenq

    2015-06-01

    Assessment of scapular dyskinesis (SD) is of clinical interest, as SD is believed to be related to shoulder pathology. However, no clinical assessment with sufficient reliability to identify SD and provide treatment strategies is available. The purpose of this study was to investigate the reliability of the comprehensive SD classification method. Cross-sectional reliability study. Sixty subjects with unilateral shoulder pain were evaluated by two independent physiotherapists with a visual-based palpation method. SD was classified as single abnormal scapular pattern [inferior angle (pattern I), medial border (pattern II), superior border of scapula prominence or abnormal scapulohumeral rhythm (pattern III)], a mixture of the above abnormal scapular patterns, or normal pattern (pattern IV). The assessment of SD was evaluated as subjects performed bilateral arm raising/lowering movements with a weighted load in the scapular plane. Percentage of agreement and kappa coefficients were calculated to determine reliability. Agreement between the 2 independent physiotherapists was 83% (50/60, 6 subjects as pattern III and 44 subjects as pattern IV) in the raising phase and 68% (41/60, 5 subjects as pattern I, 12 subjects as pattern II, 12 subjects as pattern IV, 12 subjects as mixed patterns I and II) in the lowering phase. The kappa coefficients were 0.49-0.64. We concluded that the visual-based palpation classification method for SD had moderate to substantial inter-rater reliability. The appearance of different types of SD was more pronounced in the lowering phase than in the raising phase of arm movements. Copyright © 2014 Elsevier Ltd. All rights reserved.
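The kappa coefficients reported above correct raw percentage agreement for the agreement expected by chance. As a minimal sketch of Cohen's kappa for two raters (the pattern labels below are illustrative, not the study's ratings):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement: product of the raters' marginal proportions per category
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# illustrative pattern ratings from two physiotherapists
a = ["I", "II", "IV", "IV"]
b = ["I", "II", "IV", "II"]
print(round(cohens_kappa(a, b), 3))
```

Values of 0.49-0.64, as in the study, are conventionally read as moderate to substantial agreement.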

  20. Insightful practice: a reliable measure for medical revalidation

    PubMed Central

    Guthrie, Bruce; Sullivan, Frank M; Mercer, Stewart W; Russell, Andrew; Bruce, David A

    2012-01-01

    Background Medical revalidation decisions need to be reliable if they are to reassure on the quality and safety of professional practice. This study tested an innovative method in which general practitioners (GPs) were assessed on their reflection and response to a set of externally specified feedback. Setting and participants 60 GPs and 12 GP appraisers in the Tayside region of Scotland, UK. Methods A feedback dataset was specified as (1) GP-specific data collected by GPs themselves (patient and colleague opinion; open book self-evaluated knowledge test; complaints) and (2) Externally collected practice-level data provided to GPs (clinical quality and prescribing safety). GPs' perceptions of whether the feedback covered UK General Medical Council specified attributes of a ‘good doctor’ were examined using a mapping exercise. GPs' professionalism was examined in terms of appraiser assessment of GPs' level of insightful practice, defined as: engagement with, insight into and appropriate action on feedback data. The reliability of assessment of insightful practice and subsequent recommendations on GPs' revalidation by face-to-face and anonymous assessors were investigated using Generalisability G-theory. Main outcome measures Coverage of General Medical Council attributes by specified feedback and reliability of assessor recommendations on doctors' suitability for revalidation. Results Face-to-face assessment proved unreliable. Anonymous global assessment by three appraisers of insightful practice was highly reliable (G=0.85), as were revalidation decisions using four anonymous assessors (G=0.83). Conclusions Unlike face-to-face appraisal, anonymous assessment of insightful practice offers a valid and reliable method to decide GP revalidation. Further validity studies are needed. PMID:22653078

  1. An accurate and reliable method of thermal data analysis in thermal imaging of the anterior knee for use in cryotherapy research.

    PubMed

    Selfe, James; Hardaker, Natalie; Thewlis, Dominic; Karki, Anna

    2006-12-01

To develop an anatomic marker system (AMS) as an accurate, reliable method of thermal imaging data analysis, for use in cryotherapy research. Investigation of the accuracy of a new thermal imaging technique. Hospital orthopedic outpatient department in England. Consecutive sample of 9 patients referred to anterior knee pain clinic. Not applicable. Thermally inert markers were placed at specific anatomic locations, defining an area over the anterior knee of patients with anterior knee pain. A baseline thermal image was taken. Patients underwent a 3-minute thermal washout of the affected knee. Thermal images were collected at a rate of 1 image per minute for a 20-minute re-warming period. A Matlab (version 7.0) program was written to digitize the marker positions and subsequently calculate the mean of the area over the anterior knee. Virtual markers were then defined as 15% distal from the proximal marker, 30% proximal from the distal markers, 15% lateral from the medial marker, and 15% medial from the lateral marker. The virtual markers formed an ellipse, which defined an area representative of the patella shape. Within the ellipse, the mean value of the full pixels determined the mean temperature of this region. Ten raters were recruited to use the program and interrater reliability was investigated. The intraclass correlation coefficient produced coefficients within acceptable bounds, ranging from .82 to .97, indicating adequate interrater reliability. The AMS provides an accurate, reliable method for thermal imaging data analysis and is a reliable tool with which to advance cryotherapy research.
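The virtual-marker construction above is linear interpolation between digitized marker positions. A minimal sketch under assumed pixel coordinates (the coordinates are invented; real positions come from digitizing the thermal images):

```python
def interpolate(p, q, frac):
    """Return the point frac of the way from p toward q (2-D pixel coords)."""
    return (p[0] + frac * (q[0] - p[0]), p[1] + frac * (q[1] - p[1]))

# illustrative digitized marker coordinates (pixels)
proximal, distal = (100, 50), (100, 200)
medial, lateral = (40, 125), (160, 125)

v_top = interpolate(proximal, distal, 0.15)     # 15% distal from the proximal marker
v_bottom = interpolate(distal, proximal, 0.30)  # 30% proximal from the distal marker
v_med = interpolate(medial, lateral, 0.15)      # 15% lateral from the medial marker
v_lat = interpolate(lateral, medial, 0.15)      # 15% medial from the lateral marker
```

The four virtual markers define the axes of the ellipse over which pixel temperatures are averaged.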

  2. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety-critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
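Assigning probabilistic modifiers to heuristics, as described, can be sketched as a SPAR-H-style product of a nominal error probability and performance-shaping-factor multipliers. The function and values below are an illustrative assumption, not figures or code from the paper:

```python
def usability_error_probability(nominal_hep, psf_multipliers):
    """SPAR-H-style adjustment: a nominal human error probability scaled by
    performance-shaping-factor multipliers (here, usability heuristics recast
    as PSFs). All numbers are illustrative."""
    uep = nominal_hep
    for m in psf_multipliers:
        uep *= m
    return min(uep, 1.0)  # a probability cannot exceed 1

# e.g. nominal HEP of 0.01, with multipliers for two violated heuristics
print(usability_error_probability(0.01, [10, 2]))  # 0.2
```

Ranking interface elements by their UEP, weighted by a consequence matrix, is what allows the prioritization of usability issues the abstract mentions.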

  3. Fixed-Node Diffusion Quantum Monte Carlo Method on Dissociation Energies and Their Trends for R-X Bonds (R = Me, Et, i-Pr, t-Bu).

    PubMed

    Hou, Aiqiang; Zhou, Xiaojun; Wang, Ting; Wang, Fan

    2018-05-15

Achieving both bond dissociation energies (BDEs) and their trends for the R-X bonds with R = Me, Et, i-Pr, and t-Bu reliably is nontrivial. Density functional theory (DFT) methods with traditional exchange-correlation functionals usually have large errors in both the BDEs and their trends. The M06-2X functional gives rise to reliable BDEs, but the relative BDEs are not determined as accurately. More demanding approaches, such as double-hybrid functionals, G4, and CCSD(T), are generally required to achieve both the BDEs and their trends reliably. The fixed-node diffusion quantum Monte Carlo method (FN-DMC) is employed to calculate BDEs of these R-X bonds with X = H, CH3, OCH3, OH, and F in this work. The single Slater-Jastrow wave function is adopted as the trial wave function, and pseudopotentials (PPs) developed for quantum Monte Carlo calculations are chosen. The error of these PPs is modest in wave function methods, while it is more pronounced in DFT calculations. Our results show that the accuracy of BDEs with FN-DMC is similar to that of M06-2X and G4, and trends in BDEs are calculated more reliably than with M06-2X. Both the BDEs and the trends in BDEs of these bonds are reproduced reasonably with FN-DMC. FN-DMC using PPs can thus be applied reliably to BDEs and their trends of similar chemical bonds in larger molecules and provide valuable information on the properties of these molecules.

  4. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, allocation is often performed at different stages of system design, beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.

  5. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy

    PubMed Central

    Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.

    2015-01-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614

  6. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy.

    PubMed

    Kim, Dahan; Curthoys, Nikki M; Parent, Matthew T; Hess, Samuel T

    2013-09-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined.

  7. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included demonstrating the design tool's capabilities as it applies to this design problem.

  8. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    NASA Astrophysics Data System (ADS)

    Šantić, Branko; Gracin, Davor

    2017-12-01

A new, simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation: each sample is pristine, with no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample, provided that the energy of each ion is within boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
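The sample-growth idea above, adding ion pairs and accepting each only if its energy stays within bounds, can be sketched as independent rejection-style sampling. This is a heavily simplified 1-D toy under invented parameters, not the authors' code or physics:

```python
import random

def build_sample(n_pairs, box=10.0, e_max=1.0, max_tries=10000):
    """Grow ONE independent sample (no Markov chain): repeatedly propose a
    +/- ion pair at random positions and accept it only if each ion's
    Coulomb-like energy with the ions already placed stays within +/- e_max.
    Toy 1-D model with illustrative units."""
    ions = []  # list of (position, charge)
    tries = 0
    while len(ions) < 2 * n_pairs and tries < max_tries:
        tries += 1
        pair = [(random.uniform(0, box), +1), (random.uniform(0, box), -1)]
        ok = True
        for x, q in pair:
            # softened 1/r interaction with all previously accepted ions
            e = sum(q * q2 / (abs(x - x2) + 1e-6) for x2, q2 in ions)
            if abs(e) > e_max:
                ok = False
                break
        if ok:
            ions.extend(pair)
    return ions
```

Because every sample is built from scratch, samples are uncorrelated by construction, which is the contrast with Markov-chain methods the abstract emphasizes.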

  9. Hawking radiation and covariant anomalies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Rabin; Kulkarni, Shailesh

    2008-01-15

    Generalizing the method of Wilczek and collaborators we provide a derivation of Hawking radiation from charged black holes using only covariant gauge and gravitational anomalies. The reliability and universality of the anomaly cancellation approach to Hawking radiation is also discussed.

  10. The flaws and human harms of animal experimentation.

    PubMed

    Akhtar, Aysha

    2015-10-01

    Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.

  11. Gauging the gaps in student problem-solving skills: assessment of individual and group use of problem-solving strategies using online discussions.

    PubMed

    Anderson, William L; Mitchell, Steven M; Osgood, Marcy P

    2008-01-01

    For the past 3 yr, faculty at the University of New Mexico, Department of Biochemistry and Molecular Biology have been using interactive online Problem-Based Learning (PBL) case discussions in our large-enrollment classes. We have developed an illustrative tracking method to monitor student use of problem-solving strategies to provide targeted help to groups and to individual students. This method of assessing performance has a high interrater reliability, and senior students, with training, can serve as reliable graders. We have been able to measure improvements in many students' problem-solving strategies, but, not unexpectedly, there is a population of students who consistently apply the same failing strategy when there is no faculty intervention. This new methodology provides an effective tool to direct faculty to constructively intercede in this area of student development.

  12. DRS: Derivational Reasoning System

    NASA Technical Reports Server (NTRS)

    Bose, Bhaskar

    1995-01-01

The high reliability requirements for airborne systems require fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phase of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high-level functional specifications. DRS incorporates an executable specification language, a set of correctness-preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving, and high-level CAD synthesis systems in order to provide enhanced reliability in designs with reduced time and cost.

  13. A novel technique to monitor thermal discharges using thermal infrared imaging.

    PubMed

    Muthulakshmi, A L; Natesan, Usha; Ferrer, Vincent A; Deepthi, K; Venugopalan, V P; Narasimhan, S V

    2013-09-01

Coastal temperature is an important indicator of water quality, particularly in regions with delicate ecosystems that are sensitive to water temperature. Remote sensing methods are highly reliable for assessing thermal dispersion. The plume dispersion from the thermal outfall of the nuclear power plant at Kalpakkam, on the southeast coast of India, was investigated from March to December 2011 using thermal infrared images along with field measurements. The absolute temperature provided by the thermal infrared (TIR) images was used in the ArcGIS environment to generate a spatial pattern of the plume movement. The good correlation between the temperature measured by the TIR camera and the field data (r² = 0.89) makes it a reliable method for the thermal monitoring of power plant effluents. The study shows that the remote sensing technique provides an effective means of monitoring the thermal distribution pattern in coastal waters.

  14. A simple video-based timing system for on-ice team testing in ice hockey: a technical report.

    PubMed

    Larson, David P; Noonan, Benjamin C

    2014-09-01

The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on-ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocells, and high-speed (240 frames per second) video. These results were then compared to calculate the accuracy of the stopwatch and video timing against filtered photocell timing, which served as the "gold standard." Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of accurate and reliable on-ice measures of skating speed and conditioning, and it can easily be adapted to other testing surfaces and parameters.

  15. Reliability history of the Apollo guidance computer

    NASA Technical Reports Server (NTRS)

    Hall, E. C.

    1972-01-01

    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  16. Quantifying the Diversity and Similarity of Surgical Procedures Among Hospitals and Anesthesia Providers.

    PubMed

    Dexter, Franklin; Ledolter, Johannes; Hindman, Bradley J

    2016-01-01

    In this Statistical Grand Rounds, we review methods for the analysis of the diversity of procedures among hospitals, the activities among anesthesia providers, etc. We apply multiple methods and consider their relative reliability and usefulness for perioperative applications, including calculations of SEs. We also review methods for comparing the similarity of procedures among hospitals, activities among anesthesia providers, etc. We again apply multiple methods and consider their relative reliability and usefulness for perioperative applications. The applications include strategic analyses (e.g., hospital marketing) and human resource analytics (e.g., comparisons among providers). Measures of diversity of procedures and activities (e.g., Herfindahl and Gini-Simpson index) are used for quantification of each facility (hospital) or anesthesia provider, one at a time. Diversity can be thought of as a summary measure. Thus, if the diversity of procedures for 48 hospitals is studied, the diversity (and its SE) is being calculated for each hospital. Likewise, the effective numbers of common procedures at each hospital can be calculated (e.g., by using the exponential of the Shannon index). Measures of similarity are pairwise assessments. Thus, if quantifying the similarity of procedures among cases with a break or handoff versus cases without a break or handoff, a similarity index represents a correlation coefficient. There are several different measures of similarity, and we compare their features and applicability for perioperative data. We rely extensively on sensitivity analyses to interpret observed values of the similarity index.
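The diversity measures named in this record can be sketched numerically. Below is a minimal Python illustration of the Herfindahl index, the Gini-Simpson index, and the effective number of common procedures via the exponential of the Shannon index; the case counts are invented for the example:

```python
import math

def herfindahl(counts):
    """Herfindahl index: sum of squared proportions; higher = more concentrated."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

def gini_simpson(counts):
    """Gini-Simpson index: probability that two randomly drawn cases differ."""
    return 1.0 - herfindahl(counts)

def effective_number(counts):
    """Effective number of common procedures: exponential of the Shannon index."""
    total = sum(counts)
    shannon = -sum((c / total) * math.log(c / total) for c in counts if c > 0)
    return math.exp(shannon)

# Hypothetical case counts for four procedure categories at one hospital
cases = [50, 30, 15, 5]
print(round(herfindahl(cases), 3))        # 0.365
print(round(gini_simpson(cases), 3))      # 0.635
print(round(effective_number(cases), 2))  # 3.13
```

With heavily concentrated counts the Herfindahl index approaches 1 and the effective number approaches 1; with equal counts across k categories the effective number equals k.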

  17. Validity and reliability assessment of a peer evaluation method in team-based learning classes.

    PubMed

    Yoon, Hyun Bae; Park, Wan Beom; Myung, Sun-Jung; Moon, Sang Hui; Park, Jun-Bean

    2018-03-01

    Team-based learning (TBL) is increasingly employed in medical education because of its potential to promote active group learning. In TBL, learners are usually asked to assess the contributions of peers within their group to ensure accountability. The purpose of this study is to assess the validity and reliability of a peer evaluation instrument that was used in TBL classes in a single medical school. A total of 141 students were divided into 18 groups in 11 TBL classes. The students were asked to evaluate their peers in the group based on evaluation criteria that were provided to them. We analyzed the comments that were written for the highest and lowest achievers to assess the validity of the peer evaluation instrument. The reliability of the instrument was assessed by examining the agreement among peer ratings within each group of students via intraclass correlation coefficient (ICC) analysis. Most of the students provided reasonable and understandable comments for the high and low achievers within their group, and most of those comments were compatible with the evaluation criteria. The average ICC of each group ranged from 0.390 to 0.863, and the overall average was 0.659. There was no significant difference in inter-rater reliability according to the number of members in the group or the timing of the evaluation within the course. The peer evaluation instrument that was used in the TBL classes was valid and reliable. Providing evaluation criteria and rules seemed to improve the validity and reliability of the instrument.

  18. Intersession reliability of fMRI activation for heat pain and motor tasks

    PubMed Central

    Quiton, Raimi L.; Keaser, Michael L.; Zhuo, Jiachen; Gullapalli, Rao P.; Greenspan, Joel D.

    2014-01-01

    As the practice of conducting longitudinal fMRI studies to assess mechanisms of pain-reducing interventions becomes more common, there is a great need to assess the test–retest reliability of the pain-related BOLD fMRI signal across repeated sessions. This study quantitatively evaluated the reliability of heat pain-related BOLD fMRI brain responses in healthy volunteers across 3 sessions conducted on separate days using two measures: (1) intraclass correlation coefficients (ICC) calculated based on signal amplitude and (2) spatial overlap. The ICC analysis of pain-related BOLD fMRI responses showed fair-to-moderate intersession reliability in brain areas regarded as part of the cortical pain network. Areas with the highest intersession reliability based on the ICC analysis included the anterior midcingulate cortex, anterior insula, and second somatosensory cortex. Areas with the lowest intersession reliability based on the ICC analysis also showed low spatial reliability; these regions included pregenual anterior cingulate cortex, primary somatosensory cortex, and posterior insula. Thus, this study found regional differences in pain-related BOLD fMRI response reliability, which may provide useful information to guide longitudinal pain studies. A simple motor task (finger-thumb opposition) was performed by the same subjects in the same sessions as the painful heat stimuli were delivered. Intersession reliability of fMRI activation in cortical motor areas was comparable to previously published findings for both spatial overlap and ICC measures, providing support for the validity of the analytical approach used to assess intersession reliability of pain-related fMRI activation. 
A secondary finding of this study is that the use of standard ICC alone as a measure of reliability may not be sufficient, as the underlying variance structure of an fMRI dataset can result in inappropriately high ICC values; a method to eliminate these false positive results was used in this study and is recommended for future studies of test–retest reliability. PMID:25161897
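The ICC underlying the analysis above can be illustrated with a small sketch. The ICC(2,1) form (two-way random effects, absolute agreement, single measurement) is a common choice for test-retest designs, though the study's exact variant is not specified here; the subject-by-session amplitudes are hypothetical:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data: one row per subject; columns are repeated sessions."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares for subjects (rows), sessions (columns), and residual error
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical peak BOLD amplitudes: one row per subject, one column per session
sessions = [[1.2, 1.3, 1.1], [0.8, 0.9, 0.8], [1.5, 1.4, 1.6], [0.6, 0.7, 0.6]]
print(round(icc2_1(sessions), 3))
```

When between-subject variance dominates session-to-session variance, as in this toy data, the ICC is high; identical sessions give an ICC of exactly 1.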

  19. Consistency Analysis and Data Consultation of Gas System of Gas-Electricity Network of Latvia

    NASA Astrophysics Data System (ADS)

    Zemite, L.; Kutjuns, A.; Bode, I.; Kunickis, M.; Zeltins, N.

    2018-02-01

    In the present research, the main critical points of gas transmission and storage system of Latvia have been determined to ensure secure and reliable gas supply among the Baltic States to fulfil the core objectives of the EU energy policies. Technical data of critical points of the gas transmission and storage system of Latvia have been collected and analysed with the SWOT method and solutions have been provided to increase the reliability of the regional natural gas system.

  20. Two-dimensional digital photography for child body posture evaluation: standardized technique, reliable parameters and normative data for age 7-10 years.

    PubMed

    Stolinski, L; Kozinoga, M; Czaprowski, D; Tyrakowski, M; Cerny, P; Suzuki, N; Kotwicki, T

    2017-01-01

Digital photogrammetry provides measurements of body angles or distances which allow for quantitative posture assessment with or without the use of external markers. It is becoming an increasingly popular tool for the assessment of the musculoskeletal system. The aim of this paper is to present a structured method for the analysis of posture and its changes using a standardized digital photography technique. The purpose of the study was twofold. The first part comprised 91 children (44 girls and 47 boys) aged 7-10 (8.2 ± 1.0), i.e., students of a primary school; its aim was to develop the photographic method, choose the quantitative parameters, and determine the intraobserver reliability (repeatability) and interobserver reliability (reproducibility) of measurements in the sagittal plane using digital photography, as well as to compare Rippstein plurimeter and digital photography measurements. The second part involved 7782 children (3804 girls, 3978 boys) aged 7-10 (8.4 ± 0.5), who underwent digital photography postural screening. The methods consisted of measuring and calculating selected parameters, establishing normal ranges of photographic parameters, presenting percentile charts, and noting common pitfalls and possible sources of error in digital photography. A standardized procedure for the photographic evaluation of child body posture was presented. The photographic measurements showed very good intra- and inter-rater reliability for the five sagittal parameters and good reliability against Rippstein plurimeter measurements. The parameters displayed insignificant variability over time. Normative data were calculated based on the photographic assessment, and percentile charts were provided to serve as reference values. The technical errors observed during photogrammetry are carefully discussed in this article. 
Technical developments allow for the regular use of digital photogrammetry in body posture assessment. Specific child positioning (described above) makes it possible to avoid incidentally modified posture. Image registration is simple, quick, harmless, and cost-effective. The semi-automatic image analysis, together with the normal values and percentile charts, makes the technique reliable for documenting a child's posture and monitoring the effects of corrective therapy.

  1. Development of Theoretical and Computational Methods for Single-Source Bathymetric Data

    DTIC Science & Technology

    2016-09-15

(Contract N00014-16-1-2035; Grant 11893686.) A method is outlined for fusing the information inherent in such source documents, at different scales, into a single picture for the marine... Algorithm reliability, which reflects the degree of inconsistency of the source documents, is also provided. A conceptual outline of the method, and a

  2. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  3. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure vital physical signals. Using a digital camera and ambient light, cardiovascular pulse waves were correctly extracted from color video of the human face, and vital physiological parameters such as heart rate were measured using a proposed signal-weighted analysis method. The measured heart rates were consistent with those measured simultaneously with reference technologies (r = 0.94, p < 0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for noninvasively measuring multiple physiological parameters in humans.
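The abstract does not specify the signal-weighted analysis itself, but the core idea of recovering heart rate from a facial color trace can be sketched as a band-limited spectral peak search. Everything below is an assumption for illustration: the frame rate, the physiological band, and the synthetic "green channel" trace standing in for real video data:

```python
import math

def estimate_hr(signal, fs, lo=0.7, hi=4.0, step=0.02):
    """Estimate heart rate (bpm) by scanning a physiological frequency band
    (0.7-4 Hz, i.e., 42-240 bpm) and picking the frequency with the largest
    discrete Fourier magnitude."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    best_f, best_p = lo, -1.0
    f = lo
    while f <= hi:
        re = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(centered))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f += step
    return best_f * 60.0

# Synthetic trace: a 1.2 Hz pulse (72 bpm) plus slow baseline drift at 0.1 Hz
fs = 30.0  # frames per second
trace = [math.sin(2 * math.pi * 1.2 * i / fs) + 0.3 * math.sin(2 * math.pi * 0.1 * i / fs)
         for i in range(300)]
print(round(estimate_hr(trace, fs)))  # ≈ 72
```

The drift falls below the search band and is ignored, which is the same reason band limiting helps with illumination changes in real facial video.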

  4. Using Penelope to assess the correctness of NASA Ada software: A demonstration of formal methods as a counterpart to testing

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey

    1993-01-01

    Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.

  5. Methods for Quantification of Soil-Transmitted Helminths in Environmental Media: Current Techniques and Recent Advances.

    PubMed

    Collender, Philip A; Kirby, Amy E; Addiss, David G; Freeman, Matthew C; Remais, Justin V

    2015-12-01

    Limiting the environmental transmission of soil-transmitted helminths (STHs), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost-effective methods to detect and quantify STHs in the environment. We review the state-of-the-art of STH quantification in soil, biosolids, water, produce, and vegetation with regard to four major methodological issues: environmental sampling; recovery of STHs from environmental matrices; quantification of recovered STHs; and viability assessment of STH ova. We conclude that methods for sampling and recovering STHs require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks such as video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism for broadcasts. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back to its native packet. To prevent an increase in per-packet overhead due to piggy-back packet transmission, the network coding vector for each node is exchanged among all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information indicating which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
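The piggy-back recovery idea can be illustrated with a toy XOR example, the simplest linear network code over GF(2). The packet contents and node roles below are hypothetical and much simpler than the paper's coding-vector scheme:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three nodes' native packets in one sequence (hypothetical fixed-size payloads)
p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"

# Node 1 correctly received p2 and p3 in the last sequence, so it piggy-backs
# their XOR onto its own next broadcast.
piggyback = xor_bytes(p2, p3)

# A receiver that got node 1's broadcast (native p1 + piggyback) and p2, but
# lost p3's original broadcast, recovers p3 without any retransmission:
recovered_p3 = xor_bytes(piggyback, p2)
print(recovered_p3)  # b'CCCC'
```

One coded packet thus repairs a single loss at any receiver, which is what lets the scheme avoid MAC-layer retransmissions entirely.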

  7. Synthesizing cognition in neuromorphic electronic systems

    PubMed Central

    Neftci, Emre; Binas, Jonathan; Rutishauser, Ueli; Chicca, Elisabetta; Indiveri, Giacomo; Douglas, Rodney J.

    2013-01-01

    The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina. PMID:23878215

  8. [Construction of a psychological aging scale for healthy people].

    PubMed

    Lin, Fei; Long, Yao; Zeng, Ni; Wu, Lei; Huang, Helang

    2017-04-28

To construct a psychological aging scale for healthy people, and to provide a tool and indexes for the scientific evaluation of aging.
 Methods: Age-related psychological items were collected through literature screening and expert interviews. The importance, feasibility, and degree of authority of the psychological index system were graded by two rounds of the Delphi method. Using the analytic hierarchy process, the weights of dimensions and items were determined. Internal consistency reliability, correlation, and exploratory factor analyses were performed to evaluate the reliability and validity of the scale.
 Results: Across two rounds of the Delphi method, 17 experts offered the following results: the coefficient of expert authority was 0.88±0.06, and the coordination coefficients for importance and feasibility in the second round were 0.456 (P<0.01) and 0.666 (P<0.01), respectively, indicating good consistency. The psychological aging scale for healthy people included 4 dimensions: cognitive function, emotion, personality, and motivation, with weight coefficients of 0.338, 0.250, 0.166, and 0.258, respectively. The Cronbach's α coefficient for the scale was 0.822, the reliability was 0.817, the content validity index (CVI) was 0.847, and the cumulative contribution rate of the 5 factors was 51.42%.
 Conclusion: The psychological aging scale is satisfactory and can serve as a reference for the evaluation of aging. The indicators were representative and well recognized.
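Cronbach's α, the internal-consistency statistic reported above, can be computed directly from item-level scores. A minimal sketch with invented respondent data (three items, five respondents):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha: items is one list of respondent scores per scale item."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent total scores
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical ratings from five respondents on three items
items = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 4],
         [4, 2, 5, 3, 5]]
print(round(cronbach_alpha(items), 3))  # 0.886
```

Alpha rises when items co-vary (the total-score variance grows relative to the sum of item variances), which is why it is read as internal consistency.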

  9. Program Retrieval/Dissemination: A Solid State Random Access System.

    ERIC Educational Resources Information Center

    Weeks, Walter O., Jr.

    The trend toward greater flexibility in educational methods has led to a need for better and more rapid access to a variety of aural and audiovisual resource materials. This in turn has demanded the development of a flexible, reliable system of hardware designed to aid existing distribution methods in providing such access. The system must be…

  10. Peptide and protein biomarkers for type 1 diabetes mellitus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Qibin; Metz, Thomas O.

    A method for identifying persons with increased risk of developing type 1 diabetes mellitus, or having type I diabetes mellitus, utilizing selected biomarkers described herein either alone or in combination. The present disclosure allows for broad based, reliable, screening of large population bases. Also provided are arrays and kits that can be used to perform such methods.

  11. Peptide and protein biomarkers for type 1 diabetes mellitus

    DOEpatents

    Zhang, Qibin; Metz, Thomas O.

    2014-06-10

    A method for identifying persons with increased risk of developing type 1 diabetes mellitus, or having type I diabetes mellitus, utilizing selected biomarkers described herein either alone or in combination. The present disclosure allows for broad based, reliable, screening of large population bases. Also provided are arrays and kits that can be used to perform such methods.

  12. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
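The bootstrap-over-patients idea used above can be sketched with a generic percentile bootstrap. The NGS figures of merit themselves are estimated quite differently; here the mean of hypothetical per-lesion volume errors simply stands in as the statistic being resampled:

```python
import random

def bootstrap_ci(values, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic,
    resampling cases (here, lesions/patients) with replacement."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(values) for _ in values]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-lesion volume errors (mL) for one segmentation method
errors = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2, 0.7, 1.0, 1.3, 0.5]
lo, hi = bootstrap_ci(errors, mean)
print(lo <= mean(errors) <= hi)
```

The interval width shrinks as the number of resampled cases grows, which mirrors the abstract's observation that results stabilize once data from enough lesions are available.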

  13. System Statement of Tasks of Calculating and Providing the Reliability of Heating Cogeneration Plants in Power Systems

    NASA Astrophysics Data System (ADS)

    Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.

    2018-01-01

A set of mathematical models for calculating the reliability indexes (RIs) of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a necessary condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Standard RI values for industrial heat supply are based on the concept of a technological safety margin for technological processes. Standard RI values for the heat supply of communal consumers are defined based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring the reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimizing the schemes and parameters of combined heat and power plants and systems, as well as determining the efficiency of various redundancy methods to ensure specified reliability of power supply.

  14. Inter-examiner classification reliability of Mechanical Diagnosis and Therapy for extremity problems - Systematic review.

    PubMed

    Takasaki, Hiroshi; Okuyama, Kousuke; Rosedale, Richard

    2017-02-01

Mechanical Diagnosis and Therapy (MDT) is used in the treatment of extremity problems. Classifying clinical problems is one method of providing effective treatment to a target population, and classification reliability is a key factor in determining the precise clinical problem and directing an appropriate intervention. To explore the inter-examiner reliability of MDT classification for extremity problems in three reliability designs: 1) vignette reliability, using surveys with patient vignettes; 2) concurrent reliability, where multiple assessors decide a classification by observing the same assessment; 3) successive reliability, where multiple assessors independently assess the same patient at different times. Systematic review with data synthesis in a quantitative format. Agreement of MDT subgroups was examined using the Kappa value, with the operational definition of acceptable reliability set at ≥ 0.6. The level of evidence was determined considering the methodological quality of the studies. Six studies were included, and all met the criteria for high quality. Kappa values for the vignette reliability design (five studies) were ≥ 0.7. There were data from two cohorts in one study for the concurrent reliability design, with Kappa values ranging from 0.45 to 1.0. Kappa values for the successive reliability design (data from three cohorts in one study) were < 0.6. The current review found strong evidence of acceptable inter-examiner reliability of MDT classification for extremity problems in the vignette reliability design, limited evidence of acceptable reliability in the concurrent reliability design, and unacceptable reliability in the successive reliability design. Copyright © 2017 Elsevier Ltd. All rights reserved.
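The Kappa values summarized above measure chance-corrected agreement between examiners. A minimal Cohen's kappa sketch for two raters follows; the patient labels are hypothetical, loosely modeled on MDT subgroup names:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters' labels."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical subgroup labels from two examiners for ten patients
rater1 = ["Der", "Der", "Dys", "Der", "Post", "Dys", "Der", "Der", "Dys", "Post"]
rater2 = ["Der", "Der", "Dys", "Dys", "Post", "Dys", "Der", "Der", "Der", "Post"]
print(round(cohen_kappa(rater1, rater2), 2))  # 0.68
```

Here observed agreement is 0.8, but chance agreement is 0.38, so kappa lands at 0.68, just above the review's ≥ 0.6 threshold for acceptable reliability.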

  15. Objective measurements of excess skin in post bariatric patients--inter-rater reliability.

    PubMed

    Biörserud, Christina; Fagevik Olsén, Monika; Elander, Anna; Wiklund, Malin

    2016-01-01

    An ability to reliably assess excess skin after massive weight loss using well-described and transferrable methods is important. The aim of this trial was to evaluate inter-rater reliability of ptosis and circumference measurements in patients with excess skin after bariatric surgery. Twenty-five postbariatric patients were included in the study, and their excess skin was measured 18 months after surgery. A protocol was designed to measure excess skin in a standardised way. To evaluate the inter-rater reliability in the measuring protocol, all patients were measured twice, by a specialist nurse and a specialist physiotherapist. All circumference measurements on different body parts had an ICC > 0.9, indicating high reliability. Furthermore, all breast and abdominal ptosis measurements had high reliability. In contrast, visual evaluation of abdominal ptosis had poor reliability. Measurements of ptoses on different body parts had an ICC > 0.6. There were no systematic differences between the results of the two testers, except for measurements of the buttocks and maximal knee circumference. The measuring protocol presented in this study has high reliability and, therefore, represents a useful instrument to provide a consistent and objective assessment of excess skin in the postbariatric patient.

  16. NASA EEE Parts and Advanced Interconnect Program (AIP)

    NASA Technical Reports Server (NTRS)

    Gindorf, T.; Garrison, A.

    1996-01-01

    From Program Objectives: I. Accelerate the readiness of new technologies through development of validation, assessment and test methods/tools; II. Provide NASA Projects infusion paths for emerging technologies; III. Provide NASA Projects technology selection, application and validation guidelines for hardware and processes; IV. Disseminate quality assurance, reliability, validation, tools and availability information to the NASA community.

  17. A feasibility study on embedded micro-electromechanical sensors and systems (MEMS) for monitoring highway structures.

    DOT National Transportation Integrated Search

    2011-06-01

    Micro-electromechanical systems (MEMS) provide vast improvements over existing sensing methods in the context of structural health monitoring (SHM) of highway infrastructure systems, including improved system reliability, improved longevity and enhan...

  18. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time, driven by meteorological events. Thus, capacity value calculations for these resources are often performed using simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.

  19. Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs.

    PubMed

    Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte

    2008-04-01

    The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates of AUC(0-inf) than the ARS algorithm. It was concluded that the TTT method is a reasonable tool to be used as a standardised method in pharmacokinetic analysis, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
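The TTT rule described above is simple to sketch: use only samples at or after two times t(max) to fit the log-linear terminal slope, then extrapolate the trapezoidal AUC to infinity. The implementation and the synthetic one-compartment profile below are illustrative, not the paper's code or data.

```python
import numpy as np

def auc_inf_ttt(t, c):
    """AUC(0-inf) with the terminal phase chosen by the 'two times tmax'
    (TTT) rule: fit ln(C) vs t using only samples at t >= 2 * tmax."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    tmax = t[np.argmax(c)]
    mask = t >= 2.0 * tmax
    slope, _ = np.polyfit(t[mask], np.log(c[mask]), 1)
    lambda_z = -slope
    # Linear trapezoidal AUC up to the last sample, then extrapolate the tail.
    auc_last = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))
    return auc_last + c[-1] / lambda_z, lambda_z

# Synthetic one-compartment oral profile (ka = 1/h, ke = 0.1/h), not study data.
times = np.array([0, 0.5, 1, 2, 3, 4, 6, 8, 12, 16, 24])
conc = np.exp(-0.1 * times) - np.exp(-1.0 * times)
auc, lz = auc_inf_ttt(times, conc)  # lz recovers ke = 0.1 closely
```

For this profile the true AUC(0-inf) is 1/ke - 1/ka = 9, and the TTT-selected fit recovers the elimination rate constant almost exactly because absorption is essentially complete by 2 x tmax.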

  20. Reliable femoral frame construction based on MRI dedicated to muscles position follow-up.

    PubMed

    Dubois, G; Bonneau, D; Lafage, V; Rouch, P; Skalli, W

    2015-10-01

    In vivo follow-up of muscle shape variation represents a challenge when evaluating muscle development due to disease or treatment. Recent developments in muscle reconstruction techniques indicate MRI as a clinical tool for the follow-up of the thigh muscles. The comparison of 3D muscle shapes from two different sequences is not easy because there is no common frame. This study proposes an innovative method for the reconstruction of a reliable femoral frame based on the femoral head and both condyle centers. In order to make the definition of the condylar spheres more robust, an original method was developed that combines the estimation of the diameters of both condyles from the lateral antero-posterior distance with the estimation of the sphere centers from an optimization process. The influence of spacing between MR slices and of origin positions was studied. For all axes, the proposed method presented an angular error lower than 1° with a spacing between slices of 10 mm, and the optimal position of the origin was identified at 56% of the distance between the femoral head center and the barycenter of both condyles. The high reliability of this method provides a robust frame for clinical follow-up based on MRI.

  1. Extracting More Information from Passive Optical Tracking Observations for Reliable Orbit Element Generation

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Gehly, S.

    2016-09-01

    This paper presents results from a preliminary method for extracting more orbital information from low rate passive optical tracking data. An improvement in the accuracy of the observation data yields more accurate and reliable orbital elements. A comparison between the orbit propagations from the orbital element generated using the new data processing method is compared with the one generated from the raw observation data for several objects. Optical tracking data collected by EOS Space Systems, located on Mount Stromlo, Australia, is fitted to provide a new orbital element. The element accuracy is determined from a comparison between the predicted orbit and subsequent tracking data or reference orbit if available. The new method is shown to result in a better orbit prediction which has important implications in conjunction assessments and the Space Environment Research Centre space object catalogue. The focus is on obtaining reliable orbital solutions from sparse data. This work forms part of the collaborative effort of the Space Environment Management Cooperative Research Centre which is developing new technologies and strategies to preserve the space environment (www.serc.org.au).

  2. Mallard age and sex determination from wings

    USGS Publications Warehouse

    Carney, S.M.; Geis, A.D.

    1960-01-01

    This paper describes characters on the wing plumage of the mallard that indicate age and sex. A key outlines a logical order in which to check age and sex characters on wings. This method was tested and found to be more than 95 percent reliable, although it was found that considerable practice and training with known-age specimens was required to achieve this level of accuracy....The implications of this technique and the sampling procedure it permits are discussed. Wing collections could provide information on production, and, if coupled with a banding program could permit seasonal population estimates to be calculated. In addition, representative samples of wings would provide data to check the reliability of several other waterfowl surveys.

  3. Geotechnical Descriptions of Rock and Rock Masses.

    DTIC Science & Technology

    1985-04-01

    ...determined in the field on core specimens by the standard Rock Testing Handbook methods...to provide rock strength descriptions from the field. The point-load test has proven to be a reliable method of determining rock strength properties...report should qualify the reported spacing values by stating the methods used to determine spacing. Preferably the report should make the determination

  4. Joint Doctrine for Operations in Nuclear, Biological, and Chemical (NBC) Environments

    DTIC Science & Technology

    2000-07-11

    groups may have or be able to acquire military, civilian, and dual-use technologies and methods that provide adequate reliability for selective...procedures, and methods . •• Patient decontamination reduces the threat of contamination-related injury to health service support (HSS) personnel and...passing warnings to workers and units throughout their sites. •• Because of the variety of delivery methods for NBC weapons and the limitations of

  5. Ensemble variant interpretation methods to predict enzyme activity and assign pathogenicity in the CAGI4 NAGLU (Human N-acetyl-glucosaminidase) and UBE2I (Human SUMO-ligase) challenges.

    PubMed

    Yin, Yizhou; Kundu, Kunal; Pal, Lipika R; Moult, John

    2017-09-01

    CAGI (Critical Assessment of Genome Interpretation) conducts community experiments to determine the state of the art in relating genotype to phenotype. Here, we report results obtained using newly developed ensemble methods to address two CAGI4 challenges: enzyme activity for population missense variants found in NAGLU (Human N-acetyl-glucosaminidase) and random missense mutations in Human UBE2I (Human SUMO E2 ligase), assayed in a high-throughput competitive yeast complementation procedure. The ensemble methods are effective, ranked second for SUMO-ligase and third for NAGLU, according to the CAGI independent assessors. However, in common with other methods used in CAGI, there are large discrepancies between predicted and experimental activities for a subset of variants. Analysis of the structural context provides some insight into these. Post-challenge analysis shows that the ensemble methods are also effective at assigning pathogenicity for the NAGLU variants. In the clinic, providing an estimate of the reliability of pathogenic assignments is the key. We have also used the NAGLU dataset to show that ensemble methods have considerable potential for this task, and are already reliable enough for use with a subset of mutations. © 2017 Wiley Periodicals, Inc.

  6. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    NASA Astrophysics Data System (ADS)

    William, R. K.; Stillwell, A. S.

    2015-12-01

    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We then used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was then used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, the understanding of reliability of a single green roof module under different scenarios can support holistic testing of system reliability.

  7. Comparison of two methods for cardiac output measurement in critically ill patients.

    PubMed

    Saraceni, E; Rossi, S; Persona, P; Dan, M; Rizzi, S; Meroni, M; Ori, C

    2011-05-01

    The aim of recent haemodynamic monitoring has been to obtain continuous and reliable measures of cardiac output (CO) and indices of preload responsiveness. Many of these methods are based on arterial pressure waveform analysis. The aim of our study was to assess the accuracy of CO measurements obtained by FloTrac/Vigileo, software version 1.07 and the new version 1.10 (Edwards Lifesciences LLC, Irvine, CA, USA), compared with CO measurements obtained by bolus thermodilution via pulmonary artery catheterization (PAC) in the intensive care setting. In 21 critically ill patients (enrolled in two University Hospitals) requiring invasive haemodynamic monitoring, PAC and FloTrac/Vigileo transducers connected to the arterial pressure line were placed. Simultaneous measurements of CO by the two methods (FloTrac/Vigileo and thermodilution) were obtained three times a day for 3 consecutive days, when possible. The level of concordance between the two methods was assessed by the procedure suggested by Bland and Altman. One hundred and forty-one pairs of measurements (provided by thermodilution and by both the 1.07 and 1.10 FloTrac/Vigileo versions) were obtained in 21 patients (seven of them trauma patients) with a mean (SD) age of 59 (16) yr. The Pearson product moment coefficient was 0.62 (P<0.001). The bias was -0.18 litre min(-1). The limits of agreement were 4.54 and -4.90 litre min(-1), respectively. Our data show a poor level of concordance between measures provided by the two methods. We found an underestimation of CO values measured with the 1.07 software version of FloTrac for supranormal values of CO. The new software (1.10) has been improved in order to correct this bias; however, its reliability is still poor. On the basis of our data, we conclude that neither software version of FloTrac/Vigileo provided reliable estimation of CO in our intensive care unit setting.
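The Bland-Altman procedure cited above reduces to a bias (mean of the paired differences) and 95% limits of agreement (bias ± 1.96 SD of the differences). This sketch uses invented paired CO readings, not the study's measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired CO readings in litre/min (illustrative only):
co_thermo  = [4.1, 5.6, 3.2, 6.8, 4.9, 5.1]
co_flotrac = [4.4, 5.1, 3.9, 6.1, 5.4, 4.6]
bias, lo, hi = bland_altman(co_thermo, co_flotrac)
```

The key judgment is clinical, not statistical: limits of agreement as wide as the reported -4.90 to 4.54 litre/min span a clinically unacceptable range even though the bias (-0.18 litre/min) is small.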

  8. A radio-aware routing algorithm for reliable directed diffusion in lossy wireless sensor networks.

    PubMed

    Kim, Yong-Pyo; Jung, Euihyun; Park, Yong-Jin

    2009-01-01

    In Wireless Sensor Networks (WSNs), transmission errors occur frequently due to node failure, battery discharge, contention or interference by objects. Although Directed Diffusion has been considered as a prominent data-centric routing algorithm, it has some weaknesses due to unexpected network errors. In order to address these problems, we proposed a radio-aware routing algorithm to improve the reliability of Directed Diffusion in lossy WSNs. The proposed algorithm is aware of the network status based on the radio information from MAC and PHY layers using a cross-layer design. The cross-layer design can be used to get detailed information about current status of wireless network such as a link quality or transmission errors of communication links. The radio information indicating variant network conditions and link quality was used to determine an alternative route that provides reliable data transmission under lossy WSNs. According to the simulation result, the radio-aware reliable routing algorithm showed better performance in both grid and random topologies with various error rates. The proposed solution suggested the possibility of providing a reliable transmission method for QoS requests in lossy WSNs based on the radio-awareness. The energy and mobility issues will be addressed in the future work.

  9. Online registration of monthly sports participation after anterior cruciate ligament injury: a reliability and validity study.

    PubMed

    Grindem, Hege; Eitzen, Ingrid; Snyder-Mackler, Lynn; Risberg, May Arna

    2014-05-01

    The current methods measuring sports activity after anterior cruciate ligament (ACL) injury are commonly restricted to the most knee-demanding sports, and do not consider participation in multiple sports. We therefore developed an online activity survey to prospectively record the monthly participation in all major sports relevant to our patient-group. To assess the reliability, content validity and concurrent validity of the survey and to evaluate if it provided more complete data on sports participation than a routine activity questionnaire. 145 consecutively included ACL-injured patients were eligible for the reliability study. The retest of the online activity survey was performed 2 days after the test response had been recorded. A subsample of 88 ACL-reconstructed patients was included in the validity study. The ACL-reconstructed patients completed the online activity survey from the first to the 12th postoperative month, and a routine activity questionnaire 6 and 12 months postoperatively. The online activity survey was highly reliable (κ ranging from 0.81 to 1). It contained all the common sports reported on the routine activity questionnaire. There was a substantial agreement between the two methods on return to preinjury main sport (κ=0.71 and 0.74 at 6 and 12 months postoperatively). The online activity survey revealed that a significantly higher number of patients reported to participate in running, cycling and strength training, and patients reported to participate in a greater number of sports. The online activity survey is a highly reliable way of recording detailed changes in sports participation after ACL injury. The findings of this study support the content and concurrent validity of the survey, and suggest that the online activity survey can provide more complete data on sports participation than a routine activity questionnaire.

  10. Integrating field methodology and web-based data collection to assess the reliability of the Alcohol Use Disorders Identification Test (AUDIT).

    PubMed

    Celio, Mark A; Vetter-O'Hagen, Courtney S; Lisman, Stephen A; Johansen, Gerard E; Spear, Linda P

    2011-12-01

    Field methodologies offer a unique opportunity to collect ecologically valid data on alcohol use and its associated problems within natural drinking environments. However, limitations in follow-up data collection methods have left unanswered questions regarding the psychometric properties of field-based measures. The aim of the current study is to evaluate the reliability of self-report data collected in a naturally occurring environment - as indexed by the Alcohol Use Disorders Identification Test (AUDIT) - compared to self-report data obtained through an innovative web-based follow-up procedure. Individuals recruited outside of bars (N=170; mean age=21; range 18-32) provided a BAC sample and completed a self-administered survey packet that included the AUDIT. BAC feedback was provided anonymously through a dedicated web page. Upon sign in, follow-up participants (n=89; 52%) were again asked to complete the AUDIT before receiving their BAC feedback. Reliability analyses demonstrated that AUDIT scores - both continuous and dichotomized at the standard cut-point - were stable across field- and web-based administrations. These results suggest that self-report data obtained from acutely intoxicated individuals in naturally occurring environments are reliable when compared to web-based data obtained after a brief follow-up interval. Furthermore, the results demonstrate the feasibility, utility, and potential of integrating field methods and web-based data collection procedures. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. Report of the Federation of European Laboratory Animal Science Associations Working Group on animal identification.

    PubMed

    Dahlborn, K; Bugnon, P; Nevalainen, T; Raspa, M; Verbost, P; Spangenberg, E

    2013-01-01

    The primary aim of this report is to assist scientists in selecting more reliable/suitable identification (ID) methods for their studies. This is especially true for genetically altered (GA) animals where individual identification is strictly necessary to link samples, research design and genotype. The aim of this Federation of European Laboratory Animal Science Associations working group was to provide an update of the methods used to identify rodents in different situations and to assess their implications for animal welfare. ID procedures are an indispensable prerequisite for conducting good science but the degree of invasiveness differs between the different methods; therefore, one needs to make a good ethical evaluation of the method chosen. Based on the scientific literature the advantages and disadvantages of various methods have been presented comprehensively and this report is intended as a practical guide for researchers. New upcoming methods have been included next to the traditional techniques. Ideally, an ID method should provide reliable identification, be technically easy to apply and not inflict adverse effects on animals while taking into account the type of research. There is no gold standard method because each situation is unique; however, more studies are needed to better evaluate ID systems and the desirable introduction of new and modern approaches will need to be assessed by detailed scientific evaluation.

  12. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
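The reliability formulation described above, the probability that a limit-state function of random variables falls below zero, can be sketched with a stress-strength example. For independent normal strength R and stress S, the first-order second-moment (FOSM) result is exact and can be checked against a simple Monte Carlo estimate; the distributions and numbers below are illustrative, not the report's laminae data.

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Limit state g = R - S (strength minus stress); failure when g < 0.
mu_r, sd_r = 40.0, 4.0   # strength (illustrative units)
mu_s, sd_s = 25.0, 3.0   # stress

# First-order, second-moment estimate: exact when R and S are normal.
beta = (mu_r - mu_s) / math.sqrt(sd_r**2 + sd_s**2)  # reliability index = 3.0
pf_fosm = phi(-beta)

# Simple Monte Carlo estimate of the same failure probability.
rng = random.Random(0)
n = 200_000
fails = sum(rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s) for _ in range(n))
pf_mc = fails / n
```

With a failure probability around 1.3e-3, plain Monte Carlo needs large samples for a stable estimate, which is exactly why the report weighs FPI methods and importance sampling against simple simulation.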

  13. Sunspot Positions and Areas from Observations by Galileo Galilei

    NASA Astrophysics Data System (ADS)

    Vokhmyanin, M. V.; Zolotova, N. V.

    2018-02-01

    Sunspot records in the seventeenth century provide important information on the solar activity before the Maunder minimum, yielding reliable sunspot indices and the solar butterfly diagram. Galilei's letters to Cardinal Francesco Barberini and Marcus Welser contain daily solar observations on 3 - 11 May, 2 June - 8 July, and 19 - 21 August 1612. These historical archives do not provide the time of observation, which results in uncertainty in the sunspot coordinates. To obtain them, we present a method that minimizes the discrepancy between the sunspot latitudes. We provide areas and heliographic coordinates of 82 sunspot groups. In contrast to Scheiner's butterfly diagram, we found only one sunspot group near the Equator. This indicates a higher reliability of Galilei's drawings. Large sunspot groups were found to emerge at the same longitude in the northern hemisphere from 3 May to 21 August, which indicates an active longitude.

  14. Evaluation of capillary zone electrophoresis for the determination of protein composition in therapeutic immunoglobulins and human albumins.

    PubMed

    Christians, Stefan; van Treel, Nadine Denise; Bieniara, Gabriele; Eulig-Wien, Annika; Hanschmann, Kay-Martin; Giess, Siegfried

    2016-07-01

    Capillary zone electrophoresis (CZE) provides an alternative means of separating native proteins on the basis of their inherent electrophoretic mobilities. The major advantage of CZE is the quantification by UV detection, circumventing the drawbacks of staining and densitometry in the case of gel electrophoresis methods. The data of this validation study showed that CZE is a reliable assay for the determination of protein composition in therapeutic preparations of human albumin and human polyclonal immunoglobulins. Data obtained by CZE are in line with "historical" data obtained by the compendial method, provided that peak integration is performed without time correction. The focus here was to establish a rapid and reliable test to substitute the current gel based zone electrophoresis techniques for the control of protein composition of human immunoglobulins or albumins in the European Pharmacopoeia. We believe that the more advanced and modern CZE method described here is a very good alternative to the procedures currently described in the relevant monographs. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  15. Development and preliminary validation of a questionnaire to measure satisfaction with home care in Greece: an exploratory factor analysis of polychoric correlations

    PubMed Central

    2010-01-01

    Background The primary aim of this study was to develop and psychometrically test a Greek-language instrument for measuring satisfaction with home care. The first empirical evidence about the level of satisfaction with these services in Greece is also provided. Methods The questionnaire resulted from literature search, on-site observation and cognitive interviews. It was applied in 2006 to a sample of 201 enrollees of five home care programs in the city of Thessaloniki and contains 31 items that measure satisfaction with individual service attributes and are expressed on a 5-point Likert scale. The latter has been usually considered in practice as an interval scale, although it is in principle ordinal. We thus treated the variable as an ordinal one, but also employed the traditional approach in order to compare the findings. Our analysis was therefore based on ordinal measures such as the polychoric correlation, Kendall's Tau b coefficient and ordinal Cronbach's alpha. Exploratory factor analysis was followed by an assessment of internal consistency reliability, test-retest reliability, construct validity and sensitivity. Results Analyses with ordinal and interval scale measures produced in essence very similar results and identified four multi-item scales. Three of these were found to be reliable and valid: socioeconomic change, staff skills and attitudes and service appropriateness. A fourth dimension -service planning- had lower internal consistency reliability and yet very satisfactory test-retest reliability, construct validity and floor and ceiling effects. The global satisfaction scale created was also quite reliable. Overall, participants were satisfied -yet not very satisfied- with home care services. More room for improvement seems to exist for the socio-economic and planning aspects of care and less for staff skills and attitudes and appropriateness of provided services. 
Conclusions The methods developed seem to be a promising tool for the measurement of home care satisfaction in Greece. PMID:20602759
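The internal consistency reliability reported above is Cronbach's alpha. As a hedged sketch, this is the conventional (interval-scale) alpha computed from an item-score matrix; the ordinal variant used in the study additionally requires polychoric correlations, which are omitted here, and the Likert responses are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Conventional Cronbach's alpha for an (n respondents x k items) matrix."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses to three satisfaction items.
likert = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(likert)  # high: items co-vary strongly
```

Alpha rises when items co-vary (the total's variance exceeds the sum of item variances), which is why a scale of near-duplicate items can score deceptively high.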

  16. Refinements of the attending equations for several spectral methods that provide improved quantification of B-carotene and/or lycopene in selected foods

    USDA-ARS?s Scientific Manuscript database

    Developing and maintaining maximal levels of carotenoids in fruits and vegetables that contain them is a concern of the produce industry. Toward this end, reliable methods for quantifying lycopene and B-carotene, two of the major health-enhancing carotenoids, are necessary. The goal of this resear...

  17. Unlocking the Barite Paleoproductivity Proxy: Using a New Barite Extraction Method to Understand Productivity Trends During the Eocene Greenhouse

    NASA Astrophysics Data System (ADS)

    House, B. M.; Norris, R. D.

    2017-12-01

    The Early Eocene Climatic Optimum (EECO) around 50 Ma was a sustained period of extreme global warmth with ocean bottom water temperatures of up to 12° C. The marine biologic response to such climatic extremes is unclear, however, in part because proxies that integrate ecosystem-wide productivity signals are scarce. While the accumulation of marine barite (BaSO4) is one such proxy, its applicability has remained limited due to the difficulty in reliably quantifying barite. Discrete measurements of barite content in marine sediments are laborious, and indirect estimates provide unclear results. We have developed a fast, high-throughput method for reliable measurement of barite content that relies on selective extraction of barite rather than sample digestion and quantification of remaining barite. Tests of the new method reveal that it gives the expected results for a wide variety of sediment types and can quantitatively extract 10-100 times the amount of barite typically encountered in natural sediments. Altogether, our method provides an estimated ten-fold increase in analysis efficiency over current sample digestion methods and also works reliably on small ( 1 g or less) sediment samples. Furthermore, the instrumentation requirements of this method are minor, so samples can be analyzed in shipboard labs to generate real-time paleoproductivity records during coring expeditions. Because of the magnitude of throughput improvement, this new technique will permit the generation of large datasets needed to address previously intractable paleoclimate and paleoceanographic questions. One such question is how export productivity changes during climatic extremes. We used our new method to analyze globally distributed sediment cores to determine if the EECO represented a period of anomalous export productivity either due to higher rates of primary production or more vigorous heterotrophic metabolisms. 
An increase in export productivity could provide a mechanism for exiting periods of extreme warmth, and understanding the interplay between temperature, atmospheric CO2 levels, and export productivity during the EECO will help clarify how the marine biologic system functions as a whole.

  18. Use of short messaging services to assess depressive symptoms among refugees in South Africa: Implications for social services providing mental health care in resource-poor settings

    PubMed Central

    Tomita, Andrew; Kandolo, Ka Muzombo; Susser, Ezra; Burns, Jonathan K

    2016-01-01

    Few studies in developing nations have assessed the use of short messaging services (SMS) to identify psychological challenges in refugee populations. This study aimed to assess the feasibility of SMS-based methods to screen for depression risk among refugees in South Africa attending mental health services, and to compare its reliability and acceptability with face-to-face consultation. Of the 153 refugees enrolled at baseline, 135 were available for follow-up assessments in our cohort study. Depression symptomatology was assessed using the 16-item Quick Inventory of Depressive Symptomatology (QIDS) instrument. Nearly everyone possessed a mobile phone and utilized SMS. Furthermore, low incomplete item response in QIDS and high perceived ease of interacting via SMS with service providers supported the feasibility of this method. There was a fair level of reliability between face-to-face and SMS-based screening methods, but no significant difference in preference rating between the two methods. Despite potential implementation barriers (network delay/phone theft), depression screening using SMS may be viable for refugee mental health services in low-resource settings. PMID:26407989

  19. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    PubMed Central

    Cramer, Emily

    2016-01-01

    Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
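    The signal-noise measure studied in this abstract has a simple closed form: the ratio of between-unit variance to total (between- plus within-unit) variance. A minimal sketch in Python (the function name and the variance values are illustrative assumptions, not figures from the study):

    ```python
    def signal_noise_reliability(between_unit_var, within_unit_var):
        """Share of observed variance in unit rates that reflects true
        between-unit differences (signal) rather than sampling noise."""
        return between_unit_var / (between_unit_var + within_unit_var)

    # Hypothetical beta-binomial variance estimates for annual unit rates
    r = signal_noise_reliability(between_unit_var=0.004, within_unit_var=0.012)
    # r = 0.25: three quarters of the observed spread between units is noise
    ```

    The study's central point is that a high or low value of this ratio did not track how precisely units could actually be ranked.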

  20. The multiple mini-interview for selecting medical residents: first experience in the Middle East region.

    PubMed

    Ahmed, Ashraf; Qayed, Khalil Ibrahim; Abdulrahman, Mahera; Tavares, Walter; Rosenfeld, Jack

    2014-08-01

    Numerous studies have shown that the multiple mini-interview (MMI) provides a standard, fair, and more reliable method for assessing applicants. This article presents the first MMI experience for selection of medical residents in a Middle Eastern culture and an Arab country. In 2012, we started using the MMI in interviewing applicants to the residency program of Dubai Health Authority. This interview process consisted of eight eight-minute structured interview scenarios. Applicants rotated through the stations, each with its own interviewer and scenario. They read the scenario and were requested to discuss the issues with the interviewers. Sociodemographic and station assessment data provided for each applicant were analyzed to determine whether the MMI was a reliable assessment of the non-clinical attributes in the present setting of an Arab country. One hundred and eighty-seven candidates from 27 different countries were interviewed for the Dubai Residency Training Program using the MMI. They were graduates of 5 medical universities within the United Arab Emirates (UAE) and 60 different universities outside the UAE. With this applicant pool, an MMI with eight stations produced absolute and relative reliabilities of 0.8 and 0.81, respectively. The person × station interaction contributed 63% of the variance components, the person contributed 34% of the variance components, and the station contributed 2% of the variance components. The MMI has been used in numerous universities in English-speaking countries. The MMI evaluates non-clinical attributes, and this study provides further evidence for its reliability but in a different country and culture. The MMI offers a fair and more reliable assessment of applicants to medical residency programs. The present data show that this assessment technique applied in a non-Western country and Arab culture still produced reliable results.
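    The reported reliabilities follow from the published variance components under a standard generalizability-theory calculation. A sketch in Python (function names are mine, and the authors' exact model may differ in detail):

    ```python
    def relative_reliability(var_person, var_interaction, n_stations):
        # Generalizability ("relative") coefficient: person variance over
        # person variance plus person-by-station error averaged over stations
        return var_person / (var_person + var_interaction / n_stations)

    def absolute_reliability(var_person, var_interaction, var_station, n_stations):
        # Absolute ("dependability") coefficient also treats station variance as error
        return var_person / (var_person + (var_interaction + var_station) / n_stations)

    # Variance components from the abstract: person 34%, person x station 63%, station 2%
    rel = relative_reliability(34, 63, 8)          # ~0.81, as reported
    absolute = absolute_reliability(34, 63, 2, 8)  # ~0.81 (reported as 0.8)
    ```

    With eight stations, the large person-by-station interaction is averaged down enough for the person variance to dominate, which is why the MMI reaches acceptable reliability despite station-level variability.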

  1. HitPredict version 4: comprehensive reliability scoring of physical protein-protein interactions from more than 100 species.

    PubMed

    López, Yosvany; Nakai, Kenta; Patil, Ashwini

    2015-01-01

    HitPredict is a consolidated resource of experimentally identified, physical protein-protein interactions with confidence scores to indicate their reliability. The study of genes and their inter-relationships using methods such as network and pathway analysis requires high quality protein-protein interaction information. Extracting reliable interactions from most of the existing databases is challenging because they either contain only a subset of the available interactions, or a mixture of physical, genetic and predicted interactions. Automated integration of interactions is further complicated by varying levels of accuracy of database content and lack of adherence to standard formats. To address these issues, the latest version of HitPredict provides a manually curated dataset of 398 696 physical associations between 70 808 proteins from 105 species. Manual confirmation was used to resolve all issues encountered during data integration. For improved reliability assessment, this version combines a new score derived from the experimental information of the interactions with the original score based on the features of the interacting proteins. The combined interaction score performs better than either of the individual scores in HitPredict as well as the reliability score of another similar database. HitPredict provides a web interface to search proteins and visualize their interactions, and the data can be downloaded for offline analysis. Data usability has been enhanced by mapping protein identifiers across multiple reference databases. Thus, the latest version of HitPredict provides a significantly larger, more reliable and usable dataset of protein-protein interactions from several species for the study of gene groups. Database URL: http://hintdb.hgc.jp/htp. © The Author(s) 2015. Published by Oxford University Press.

  2. Reliable change indices and standardized regression-based change score norms for evaluating neuropsychological change in children with epilepsy.

    PubMed

    Busch, Robyn M; Lineweaver, Tara T; Ferguson, Lisa; Haut, Jennifer S

    2015-06-01

    Reliable change indices (RCIs) and standardized regression-based (SRB) change score norms permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRB change score norms for use in children with epilepsy. Sixty-three children with epilepsy (age range: 6-16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice effect-adjusted RCIs and SRB change score norms were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children's Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. Reliable change indices and SRB change score norms for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRB change score norms for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
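    As a hedged illustration of the RCI construction this abstract describes, a practice-effect-adjusted RCI can be sketched as follows. All numbers are hypothetical, and the study's actual SRB norms involve regression on clinical and demographic factors beyond this simple form:

    ```python
    import math

    def practice_adjusted_rci(score_t1, score_t2, mean_practice_gain,
                              sd_baseline, test_retest_r):
        """Practice-effect-adjusted reliable change index: retest change beyond
        the average practice gain, scaled by the SE of the difference score."""
        sem = sd_baseline * math.sqrt(1.0 - test_retest_r)  # standard error of measurement
        se_diff = math.sqrt(2.0) * sem                      # SE of a difference score
        return (score_t2 - score_t1 - mean_practice_gain) / se_diff

    # Hypothetical retest: 8-point gain, 3-point average practice effect,
    # baseline SD of 10, test-retest reliability of 0.80
    z = practice_adjusted_rci(100, 108, 3, 10, 0.80)  # ~0.79, within chance variation
    ```

    This also shows why the authors restricted their tables to measures with reliability above 0.50: as the reliability coefficient falls, the standard error of the difference grows and ever larger score changes are needed before a change counts as reliable.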

  3. Development of a diagnostic test set to assess agreement in breast pathology: practical application of the Guidelines for Reporting Reliability and Agreement Studies (GRRAS).

    PubMed

    Oster, Natalia V; Carney, Patricia A; Allison, Kimberly H; Weaver, Donald L; Reisch, Lisa M; Longton, Gary; Onega, Tracy; Pepe, Margaret; Geller, Berta M; Nelson, Heidi D; Ross, Tyler R; Tosteson, Anna N A; Elmore, Joann G

    2013-02-05

    Diagnostic test sets are a valuable research tool that contributes substantially to the validity and reliability of studies that assess agreement in breast pathology. In order to fully understand the strengths and weaknesses of any agreement and reliability study, however, the methods should be fully reported. In this paper we provide a step-by-step description of the methods used to create four complex test sets for a study of diagnostic agreement among pathologists interpreting breast biopsy specimens. We use the newly developed Guidelines for Reporting Reliability and Agreement Studies (GRRAS) as a basis to report these methods. Breast tissue biopsies were selected from the National Cancer Institute-funded Breast Cancer Surveillance Consortium sites. We used random sampling stratified according to the woman's age (40-49 vs. ≥50), parenchymal breast density (low vs. high) and interpretation of the original pathologist. A 3-member panel of expert breast pathologists first independently interpreted each case using five primary diagnostic categories (non-proliferative changes, proliferative changes without atypia, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma). When the experts did not unanimously agree on a case diagnosis, a modified Delphi method was used to determine the reference standard consensus diagnosis. The final test cases were stratified and randomly assigned into one of four unique test sets. We found GRRAS recommendations to be very useful in reporting diagnostic test set development and recommend inclusion of two additional criteria: 1) characterizing the study population and 2) describing the methods for reference diagnosis, when applicable.

  4. Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)

    NASA Technical Reports Server (NTRS)

    DeMott, Diana L.; Bigler, Mark A.

    2017-01-01

    NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as: the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, the experience from individuals who had relevant experience and were familiar with the system and process previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations.
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.

  5. Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Bigler, Mark

    2016-01-01

    NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as: the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. In order to determine what is expected of future operational parameters, the experience from individuals who had relevant experience and were familiar with the system and process previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations.
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.

  6. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.
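    The Monte Carlo approach described here can be illustrated in miniature: simulate Weibull lifetimes, observe survival at an inspection time (which interval-censored data reveal exactly at each inspection), and compare the spread of the resulting reliability estimates across plans. The function name and all parameters below are assumptions for illustration, not the paper's:

    ```python
    import math
    import random

    def rel_estimate_sd(inspect_time, n_units, shape, scale,
                        n_sims=4000, seed=7):
        """Monte Carlo spread of the simple reliability estimate R(t) =
        (survivors at an inspection at time t) / n, for Weibull lifetimes."""
        rng = random.Random(seed)
        estimates = []
        for _ in range(n_sims):
            survivors = sum(
                1 for _ in range(n_units)
                # Inverse-CDF sample of a Weibull(shape, scale) lifetime
                if scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape) > inspect_time
            )
            estimates.append(survivors / n_units)
        mean = sum(estimates) / n_sims
        return (sum((e - mean) ** 2 for e in estimates) / (n_sims - 1)) ** 0.5

    # Same inspection time, more units on test -> tighter reliability estimate
    sd_20 = rel_estimate_sd(500.0, 20, shape=1.5, scale=1000.0)
    sd_80 = rel_estimate_sd(500.0, 80, shape=1.5, scale=1000.0)
    ```

    This toy comparison only varies the number of units; the paper's evaluation additionally trades off inspection frequency, timing schemes, and cost within a fixed budget.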

  7. Choosing a reliability inspection plan for interval censored data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lu; Anderson-Cook, Christine Michaela

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.

  8. Dynamics of psychological crisis experience with psychological consulting by gestalt therapy methods.

    PubMed

    Fahrutdinova, Liliya Raifovna; Nugmanova, Dzhamilia Renatovna

    2015-01-01

    The paper covers the dynamics of experience, including its corporeal, emotional, and cognitive elements, in the context of psychological consulting. The aim of the research was to study the dynamics of psychological crisis experience when psychological consulting is provided using gestalt therapy methods. A theoretical analysis was carried out of crisis situations and of the phenomenological, structural, and dynamic organization of the consulting subject's experience. Test subjects experiencing a crisis situation were selected and studied while receiving psychological consulting by gestalt therapy methods, and a methodology for studying the experience of crisis situations was prepared. The specifics of psychological crisis experience and its elements were identified at different stages of consulting, and reliable changes in the dynamics of the experience and its structural elements were revealed. A "desiccation" of experience is observed: by the end of consulting, the experience sheds its negative content and a new experience of control over the crisis situation develops. Interrelations among the structural elements of experience during consulting were also shown: affecting one element causes reliable changes in all other structural elements. As the consulting sessions demonstrated, effective psychological help can be given to clients in crisis situations using gestalt therapy methods.
The structure of clients' requests was also identified: problems of personal meaning were the most frequent reason for seeking help, along with a perceived absence of choices, obtrusive negative thoughts, a tendency to get stuck on past events, withdrawal into oneself, etc.

  9. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
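    The kind of Bayesian updating that DATMAN automates can be illustrated with the simplest conjugate case: a Beta prior on a component's per-demand failure probability updated with new binomial failure data. This is a sketch under that assumption only; DATMAN itself supports several prior and posterior distributions, and the numbers below are hypothetical:

    ```python
    def beta_update(prior_a, prior_b, failures, demands):
        """Conjugate update of a Beta(prior_a, prior_b) prior on a component's
        per-demand failure probability with newly observed binomial data."""
        return prior_a + failures, prior_b + (demands - failures)

    # Hypothetical component: weak Beta(1, 99) prior (~1% failure probability),
    # then 2 failures observed in 50 new demands
    a, b = beta_update(1.0, 99.0, failures=2, demands=50)
    posterior_failure_prob = a / (a + b)          # (1 + 2) / (100 + 50) = 0.02
    reliability = 1.0 - posterior_failure_prob    # 0.98
    ```

    Each new batch of failure and demand data simply shifts the Beta parameters, which is what makes periodic reliability updates practical for RCM programs.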

  10. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues.

    PubMed

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-06

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks' statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network steadily changes, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.
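    The two attack modes can be sketched with a minimal pure-Python simulation. The toy graph, function names, and the use of the largest connected component as the "ratio of effective nodes" are illustrative assumptions, not the paper's actual network or indices:

    ```python
    import random

    def largest_component_fraction(adj, removed):
        """Fraction of all nodes lying in the largest connected component
        after removing a set of nodes (a simple 'effective nodes' ratio)."""
        alive = set(adj) - set(removed)
        seen, best = set(), 0
        for start in alive:
            if start in seen:
                continue
            stack, size = [start], 0
            seen.add(start)
            while stack:                      # depth-first traversal
                u = stack.pop()
                size += 1
                for v in adj[u]:
                    if v in alive and v not in seen:
                        seen.add(v)
                        stack.append(v)
            best = max(best, size)
        return best / len(adj)

    def attack(adj, n_remove, selective):
        """Remove n_remove nodes: highest-degree hubs if selective,
        uniformly at random otherwise."""
        if selective:
            targets = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:n_remove]
        else:
            targets = random.sample(list(adj), n_remove)
        return largest_component_fraction(adj, targets)

    # A hub-and-spoke ('scale-free-like') toy network: node 0 is the hub
    star = {0: list(range(1, 10)), **{i: [0] for i in range(1, 10)}}
    shattered = attack(star, 1, selective=True)  # removing the hub isolates every leaf
    ```

    Removing the single hub leaves only isolated nodes, while a random removal usually strikes a leaf and leaves the network largely intact, mirroring the scale-free fragility to selective attacks reported in the abstract.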

  11. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues

    PubMed Central

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-01

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks’ statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network steadily changes, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies. PMID:29316614

  12. Final Report to the National Energy Technology Laboratory on FY09-FY13 Cooperative Research with the Consortium for Electric Reliability Technology Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vittal, Vijay

    2015-11-04

    The Consortium for Electric Reliability Technology Solutions (CERTS) was formed in 1999 in response to a call from U.S. Congress to restart a federal transmission reliability R&D program to address concerns about the reliability of the U.S. electric power grid. CERTS is a partnership between industry, universities, national laboratories, and government agencies. It researches, develops, and disseminates new methods, tools, and technologies to protect and enhance the reliability of the U.S. electric power system and the efficiency of competitive electricity markets. It is funded by the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE). This report provides an overview of PSERC and CERTS, of the overall objectives and scope of the research, a summary of the major research accomplishments, highlights of the work done under the various elements of the NETL cooperative agreement, and brief reports written by the PSERC researchers on their accomplishments, including research results, publications, and software tools.

  13. [Reliability and validity of Driving Anger Scale in professional drivers in China].

    PubMed

    Li, Z; Yang, Y M; Zhang, C; Li, Y; Hu, J; Gao, L W; Zhou, Y X; Zhang, X J

    2017-11-10

    Objective: To assess the reliability and validity of the Chinese version of Driving Anger Scale (DAS) in professional drivers in China and provide a scientific basis for the application of the scale in drivers in China. Methods: Professional drivers, including taxi drivers, bus drivers, truck drivers and school bus drivers, were selected to complete the questionnaire. Cronbach's α and split-half reliability were calculated to evaluate the reliability of DAS, and content, construct, discriminant and convergent validity were evaluated to assess the validity of the scale. Results: The overall Cronbach's α of DAS was 0.934 and the split-half reliability was 0.874. The correlation coefficient of each subscale with the total scale was 0.639-0.922. The simplified version of DAS supported a presupposed six-factor structure, explaining 56.371% of the total variance revealed by exploratory factor analysis. The DAS had good convergent and discriminant validity, with a success rate of 100% in the calibration experiment. Conclusion: DAS has good reliability and validity in professional drivers in China, and the use of DAS is worth promoting in drivers.
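    Cronbach's α, the scale's headline reliability statistic, is straightforward to compute from item-level scores. A minimal self-contained sketch (the function name and toy data are mine):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha from per-item score lists (one list per item,
        same respondents in the same order in each list)."""
        k = len(items)
        n = len(items[0])

        def var(xs):  # sample variance (n - 1 denominator)
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

    # Two perfectly parallel items yield alpha = 1.0
    alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
    ```

    Values near the study's 0.934 indicate highly internally consistent items; for real scale data the item lists would be the respondents' answers to each DAS question.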

  14. Retrieving the Polar Mixed-Phase Cloud Liquid Water Path by Combining CALIOP and IIR Measurements

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Wang, Zhien; Li, Xuebin; Deng, Shumei; Huang, Yong; Wang, Yingjian

    2018-02-01

    Mixed-phase cloud (MC) is the dominant cloud type over the polar region, where conditions are challenging for remote sensing and in situ measurements. In this study, a new methodology of retrieving the stratiform MC liquid water path (LWP) by combining Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and infrared imaging radiometer (IIR) measurements was developed and evaluated. This new methodology takes advantage of reliable cloud-phase discrimination by combining lidar and radar measurements. An improved multiple-scattering effect correction method for lidar signals was implemented to provide reliable cloud extinction near cloud top. Then, with the adiabatic cloud assumption, the MC LWP can be retrieved by a lookup-table-based method. Simulations with error-free inputs showed that the mean bias and the root mean squared error of the LWP derived from the new method are -0.23 ± 2.63 g/m2, with a mean absolute relative error of 4%. Simulations with erroneous inputs suggested that the new methodology could provide reliable retrieval of LWP to support statistical or climatology analyses. Two-month A-train satellite retrievals over the Arctic region showed that the new method can produce a very similar cloud top temperature (CTT) dependence of LWP to the ground-based microwave radiometer measurements, with a bias of -0.78 g/m2 and a correlation coefficient of 0.95 between the two mean CTT-LWP relationships. The new approach can also produce a reasonable pattern and value of LWP in spatial distribution over the Arctic region.

  15. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications.

    PubMed

    Wei, Yiqiao; Chen, Jingjun; Hwang, Seung-Hoon

    2018-03-02

    For vehicle-to-vehicle (V2V) communication, such issues as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer due to the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling. However, it has neglected the physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme which adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications.
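    The core idea, falling back to a more robust data rate as the number of connectable neighbors grows, can be sketched as a simple threshold lookup. The thresholds and rates below are hypothetical illustrations, not values from the paper:

    ```python
    def select_data_rate(n_neighbors, rate_table=((5, 12.0), (15, 6.0)),
                         fallback=3.0):
        """Neighbor-count-triggered rate selection: sparse topologies use a
        faster rate, denser ones fall back to more robust (lower) rates.
        Thresholds and Mbps values are illustrative only."""
        for max_neighbors, rate in rate_table:
            if n_neighbors <= max_neighbors:
                return rate
        return fallback  # most robust rate under heavy congestion

    rate = select_data_rate(3)   # few neighbors -> fastest rate in the table
    ```

    In a real scheme the trigger would come from periodically counting beacon messages heard from nearby vehicles, and the rate table would be tuned to the modulation and coding options of the V2V radio.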

  16. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information

    PubMed Central

    Wang, Xiaohong; Wang, Lizhi

    2017-01-01

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930

  17. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    PubMed

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, the lifetime characteristics of the equipment in a system differ, and their failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method that integrates multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logical correlations. This method is applied to the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.

  18. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications

    PubMed Central

    Wei, Yiqiao; Chen, Jingjun

    2018-01-01

    For vehicle-to-vehicle (V2V) communication, issues such as continuity and reliability still have to be solved. Specifically, a more scalable physical layer must be considered because of the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling; however, it has neglected physical topology changes in the vehicular network. In this paper, we propose a physical topology-triggered adaptive transmission scheme that adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications. PMID:29498646

  19. Environmental Control and Life Support System Reliability for Long-Duration Missions Beyond Lower Earth Orbit

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.; Nelson, Jason R.

    2014-01-01

    NASA has highlighted reliability as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, no consensus has been reached on what is meant by improving on reliability, or on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project hosted a series of events at Johnson Space Center with the intended goal of establishing a common language and understanding of NASA's reliability goals, and equipping the projects with acceptable means of assessing the respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools, and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop that included members of the Environmental Control and Life Support System and AES communities. The goal of this workshop was to develop a consensus on what reliability means to AES and identify methods for assessing low- to mid-technology readiness level technologies for reliability. This paper details the results of that workshop.

  20. ECLSS Reliability for Long Duration Missions Beyond Lower Earth Orbit

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.; Nelson, Jason

    2014-01-01

    Reliability has been highlighted by NASA as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, there is no consensus on what is meant by improving on reliability, nor on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project (WRP) hosted a series of events at the NASA Johnson Space Center (JSC) with the intended goal of establishing a common language and understanding of our reliability goals, and equipping the projects with acceptable means of assessing our respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop at JSC with members of the ECLSS and AES communities with the goal of developing a consensus on what reliability means to AES and identifying methods for assessing our low to mid-technology readiness level (TRL) technologies for reliability. This paper details the results of the workshop.

  1. ECLSS Reliability for Long Duration Missions Beyond Lower Earth Orbit

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.; Nelson, Jason

    2014-01-01

    Reliability has been highlighted by NASA as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, there is no consensus on what is meant by improving on reliability, nor on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project (WRP) hosted a series of events at the NASA Johnson Space Center (JSC) with the intended goal of establishing a common language and understanding of our reliability goals and equipping the projects with acceptable means of assessing our respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop at JSC with members of the ECLSS and AES communities with the goal of developing a consensus on what reliability means to AES and identifying methods for assessing our low to mid-technology readiness level (TRL) technologies for reliability. This paper details the results of the workshop.

  2. The Statin-Associated Muscle Symptom Clinical Index (SAMS-CI): Revision for Clinical Use, Content Validation, and Inter-rater Reliability.

    PubMed

    Rosenson, Robert S; Miller, Kate; Bayliss, Martha; Sanchez, Robert J; Baccara-Dinet, Marie T; Chibedi-De-Roche, Daniela; Taylor, Beth; Khan, Irfan; Manvelian, Garen; White, Michelle; Jacobson, Terry A

    2017-04-01

    The Statin-Associated Muscle Symptom Clinical Index (SAMS-CI) is a method for assessing the likelihood that a patient's muscle symptoms (e.g., myalgia or myopathy) were caused or worsened by statin use. The objectives of this study were to prepare the SAMS-CI for clinical use, estimate its inter-rater reliability, and collect feedback from physicians on its practical application. For content validity, we conducted structured in-depth interviews with its original authors as well as with a panel of independent physicians. Estimation of inter-rater reliability involved an analysis of 30 written clinical cases which were scored by a sample of physicians. A separate group of physicians provided feedback on the clinical use of the SAMS-CI and its potential utility in practice. Qualitative interviews with providers supported the content validity of the SAMS-CI. Feedback on the clinical use of the SAMS-CI included several perceived benefits (such as brevity, clear wording, and simple scoring process) and some possible concerns (workflow issues and applicability in primary care). The inter-rater reliability of the SAMS-CI was estimated to be 0.77 (confidence interval 0.66-0.85), indicating high concordance between raters. With additional provider feedback, a revised SAMS-CI instrument was created, suitable for further testing both in the clinical setting and in prospective validation studies. With standardized questions, vetted language, easily interpreted scores, and demonstrated reliability, the SAMS-CI aims to estimate the likelihood that a patient's muscle symptoms were attributable to statins. The SAMS-CI may support better detection of statin-associated muscle symptoms in clinical practice, optimize treatment for patients experiencing muscle symptoms, and provide a useful tool for further clinical research.
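
    The inter-rater reliability figure quoted above is an intraclass correlation. One common estimator when several raters score the same set of cases is the two-way random-effects, single-rater ICC(2,1) of Shrout and Fleiss; the sketch below implements that estimator (not necessarily the exact variant used in this study):

```python
def icc_2_1(scores):
    """Two-way random-effects, single-rater ICC(2,1) (Shrout & Fleiss).
    `scores` is a list of rows: one row per subject, one column per rater."""
    n = len(scores)        # subjects
    k = len(scores[0])     # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Perfect agreement between two raters gives ICC = 1
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

    Note that ICC(2,1) penalizes systematic rater bias: two raters whose scores differ by a constant offset still rank subjects identically but yield an ICC below 1.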

  3. Measuring decision quality: psychometric evaluation of a new instrument for breast cancer surgery

    PubMed Central

    2012-01-01

    Background The purpose of this paper is to examine the acceptability, feasibility, reliability and validity of a new decision quality instrument that assesses the extent to which patients are informed and receive treatments that match their goals. Methods Cross-sectional mail survey of recent breast cancer survivors, providers and healthy controls and a retest survey of survivors. The decision quality instrument includes knowledge questions and a set of goals, and results in two scores: a breast cancer surgery knowledge score and a concordance score, which reflects the percentage of patients who received treatments that match their goals. Hypotheses related to acceptability, feasibility, discriminant validity, content validity, predictive validity and retest reliability of the survey instrument were examined. Results We had responses from 440 eligible patients, 88 providers and 35 healthy controls. The decision quality instrument was feasible to implement in this study, with low missing data. The knowledge score had good retest reliability (intraclass correlation coefficient = 0.70) and discriminated between providers and patients (mean difference 35%, p < 0.001). The majority of providers felt that the knowledge items covered content that was essential for the decision. Five of the 6 treatment goals met targets for content validity. The five goals had moderate to strong retest reliability (0.64 to 0.87). The concordance score was 89%, indicating that a majority had treatments concordant with that predicted by their goals. Patients who had concordant treatment had similar levels of confidence and regret as those who did not. Conclusions The decision quality instrument met the criteria of feasibility, reliability, discriminant and content validity in this sample. Additional research to examine performance of the instrument in prospective studies and more diverse populations is needed. PMID:22681763

  4. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well a manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  5. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  6. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD

    PubMed Central

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A

    2018-01-01

    Purpose The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. Conclusion The TIRE measures of MIP, SMIP and ID have excellent test–retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP. PMID:29805255

  7. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  8. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. Stack reliability is directly determined by component reliability, which is affected by material properties and contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We investigated the influence of the parameter variation coefficient on the probability distribution of contact stress using an equivalent stiffness model and the first-order second moment method. The optimal contact stress that keeps each component at the highest reliability level is obtained with the stress-strength interference model. To achieve this optimal contact stress between components, the component thickness and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structures for highly reliable fuel cell stacks.
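
    The stress-strength interference model mentioned in this abstract has a closed form under the common assumption of independent, normally distributed stress and strength: R = Φ(β), where β is the first-order reliability index. The sketch below uses illustrative numbers, not values from the paper:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference with independent normal stress and
    strength: R = P(strength > stress) = Phi(beta), where beta is the
    first-order reliability (safety) index."""
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return normal_cdf(beta)

# Illustrative: contact stress of mean 1.2 (sd 0.1) against a component
# strength of mean 1.5 (sd 0.1), in consistent units
print(interference_reliability(1.5, 0.1, 1.2, 0.1))
```

    When the mean stress equals the mean strength, β = 0 and R = 0.5; widening the margin or reducing either scatter raises β and hence the reliability.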

  9. Molecular Alterations that Reduce Cortical Containment of Tubulin Microtentacles and their Impact on Breast Tumor Metastasis

    DTIC Science & Technology

    2010-04-01

    This technique provides a reliable method of determining cellular plasticity and metastatic potential [5, 6]. These studies are nearing completion and...directed chemotherapies could influence CTCs and highlights the need to investigate these effects. Current methods for detecting CTCs in metastatic...understanding of how these compounds influence detached and circulating tumor cells. Materials and methods Cell culture MCF-10A human MECs were

  10. Value and Methods for Molecular Subtyping of Bacteria

    NASA Astrophysics Data System (ADS)

    Moorman, Mark; Pruett, Payton; Weidman, Martin

    Tracking sources of microbial contaminants has been a concern since the early days of commercial food processing; however, recent advances in the development of molecular subtyping methods have provided tools that allow more rapid and highly accurate determinations of these sources. Only individuals with an understanding of the molecular subtyping methods, and the epidemiological techniques used, can evaluate the reliability of a link between a food-manufacturing plant, a food, and a foodborne disease outbreak.

  11. Screening for anthracnose disease resistance in strawberry using a detached leaf assay

    USDA-ARS?s Scientific Manuscript database

    Inoculation of detached strawberry leaves with Colletotrichum species may provide a rapid, non-destructive method of identifying anthracnose resistant germplasm. The reliability and validity of assessing disease severity is critical to disease management decisions. We inoculated detached strawberr...

  12. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athlete's vertical jumps.
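
    The flight-time-to-height conversion underlying both the IR platform and the HSC-Kinovea method follows from projectile motion: assuming the body configuration at takeoff matches that at landing, h = g·t²/8. A minimal sketch:

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t):
    """Vertical jump height (m) from flight time t (s): h = g * t^2 / 8.
    Assumes the body configuration at takeoff equals that at landing,
    so time up equals time down (t/2 each) and h = g * (t/2)^2 / 2."""
    return G * t ** 2 / 8.0

# A 0.50 s flight time corresponds to roughly 0.31 m
print(round(jump_height_from_flight_time(0.50), 3))
```

    Because height scales with the square of flight time, small timing errors (e.g. from a low camera frame rate) are amplified in the height estimate, which is why a 240 fps camera is needed.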

  13. Ultrasound semi-automated measurement of fetal nuchal translucency thickness based on principal direction estimation

    NASA Astrophysics Data System (ADS)

    Yoon, Heechul; Lee, Hyuntaek; Jung, Haekyung; Lee, Mi-Young; Won, Hye-Sung

    2015-03-01

    The objective of this paper is to introduce a novel method for nuchal translucency (NT) boundary detection and thickness measurement; NT is one of the most significant markers in the early screening of chromosomal defects, namely Down syndrome. To improve the reliability and reproducibility of NT measurements, several automated methods have been introduced. However, their performance degrades when NT borders are tilted due to varying fetal movements. Therefore, we propose an NT measurement method based on principal direction estimation to provide reliable and consistent performance regardless of both fetal position and NT direction. First, the Radon transform and a cost function are used to estimate the principal direction of the NT borders. Then, in the estimated angle bin, i.e., the main direction of the NT, gradient-based features are employed to find initial NT lines, which serve as starting points for the active contour fitting that locates the actual NT borders. Finally, the maximum thickness is measured from the distances between the upper and lower borders of the NT by searching along the lines orthogonal to the main NT direction. To evaluate the performance, 89 in vivo fetal images were collected, and the ground-truth database was measured by clinical experts. Quantitative results using intraclass correlation coefficients and difference analysis verify that the proposed method can improve the reliability and reproducibility of maximum NT thickness measurement.

  14. The Research Diagnostic Criteria for Temporomandibular Disorders. I: overview and methodology for assessment of validity.

    PubMed

    Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O

    2010-01-01

    The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.

  15. Identification of reliable gridded reference data for statistical downscaling methods in Alberta

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Gupta, A.

    2017-12-01

    Climate models provide essential information to assess the impacts of climate change at regional and global scales. However, statistical downscaling methods must be applied to prepare climate model data for applications such as hydrologic and ecologic modelling at the watershed scale. Because the reliability and the spatial and temporal resolution of statistically downscaled climate data depend mainly on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for the key climate variables that serve as the main inputs to regional modelling systems. However, inconsistencies among these products, for example, different combinations of climate variables, varying data domains and lengths, and accuracy that varies with the physiographic characteristics of the landscape, pose significant challenges in selecting the most suitable reference climate data for environmental studies and modelling. Employing several observation-based daily gridded climate products available in the public domain, i.e., thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis), and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of each product at every grid point by comparison with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation and minimum and maximum temperature over the province of Alberta. Based on the performance of the products at AHCCD stations, we ranked the reliability of these publicly available climate products for station elevations discretized into several classes, and from these ranks identified the most reliable product for the elevation of each target point. A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of its grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections can be significantly improved.
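
    The per-elevation-class ranking described in this abstract can be sketched as follows. The product names, the 500 m band width, and the use of mean absolute error as the score are illustrative assumptions, not details from the study:

```python
def rank_products_by_elevation(errors, band=500.0):
    """Rank gridded climate products within elevation classes.
    `errors` maps product name -> list of (station_elevation_m, abs_error)
    pairs against station observations. Stations are binned into
    `band`-metre elevation classes; within each class products are
    ordered by mean absolute error (lowest, i.e. most reliable, first)."""
    classes = {}
    for product, pairs in errors.items():
        for elev, err in pairs:
            cls = int(elev // band)
            classes.setdefault(cls, {}).setdefault(product, []).append(err)
    ranking = {}
    for cls, by_product in classes.items():
        mae = {p: sum(v) / len(v) for p, v in by_product.items()}
        ranking[cls] = sorted(mae, key=mae.get)
    return ranking

# Hypothetical errors: one low-elevation and one high-elevation station each
errors = {
    "ANUSPLIN": [(300, 0.8), (1200, 2.5)],
    "CaPA":     [(300, 1.1), (1200, 1.4)],
}
print(rank_products_by_elevation(errors))
```

    A lookup against such a ranking is what lets a user pick the most reliable reference product for the elevation of any given target grid cell.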

  16. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method for deploying technical characteristics based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD is used to describe the correlations between failure mechanisms, soft failures, and hard failures. Considering the correlation of multiple failure modes, the reliability loss of one failure mode to the whole part is defined, and a calculation and analysis model for this reliability loss is presented. According to the reliability loss, the reliability index value of the whole part is allocated to each failure mode. On the basis of this deployment of the reliability index value, the inverse reliability method is employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method are illustrated by a development case of a machining centre's transmission system.

  17. A Novel Method for Quantifying Helmeted Field of View of a Spacesuit - And What It Means for Constellation

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.

    2010-01-01

    Field of view has always been a design feature paramount to helmet design, and in particular spacesuit design, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. Historically, suited field of view has been evaluated either qualitatively in parallel with design or quantitatively using various test methods and protocols. As such, oftentimes legacy suit field of view information is either ambiguous for lack of supporting data or contradictory to other field of view tests performed with different subjects and test methods. This paper serves to document a new field of view testing method that is more reliable and repeatable than its predecessors. It borrows heavily from standard ophthalmologic field of vision tests such as the Goldmann kinetic perimetry test, but is designed specifically for evaluating field of view of a spacesuit helmet. In this test, four suits utilizing three different helmet designs were tested for field of view. Not only do these tests provide more reliable field of view data for legacy and prototype helmet designs, they also provide insight into how helmet design impacts field of view and what this means for the Constellation Project spacesuit helmet, which must meet stringent field of view requirements that are more generous to the crewmember than legacy designs.

  18. Next-generation sequencing coupled with a cell-free display technology for high-throughput production of reliable interactome data

    PubMed Central

    Fujimori, Shigeo; Hirai, Naoya; Ohashi, Hiroyuki; Masuoka, Kazuyo; Nishikimi, Akihiko; Fukui, Yoshinori; Washio, Takanori; Oshikubo, Tomohiro; Yamashita, Tatsuhiro; Miyamoto-Sato, Etsuko

    2012-01-01

    Next-generation sequencing (NGS) has been applied to various kinds of omics studies, resulting in many biological and medical discoveries. However, high-throughput protein-protein interactome datasets derived from detection by sequencing are scarce, because protein-protein interaction analysis requires many cell manipulations to examine the interactions. The low reliability of the high-throughput data is also a problem. Here, we describe a cell-free display technology combined with NGS that can improve both the coverage and reliability of interactome datasets. The completely cell-free method gives a high-throughput and a large detection space, testing the interactions without using clones. The quantitative information provided by NGS reduces the number of false positives. The method is suitable for the in vitro detection of proteins that interact not only with the bait protein, but also with DNA, RNA and chemical compounds. Thus, it could become a universal approach for exploring the large space of protein sequences and interactome networks. PMID:23056904

  19. The impact of leadership and team behavior on standard of care delivered during human patient simulation: a pilot study for undergraduate medical students.

    PubMed

    Carlson, Jim; Min, Elana; Bridges, Diane

    2009-01-01

    Methodology to train team behavior during simulation has received increased attention, but standard performance measures are lacking, especially at the undergraduate level. Our purposes were to develop a reliable team behavior measurement tool and explore the relationship between team behavior and the delivery of an appropriate standard of care specific to the simulated case. The authors developed a unique team measurement tool based on previous work. Trainees participated in a simulated event involving the presentation of acute dyspnea. Performance was rated by separate raters using the team behavior measurement tool. Interrater reliability was assessed. The relationship between team behavior and the standard of care delivered was explored. The instrument proved to be reliable for this case and group of raters. Team behaviors had a positive relationship with the standard of medical care delivered specific to the simulated case. The methods used provide a possible approach for training and assessing team performance during simulation.

  20. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  1. Research of vibration controlling based on programmable logic controller for electrostatic precipitator

    NASA Astrophysics Data System (ADS)

    Zhang, Zisheng; Li, Yanhu; Li, Jiaojiao; Liu, Zhiqiang; Li, Qing

    2013-03-01

    In order to improve the reliability, stability and automation of an electrostatic precipitator (ESP), the vibration-motor circuits and the vibration-control ladder diagram program were developed using a high-performance Schneider PLC and the Twidosoft programming software. Operational results show that after adopting the PLC, the vibration motor runs automatically; compared with a traditional vibration control system based on a single-chip microcomputer, it offers higher reliability, better stability and a higher dust removal rate, with dust emission concentrations <= 50 mg m-3, providing a new method for vibration control of ESPs.

  2. NASA Electronic Parts and Packaging Program

    NASA Technical Reports Server (NTRS)

    Kayali, Sammy

    2000-01-01

    NEPP program objectives are to: (1) Assess the reliability of newly available electronic parts and packaging technologies for usage on NASA projects through validations, assessments, and characterizations, and the development of test methods/tools; (2) Expedite infusion paths for advanced (emerging) electronic parts and packaging technologies by evaluations of readiness for manufacturability and project usage consideration; (3) Provide NASA projects with technology selection, application, and validation guidelines for electronic parts and packaging hardware and processes; and (4) Retain and disseminate electronic parts and packaging quality assurance, reliability validations, tools, and availability information to the NASA community.

  3. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexity, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.

  4. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.

  5. Reducing case ascertainment costs in US population studies of Alzheimer's disease, dementia, and cognitive impairment—Part 1*

    PubMed Central

    Weir, David R.; Wallace, Robert B.; Langa, Kenneth M.; Plassman, Brenda L.; Wilson, Robert S.; Bennett, David A.; Duara, Ranjan; Loewenstein, David; Ganguli, Mary; Sano, Mary

    2011-01-01

    Establishing methods for ascertainment of dementia and cognitive impairment that are accurate and also cost effective is a challenging enterprise. Large population-based studies often using administrative data sets offer relatively inexpensive but reliable estimates of severe conditions including moderate to advanced dementia that are useful for public health planning, but they can miss less severe cognitive impairment which may be the most effective point for intervention. Clinical and epidemiological cohorts, intensively assessed, provide more sensitive detection of less severe cognitive impairment but are often costly. Here, several approaches to ascertainment are evaluated for validity, reliability, and cost. In particular, the methods of ascertainment from the Health and Retirement Study (HRS) are described briefly, along with those of the Aging, Demographics, and Memory Study (ADAMS). ADAMS, a resource-intense sub-study of the HRS, was designed to provide diagnostic accuracy among persons with more advanced dementia. A proposal to streamline future ADAMS assessments is offered. Also considered are decision tree, algorithmic, and web-based approaches to diagnosis that can reduce the expense of clinical expertise and, in some contexts, can reduce the extent of data collection. These approaches are intended for intensively assessed epidemiological cohorts. The goal is valid and reliable detection with efficient and cost-effective tools. PMID:21255747

  6. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
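
    The kind of quantity SURE bounds can be illustrated with a much simpler discrete-time stand-in. The sketch below iterates a small absorbing Markov chain to obtain a death-state (system failure) probability; the states and transition probabilities are invented, and SURE itself works on semi-Markov models and computes rigorous upper and lower bounds, which this toy does not:

```python
def failure_probability(P, start, failure_state, steps):
    """Probability of reaching the absorbing failure state within `steps`
    transitions of a discrete-time Markov chain (transition matrix P given
    as a list of rows). A crude stand-in for the death-state probabilities
    that SURE bounds for semi-Markov models."""
    n = len(P)
    dist = [0.0] * n
    dist[start] = 1.0
    for _ in range(steps):
        nxt = [0.0] * n
        for i, p in enumerate(dist):
            if p:
                for j in range(n):
                    nxt[j] += p * P[i][j]
        dist = nxt
    return dist[failure_state]

# 3-state example: 0 = all good, 1 = one component failed, 2 = system failure.
P = [[0.999, 0.001, 0.0],
     [0.0,   0.990, 0.010],
     [0.0,   0.0,   1.0]]   # state 2 is absorbing
pf = failure_probability(P, start=0, failure_state=2, steps=1000)
```

    The failure probability is non-decreasing in the mission length, which is why SURE reports bounds for a stated operating interval.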

  7. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  8. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. 
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representation and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
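
    For the simplest case, the FORM quantities described above can be computed directly. The sketch below assumes a linear limit state g = R - S with independent normal variables, for which the safety index and failure probability are exact rather than approximate; the numeric values are illustrative only:

```python
import math
from statistics import NormalDist

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """First Order Reliability Method for the linear limit state
    g = R - S with independent normal R (resistance) and S (load).
    Returns the safety index beta and failure probability Pf = Phi(-beta).
    For this linear-normal case FORM is exact; in general it is a
    first-order approximation at the most probable failure point."""
    beta = (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)
    pf = NormalDist().cdf(-beta)
    return beta, pf

# Illustrative resistance/load statistics.
beta, pf = form_linear(mu_r=500.0, sigma_r=50.0, mu_s=300.0, sigma_s=40.0)
# beta = 200 / sqrt(50^2 + 40^2) ≈ 3.12
```

    For nonlinear limit states or non-normal variables, the design point must be found iteratively (e.g., by the Hasofer-Lind-Rackwitz-Fiessler algorithm), which is the MPP search discussed in the report.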

  9. Formulation and application of optimal homotopy asymptotic method to coupled differential-difference equations.

    PubMed

    Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen

    2015-01-01

    In this paper we applied a new analytic approximation technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of approximate solutions when compared with other methods of solution found in the literature. The obtained solutions show that OHAM is effective, simple, and explicit.

  10. Formulation and Application of Optimal Homotopy Asymptotic Method to Coupled Differential-Difference Equations

    PubMed Central

    Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen

    2015-01-01

    In this paper we applied a new analytic approximation technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of approximate solutions when compared with other methods of solution found in the literature. The obtained solutions show that OHAM is effective, simple, and explicit. PMID:25874457

  11. The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations.

    PubMed

    Ghanbari, Behzad

    2014-01-01

    We aim to study the convergence of the homotopy analysis method (HAM) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing an analytic treatment for solving such equations.

  12. Reliability of real-time ultrasound for the assessment of transversus abdominis function.

    PubMed

    Kidd, Adrian W; Magee, Scott; Richardson, Carolyn A

    2002-07-01

    Transversus abdominis (TrA) has now been established as a key muscle for the stabilization of the lumbar spine and sacroiliac joints. Significantly, dysfunction of this muscle has also been implicated in low back pain. Real-time ultrasound (US) is a non-invasive procedure with the potential to evaluate TrA function objectively. The aim was to investigate M-mode US as a reliable method of assessing TrA function. M-mode US was used to measure the width of TrA as subjects drew in their lower abdominal wall at a controlled speed to a target depth. Eleven subjects were imaged. The measures of TrA width were reliable and ranged between 3.14 mm relaxed and 6.35 mm contracted. The standard error of measurement ranged between 0.18 mm and 0.57 mm. M-mode US provides a reliable, non-invasive measure of a controlled contraction of TrA.
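
    The reported standard error of measurement follows from the reliability coefficient by the standard formula SEM = SD * sqrt(1 - ICC). A minimal sketch with illustrative numbers (not the study's actual data):

```python
import math

def standard_error_of_measurement(sd, icc):
    """SEM = SD * sqrt(1 - ICC): the within-subject measurement noise
    implied by an observed between-subject SD and a reliability
    coefficient (intraclass correlation)."""
    return sd * math.sqrt(1.0 - icc)

# Illustrative numbers only: SD of 1.2 mm and ICC of 0.91.
sem = standard_error_of_measurement(1.2, 0.91)  # ≈ 0.36 mm
```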

  13. A Conceptual Design for a Reliable Optical Bus (ROBUS)

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Malekpour, Mahyar; Torres, Wilfredo

    2002-01-01

    The Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) is a new family of fault-tolerant architectures under development at NASA Langley Research Center (LaRC). The SPIDER is a general-purpose computational platform suitable for use in ultra-reliable embedded control applications. The design scales from a small configuration supporting a single aircraft function to a large distributed configuration capable of supporting several functions simultaneously. SPIDER consists of a collection of simplex processing elements communicating via a Reliable Optical Bus (ROBUS). The ROBUS is an ultra-reliable, time-division multiple access broadcast bus with strictly enforced write access (no babbling idiots) providing basic fault-tolerant services using formally verified fault-tolerance protocols including Interactive Consistency (Byzantine Agreement), Internal Clock Synchronization, and Distributed Diagnosis. The conceptual design of the ROBUS is presented in this paper including requirements, topology, protocols, and the block-level design. Verification activities, including the use of formal methods, are also discussed.

  14. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software is not only vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. The defense of management decisions can be greatly strengthened by combining engineering judgment with statistical analysis. Unlike hardware, software exhibits no wearout and its redundancies are costly, making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to represent the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.
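
    The abstract does not specify the paper's statistical model, but a classic model in the same spirit is the Goel-Okumoto software reliability growth model, mu(t) = a * (1 - exp(-b t)), whose asymptote a estimates the total fault content. The sketch below fits it to hypothetical cumulative failure counts by a crude grid search; the data and parameter ranges are invented for illustration:

```python
import math

def fit_goel_okumoto(times, cum_failures):
    """Least-squares grid-search fit of the Goel-Okumoto model
    mu(t) = a * (1 - exp(-b t)) to cumulative failure counts.
    Returns (a, b); a - max(cum_failures) then estimates the
    expected number of faults still to be found."""
    n_max = max(cum_failures)
    best = (float("inf"), None, None)
    for a in [n_max * (1 + k / 100.0) for k in range(1, 201)]:
        for b in [j / 1000.0 for j in range(1, 501)]:
            sse = sum((c - a * (1 - math.exp(-b * t))) ** 2
                      for t, c in zip(times, cum_failures))
            if sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical weekly cumulative failure counts from integration testing.
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
cum = [12, 21, 28, 33, 37, 40, 42, 43]
a, b = fit_goel_okumoto(weeks, cum)
remaining = a - cum[-1]   # estimated faults not yet observed
```

    In practice maximum-likelihood estimation replaces the grid search, and the fitted curve supports exactly the kind of test-termination criteria the abstract mentions.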

  15. Constructing the "Best" Reliability Data for the Job

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.; Kleinhammer, R. K.

    2014-01-01

    Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component that can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project-specific would provide more accurate analysis, and hopefully a better final decision.

  16. Constructing the Best Reliability Data for the Job

    NASA Technical Reports Server (NTRS)

    Kleinhammer, R. K.; Kahn, J. C.

    2014-01-01

    Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component that can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project-specific would provide more accurate analysis, and hopefully a better final decision.

  17. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
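
    Estimating the transition probabilities from an observed state sequence, with confidence intervals, can be sketched as follows. The states, the sequence, and the choice of Wilson score intervals are illustrative assumptions, not the experiment's actual design:

```python
import math

def transition_estimates(observations, n_states, z=1.96):
    """Estimate Markov-chain transition probabilities from an observed
    state sequence and attach Wilson score confidence intervals, in the
    spirit of estimating error-state transitions from simulated
    process-control testing. Returns {(i, j): (p_hat, lo, hi)}."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(observations, observations[1:]):
        counts[a][b] += 1
    out = {}
    for i in range(n_states):
        n = sum(counts[i])
        if n == 0:
            continue  # state i never observed as a source
        for j in range(n_states):
            p = counts[i][j] / n
            centre = (p + z * z / (2 * n)) / (1 + z * z / n)
            half = (z / (1 + z * z / n)) * math.sqrt(
                p * (1 - p) / n + z * z / (4 * n * n))
            out[(i, j)] = (p, max(0.0, centre - half), min(1.0, centre + half))
    return out

# Toy sequence over states {0: correct, 1: single-version error, 2: system error}.
seq = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 2, 0, 0, 1, 0]
est = transition_estimates(seq, n_states=3)
```

    Propagating the interval endpoints (rather than the point estimates) through the reliability model is what yields a failure probability at a given confidence level, as the abstract describes.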

  18. Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.

    PubMed

    Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David

    2015-08-01

    Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to DSM-5 field-trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability.
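
    The kappa statistic behind these reliability estimates is straightforward to compute. A minimal sketch with invented test-retest diagnoses (the labels and ratings are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two sets of
    categorical ratings, the statistic used for the field-trial
    diagnostic reliability estimates."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical test-retest diagnoses for eight patients.
first = ["MDD", "MDD", "GAD", "MDD", "GAD", "None", "MDD", "None"]
second = ["MDD", "GAD", "GAD", "MDD", "None", "None", "MDD", "GAD"]
kappa = cohens_kappa(first, second)  # ≈ 0.43, "fair" agreement
```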

  19. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
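
    The core correction is simple: the observed slope is divided by the reliability ratio (lambda) of the risk-factor measurement. A minimal sketch, with numbers chosen for illustration rather than taken from the insulin example:

```python
def correct_for_regression_dilution(observed_slope, reliability):
    """Classical correction for regression dilution: with a continuous
    risk factor measured with random error, the expected observed slope
    is attenuated by the reliability ratio lambda (0 < lambda <= 1), so
    the corrected slope is the observed slope divided by lambda."""
    if not 0 < reliability <= 1:
        raise ValueError("reliability ratio must be in (0, 1]")
    return observed_slope / reliability

# Illustrative: a slope attenuated by 25% measurement-error variance.
corrected = correct_for_regression_dilution(observed_slope=0.30,
                                            reliability=0.75)  # 0.40
```

    The reliability ratio itself is estimated from the repeated measurements collected in the reliability study, which is why the article's design and sample-size tools focus on that sub-study.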

  20. Reliability and validity of the adolescent health profile-types.

    PubMed

    Riley, A W; Forrest, C B; Starfield, B; Green, B; Kang, M; Ensminger, M

    1998-08-01

    The purpose of this study was to demonstrate the preliminary reliability and validity of a set of 13 profiles of adolescent health that describe distinct patterns of health and health service requirements across four domains of health. Reliability and validity were tested in four ethnically diverse population samples of urban and rural youths aged 11 to 17 years in public schools (N = 4,066). The reliability of the classification procedure and construct validity were examined in terms of the predicted and actual distributions of age, gender, race, socioeconomic status, and family type. School achievement, medical conditions, and the proportion of youths with a psychiatric disorder also were examined as tests of construct validity. The classification method was shown to produce consistent results across the four populations in terms of proportions of youths assigned with specific sociodemographic characteristics. Variations in health described by specific profiles showed expected relations to sociodemographic characteristics, family structure, school achievement, medical disorders, and psychiatric disorders. This taxonomy of health profile-types appears to effectively describe a set of patterns that characterize adolescent health. The profile-types provide a unique and practical method for identifying subgroups having distinct needs for health services, with potential utility for health policy and planning. Such integrative reporting methods are critical for more effective utilization of health status instruments in health resource planning and policy development.

  1. Statistical summaries of fatigue data for design purposes

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.

    1983-01-01

    Two methods are discussed for constructing a design curve on the safe side of fatigue data. Both the tolerance interval and equivalent prediction interval (EPI) concepts provide such a curve while accounting for both the distribution of the estimators in small samples and the data scatter. The EPI is also useful as a mechanism for providing necessary statistics on S-N data for a full reliability analysis which includes uncertainty in all fatigue design factors. Examples of statistical analyses of the general strain life relationship are presented. The tolerance limit and EPI techniques for defining a design curve are demonstrated. Examples using WASPALOY B and RQC-100 data demonstrate that a reliability model could be constructed by considering the fatigue strength and fatigue ductility coefficients as two independent random variables. A technique given for establishing the fatigue strength for high cycle lives relies on an extrapolation technique and also accounts for "runners." A reliability model or design value can be specified.
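
    A one-sided lower tolerance bound of the kind used for such design curves can be sketched as follows, using the standard normal-theory approximation for the k factor; the sample statistics and the 95/95 choice are illustrative assumptions:

```python
import math
from statistics import NormalDist

def lower_tolerance_bound(mean, sd, n, coverage=0.95, confidence=0.95):
    """Approximate one-sided lower tolerance bound mean - k*sd, covering
    `coverage` of the population with `confidence`, using the common
    normal-theory approximation for the k factor. A design curve on the
    safe side of S-N fatigue data can be drawn by applying this bound at
    each stress or life level."""
    zp = NormalDist().inv_cdf(coverage)    # population coverage quantile
    zc = NormalDist().inv_cdf(confidence)  # confidence quantile
    a = 1 - zc * zc / (2 * (n - 1))
    b = zp * zp - zc * zc / n
    k = (zp + math.sqrt(zp * zp - a * b)) / a
    return mean - k * sd

# Illustrative: log-life sample of n=10 specimens at one stress level.
bound = lower_tolerance_bound(mean=5.2, sd=0.15, n=10)
```

    The k factor shrinks toward the simple normal quantile as n grows, reflecting the small-sample estimator uncertainty the abstract emphasizes.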

  2. Tissue tonometry is a simple, objective measure for pliability of burn scar: is it reliable?

    PubMed

    Lye, Ian; Edgar, Dale W; Wood, Fiona M; Carroll, Sara

    2006-01-01

    Objective measurement of burn scar response to treatment is important to facilitate individual patient care, research, and service development. This work examines the validity and reliability of the tonometer as a means of quantifying scar pliability. Ten burn survivors were recruited into the study. Triplicate measures were taken at each of four scar points and one normal skin point. The pliability score from the Vancouver Scar Scale also was used as a comparison. The tonometer demonstrated a high degree of reliability (intraclass correlation coefficients 0.91-0.94). It also was shown to provide a valid measure of pliability by quantifying decreased tissue deformation for scar (2.04 +/- 0.45 mm) compared with normal tissue (3.02 +/- 0.92 mm; t = 4.28, P = .004) and a moderate correlation with Vancouver Scar Scale scores. The tissue tonometer provides a repeatable, objective index of burn scar pliability. Using the methods described, it is a simple, clinically useful technique for monitoring an individual's scar.
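    The intraclass correlation coefficient reported above can be computed from triplicate measures with a one-way random-effects ANOVA. The sketch below uses a generic ICC(1,1) formula and invented tonometry readings; it is not the study's data or analysis code.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_trials) array.

    ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    # Between-subject and within-subject mean squares from one-way ANOVA.
    ss_between = k * ((row_means - grand) ** 2).sum()
    ss_within = ((data - row_means[:, None]) ** 2).sum()
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical triplicate tonometry readings (mm) at five sites.
readings = [
    [2.1, 2.0, 2.2],
    [1.5, 1.6, 1.5],
    [3.0, 2.9, 3.1],
    [2.4, 2.5, 2.4],
    [1.8, 1.8, 1.9],
]
print(round(icc_oneway(readings), 2))  # high value: sites differ, repeats agree
```

    Large between-site spread with tight repeats yields an ICC near 1, which is the pattern the reported 0.91-0.94 coefficients reflect.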

  3. Reliability of stellar inclination estimated from asteroseismology: analytical criteria, mock simulations and Kepler data analysis

    NASA Astrophysics Data System (ADS)

    Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi

    2018-05-01

    Advances in asteroseismology of solar-like stars now provide a unique method to estimate the stellar inclination i⋆. This enables evaluation of the spin-orbit angle of transiting planetary systems, in a complementary fashion to the Rossiter-McLaughlin effect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic under-estimate (over-estimate) bias for stars with high (low) inclinations. We derive analytical criteria for a reliable asteroseismic estimate, which indicate that reliable measurements are possible in the range of 20° ≲ i⋆ ≲ 80° only for stars with high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclinations. Comparison of our asteroseismic estimates of v sin i⋆ against spectroscopic measurements indicates that the latter suffer from a large uncertainty, possibly due to the modeling of macro-turbulence, especially for stars with projected rotation speed v sin i⋆ ≲ 5 km/s. This reinforces earlier claims, and the stellar inclination estimated from the combination of spectroscopic and photometric-variation measurements for slowly rotating stars needs to be interpreted with caution.

  4. Method of detecting genetic translocations identified with chromosomal abnormalities

    DOEpatents

    Gray, Joe W.; Pinkel, Daniel; Tkachuk, Douglas

    2001-01-01

    Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods and reagents are provided for the detection of genetic rearrangements. Probes and test kits are provided for use in detecting genetic rearrangements, particularly for use in tumor cytogenetics, in the detection of disease related loci, specifically cancer, such as chronic myelogenous leukemia (CML) and for biological dosimetry. Methods and reagents are described for cytogenetic research, for the differentiation of cytogenetically similar but genetically different diseases, and for many prognostic and diagnostic applications.

  5. Method of detecting genetic deletions identified with chromosomal abnormalities

    DOEpatents

    Gray, Joe W; Pinkel, Daniel; Tkachuk, Douglas

    2013-11-26

    Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods and reagents are provided for the detection of genetic rearrangements. Probes and test kits are provided for use in detecting genetic rearrangements, particularly for use in tumor cytogenetics, in the detection of disease related loci, specifically cancer, such as chronic myelogenous leukemia (CML) and for biological dosimetry. Methods and reagents are described for cytogenetic research, for the differentiation of cytogenetically similar but genetically different diseases, and for many prognostic and diagnostic applications.

  6. Emergency First Responders' Experience with Colorimetric Detection Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandra L. Fox; Keith A. Daum; Carla J. Miller

    2007-10-01

    Nationwide, first responders from state and federal support teams respond to hazardous materials incidents, industrial chemical spills, and potential weapons of mass destruction (WMD) attacks. Although first responders have sophisticated chemical, biological, radiological, and explosive detectors available for assessment of the incident scene, simple colorimetric detectors have a role in response actions. The large number of colorimetric chemical detection methods available on the market can make the selection of the proper methods difficult. Although each detector has unique aspects to provide qualitative or quantitative data about the unknown chemicals present, not all detectors provide consistent, accurate, and reliable results. Here, in a consumer-report-style format, we provide “boots on the ground” information directly from first responders about how well colorimetric chemical detection methods meet their needs in the field and how they procure these methods.

  7. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. First, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Third, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
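    As a rough illustration of Kalman-filter SOC tracking (not the paper's full EKF with PI-based error adjustment), the sketch below assumes a linear OCV curve and omits the RC branch, so the extended filter reduces to an ordinary Kalman filter; all battery parameters are hypothetical.

```python
import numpy as np

# Assumed cell parameters (illustrative only).
A_OCV, B_OCV = 0.8, 3.3    # OCV(soc) = 0.8*soc + 3.3 volts
R0 = 0.05                  # ohmic resistance, ohms
Q_AH = 2.0                 # cell capacity, Ah
DT = 1.0                   # sample time, s

def kf_soc(currents, voltages, soc0=0.5, p0=0.1, q=1e-7, r=1e-4):
    """Track SOC from current (A, discharge positive) and terminal voltage (V)."""
    soc, p = soc0, p0
    out = []
    for i, v in zip(currents, voltages):
        # Predict: coulomb counting; the state transition is the identity.
        soc = soc - i * DT / (Q_AH * 3600.0)
        p = p + q
        # Update: measurement model v = a*soc + b - R0*i, Jacobian H = a.
        h = A_OCV
        k = p * h / (h * p * h + r)
        soc = soc + k * (v - (A_OCV * soc + B_OCV - R0 * i))
        p = (1 - k * h) * p
        out.append(soc)
    return out

# Simulated 1 A discharge with a deliberately wrong initial SOC guess.
rng = np.random.default_rng(0)
true_soc = 0.9
true_socs, volts = [], []
for _ in range(600):
    true_soc -= 1.0 * DT / (Q_AH * 3600.0)
    true_socs.append(true_soc)
    volts.append(A_OCV * true_soc + B_OCV - R0 * 1.0 + rng.normal(0, 0.002))
est = kf_soc([1.0] * 600, volts, soc0=0.5)
print(abs(est[-1] - true_socs[-1]) < 0.02)
```

    The voltage measurement pulls the estimate from the wrong initial guess toward the true SOC, which is exactly the correction role the EKF plays on top of coulomb counting.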

  8. Fabrication of nano-scale Cu bond pads with seal design in 3D integration applications.

    PubMed

    Chen, K N; Tsang, C K; Wu, W W; Lee, S H; Lu, J Q

    2011-04-01

    A method to fabricate nano-scale Cu bond pads for improving bonding quality in 3D integration applications is reported. The effect of Cu bonding quality on inter-level via structural reliability for 3D integration applications is investigated. We developed a Cu nano-scale-height bond pad structure and fabrication process for improved bonding quality by recessing oxides using a combination of SiO2 CMP process and dilute HF wet etching. In addition, in order to achieve improved wafer-level bonding, we introduced a seal design concept that prevents corrosion and provides extra mechanical support. Demonstrations of these concepts and processes provide the feasibility of reliable nano-scale 3D integration applications.

  9. The reliability of the Glasgow Coma Scale: a systematic review.

    PubMed

    Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R

    2016-01-01

    The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.

  10. Advances in biological dosimetry

    NASA Astrophysics Data System (ADS)

    Ivashkevich, A.; Ohnesorg, T.; Sparbier, C. E.; Elsaleh, H.

    2017-01-01

    Rapid retrospective biodosimetry methods are essential for the fast triage of persons occupationally or accidentally exposed to ionizing radiation. Identification and detection of a radiation specific molecular ‘footprint’ should provide a sensitive and reliable measurement of radiation exposure. Here we discuss conventional (cytogenetic) methods of detection and assessment of radiation exposure in comparison to emerging approaches such as gene expression signatures and DNA damage markers. Furthermore, we provide an overview of technical and logistic details such as type of sample required, time for sample preparation and analysis, ease of use and potential for a high throughput analysis.

  11. 32 CFR 644.45 - Rental value.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... provided. Military necessity has required the construction of many plants which are designed for special... absence of comparable rentals of similar properties or other reliable comparative guides to value for... for the full capacity of an industrial plant as originally designed, and that this method will serve...

  12. 32 CFR 644.45 - Rental value.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... provided. Military necessity has required the construction of many plants which are designed for special... absence of comparable rentals of similar properties or other reliable comparative guides to value for... for the full capacity of an industrial plant as originally designed, and that this method will serve...

  13. 32 CFR 644.45 - Rental value.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... provided. Military necessity has required the construction of many plants which are designed for special... absence of comparable rentals of similar properties or other reliable comparative guides to value for... for the full capacity of an industrial plant as originally designed, and that this method will serve...

  14. 32 CFR 644.45 - Rental value.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... provided. Military necessity has required the construction of many plants which are designed for special... absence of comparable rentals of similar properties or other reliable comparative guides to value for... for the full capacity of an industrial plant as originally designed, and that this method will serve...

  15. 32 CFR 644.45 - Rental value.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... provided. Military necessity has required the construction of many plants which are designed for special... absence of comparable rentals of similar properties or other reliable comparative guides to value for... for the full capacity of an industrial plant as originally designed, and that this method will serve...

  16. Mapping functional connectivity

    Treesearch

    Peter Vogt; Joseph R. Ferrari; Todd R. Lookingbill; Robert H. Gardner; Kurt H. Riitters; Katarzyna Ostapowicz

    2009-01-01

    An objective and reliable assessment of wildlife movement is important in theoretical and applied ecology. The identification and mapping of landscape elements that may enhance functional connectivity is usually a subjective process based on visual interpretations of species movement patterns. New methods based on mathematical morphology provide a generic, flexible,...

  17. Passive Thermal Management of Foil Bearings

    NASA Technical Reports Server (NTRS)

    Bruckner, Robert J. (Inventor)

    2015-01-01

    Systems and methods for passive thermal management of foil bearing systems are disclosed herein. The flow of the hydrodynamic film across the surface of bearing compliant foils may be disrupted to provide passive cooling and to improve the performance and reliability of the foil bearing system.

  18. Improving FHWA's Ability to Assess Highway Infrastructure Health : National Meeting Report

    DOT National Transportation Integrated Search

    2011-12-08

    The FHWA in coordination with AASHTO conducted a study to define a consistent and reliable method to document infrastructure health with a focus on pavements and bridges on the Interstate System, and to develop a framework for tools that can provide ...

  19. Methods to assess pecan scab

    USDA-ARS?s Scientific Manuscript database

    Pecan scab (Fusicladium effusum [G. Winter]) is the most important disease of pecan in the U.S. Measuring the severity of scab accurately and reliably and providing data amenable to analysis using parametric statistics is important where treatments are being compared to minimize the risk of Type II ...

  20. An enhanced cluster analysis program with bootstrap significance testing for ecological community analysis

    USGS Publications Warehouse

    McKenna, J.E.

    2003-01-01

    The biosphere is filled with complex living patterns and important questions about biodiversity and community and ecosystem ecology are concerned with structure and function of multispecies systems that are responsible for those patterns. Cluster analysis identifies discrete groups within multivariate data and is an effective method of coping with these complexities, but often suffers from subjective identification of groups. The bootstrap testing method greatly improves objective significance determination for cluster analysis. The BOOTCLUS program makes cluster analysis that reliably identifies real patterns within a data set more accessible and easier to use than previously available programs. A variety of analysis options and rapid re-analysis provide a means to quickly evaluate several aspects of a data set. Interpretation is influenced by sampling design and a priori designation of samples into replicate groups, and ultimately relies on the researcher's knowledge of the organisms and their environment. However, the BOOTCLUS program provides reliable, objectively determined groupings of multivariate data.
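    The bootstrap idea behind BOOTCLUS-style significance testing can be sketched as follows: resample the data, recluster, and count how often the observed grouping is recovered. The site-by-species matrix and the two-group stability score below are invented for illustration, not BOOTCLUS's actual procedure or data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)

# Hypothetical site-by-species abundance matrix with two built-in communities.
group_a = rng.poisson([10, 8, 6, 1, 1, 0], size=(5, 6))
group_b = rng.poisson([1, 0, 1, 9, 7, 5], size=(5, 6))
sites = np.vstack([group_a, group_b]).astype(float)

def two_group_labels(data):
    """Cluster sites into two groups with Ward linkage."""
    return fcluster(linkage(data, method="ward"), t=2, criterion="maxclust")

observed = two_group_labels(sites)

# Bootstrap the species (columns) and ask how often the observed two-group
# partition of the sites is recovered -- a simple stability score.
hits = 0
n_boot = 200
for _ in range(n_boot):
    cols = rng.integers(0, sites.shape[1], size=sites.shape[1])
    labels = two_group_labels(sites[:, cols])
    # Compare partitions in a label-permutation-safe way.
    if np.array_equal(labels == labels[0], observed == observed[0]):
        hits += 1
print(hits / n_boot)
```

    A well-separated partition survives nearly every resample, while a grouping driven by noise would be recovered only occasionally, giving an objective basis for deciding which clusters are real.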

  1. Hyperspectral Imaging and SPA-LDA Quantitative Analysis for Detection of Colon Cancer Tissue

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Zhang, D.; Wang, Ch.; Dai, B.; Zhao, M.; Li, B.

    2018-05-01

    Hyperspectral imaging (HSI) has been demonstrated to provide a rapid, precise, and noninvasive method for cancer detection. However, because HSI contains many data, quantitative analysis is often necessary to distill information useful for distinguishing cancerous from normal tissue. To demonstrate that HSI with our proposed algorithm can make this distinction, we built a Vis-NIR HSI setup and made many spectral images of colon tissues, and then used a successive projection algorithm (SPA) to analyze the hyperspectral image data of the tissues. This was used to build an identification model based on linear discrimination analysis (LDA) using the relative reflectance values of the effective wavelengths. Other tissues were used as a prediction set to verify the reliability of the identification model. The results suggest that Vis-NIR hyperspectral images, together with the spectroscopic classification method, provide a new approach for reliable and safe diagnosis of colon cancer and could lead to advances in cancer diagnosis generally.

  2. Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.

    PubMed

    Estevis, Eduardo; Basso, Michael R; Combs, Dennis

    2012-01-01

    A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
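    The two reliable change index (RCI) approaches mentioned above can be sketched in a few lines. The reliability, standard deviation, practice effect, and regression coefficients below are invented for illustration; they are not WAIS-IV norms or the study's estimates.

```python
import math

# Hypothetical test-retest statistics for an IQ-style index.
r_xx = 0.95          # test-retest reliability
sd_base = 15.0       # baseline standard deviation
mean_practice = 5.0  # mean retest gain (practice effect)

def rci_simple(x1, x2):
    """Simple-difference RCI: z-score of change net of the mean practice effect."""
    se_diff = sd_base * math.sqrt(2.0) * math.sqrt(1.0 - r_xx)
    return (x2 - x1 - mean_practice) / se_diff

def rci_regression(x1, x2, slope=0.93, intercept=12.0):
    """Regression RCI: compare the retest score to the score predicted from baseline.

    slope/intercept would come from regressing retest on baseline scores in a
    reference sample; the values here are illustrative only.
    """
    see = sd_base * math.sqrt(1.0 - r_xx ** 2)  # standard error of estimate
    predicted = slope * x1 + intercept
    return (x2 - predicted) / see

# A 10-point gain from a baseline of 110:
print(round(rci_simple(110, 120), 2))      # -> 1.05
print(round(rci_regression(110, 120), 2))  # -> 1.22
```

    Because the regression method conditions the expected retest score on the baseline score, it accounts for regression to the mean as well as practice, which is why it tends to classify change more accurately than the simple difference.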

  3. Transponders as permanent identification markers for domestic ferrets, black-footed ferrets, and other wildlife

    USGS Publications Warehouse

    Fagerstone, Kathleen A.; Johns, Brad E.

    1987-01-01

    A 0.05-g transponder implanted subcutaneously was tested to see if it provided a reliable identification method. In laboratory tests 20 domestic ferrets (Mustela putorius furo) received transponders and were monitored for a minimum of 6 months. None showed signs of inflammation, and necropsies conducted at the end of the study showed no scar tissue or transponder migration. Seven of 23 transponders failed during the test because of leakage through the plastic case, and a glass case is now being manufactured that does not have the leakage problem. During mark-recapture studies in September and October 1985, transponders were implanted in 20 black-footed ferrets (M. nigripes), 11 of which were subsequently recaptured and 9 of which were brought into captivity; none showed signs of inflammation. Transponders provide a reliable new method for identifying hard-to-mark wildlife with a unique, permanent number that can be read with the animal in hand or by remote equipment.

  4. Smart accelerometer

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1992-02-01

    The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.

  5. Smart accelerometer

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1994-05-01

    The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.

  6. Smart accelerometer. [vibration damage detection

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.

  7. A study on reliability of power customer in distribution network

    NASA Astrophysics Data System (ADS)

    Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin

    2017-05-01

    The existing power supply reliability index system is oriented to the power system without considering actual electricity availability on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of the reliability of power customers. By comparison with power supply reliability, the reliability of power customers is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe the reliability of power customers in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has proved that the reliability index system and evaluation method for power customers are reasonable and effective.

  8. Assessing treatment-as-usual provided to control groups in adherence trials: Exploring the use of an open-ended questionnaire for identifying behaviour change techniques.

    PubMed

    Oberjé, Edwin J M; Dima, Alexandra L; Pijnappel, Frank J; Prins, Jan M; de Bruin, Marijn

    2015-01-01

    Reporting guidelines call for descriptions of control group support in equal detail as for interventions. However, how to assess the active content (behaviour change techniques (BCTs)) of treatment-as-usual (TAU) delivered to control groups in trials remains unclear. The objective of this study is to pre-test a method of assessing TAU in a multicentre cost-effectiveness trial of an HIV-treatment adherence intervention. HIV-nurses (N = 21) completed a semi-structured open-ended questionnaire enquiring about TAU adherence counselling. Two coders independently coded BCTs. Completeness and clarity of nurse responses, inter-coder reliabilities and the type of BCTs reported were examined. The clarity and completeness of nurse responses were adequate. Twenty-three of the 26 identified BCTs could be reliably coded (mean κ = .79; mean agreement rate = 96%) and three BCTs scored below κ = .60. Total number of BCTs reported per nurse ranged between 7 and 19 (M = 13.86, SD = 3.35). This study suggests that the TAU open-ended questionnaire is a feasible and reliable tool to capture active content of support provided to control participants in a multicentre adherence intervention trial. Considerable variability in the number of BCTs provided to control patients was observed, illustrating the importance of reliably collecting and accurately reporting control group support.

  9. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    NASA Astrophysics Data System (ADS)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, the performance degradation model based on mechanism of contamination failure and material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively and the estimated lifetimes of the SPTRs under normal condition were obtained through acceleration model (Arrhenius model). The results show good fitting of the degradation model with the experimental data. Finally, we obtained the reliability estimation of SPTRs through using the Weibull distribution. The proposed novel methodology enables us to take less than one year time to estimate the reliability of the SPTRs designed for more than 10 years.
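    The Arrhenius extrapolation step at the core of this kind of ADT analysis can be sketched briefly. The degradation rates, temperatures, and failure threshold below are invented; only the fitting and extrapolation logic follows the standard Arrhenius model.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

temps_c = np.array([40.0, 50.0, 60.0])  # accelerated ambient temperatures, degC
rates = np.array([0.8, 1.6, 3.0])       # observed degradation rates, %/year (assumed)

# Fit ln(rate) = ln(A) - Ea / (kB * T) by least squares.
inv_kT = 1.0 / (K_B * (temps_c + 273.15))
slope, ln_a = np.polyfit(inv_kT, np.log(rates), 1)
ea = -slope  # activation energy, eV

# Extrapolate the rate to an assumed normal operating temperature of 20 degC.
rate_20 = np.exp(ln_a - ea / (K_B * (20.0 + 273.15)))
threshold = 10.0  # allowed total degradation, percent (assumed)
print(f"Ea = {ea:.2f} eV, predicted life at 20 degC = {threshold / rate_20:.1f} years")
```

    Because the fit is linear in 1/(kB·T), a handful of elevated-temperature test points suffices to estimate the activation energy and project a service life far longer than the test duration, which is what makes a 10-year requirement testable in under a year.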

  10. Concordance of DSM-IV Axis I and II diagnoses by personal and informant's interview.

    PubMed

    Schneider, Barbara; Maurer, Konrad; Sargk, Dieter; Heiskel, Harald; Weber, Bernhard; Frölich, Lutz; Georgi, Klaus; Fritze, Jürgen; Seidler, Andreas

    2004-06-30

    The validity and reliability of using psychological autopsies to diagnose a psychiatric disorder is a critical issue. Therefore, interrater and test-retest reliability of the Structured Clinical Interview for DSM-IV Axis I and Personality Disorders and the usefulness of these instruments for the psychological autopsy method were investigated. Diagnoses by informant's interview were compared with diagnoses generated by a personal interview of 35 persons. Interrater reliability and test-retest reliability were assessed in 33 and 29 persons, respectively. Chi-square analysis, kappa and intraclass correlation coefficients, and Kendall's tau were used to determine agreement of diagnoses. Kappa coefficients were above 0.84 for substance-related disorders, mood disorders, and anxiety and adjustment disorders, and above 0.65 for Axis II disorders for interrater and test-retest reliability. Agreement by personal and relative's interview generated kappa coefficients above 0.79 for most Axis I and above 0.65 for most personality disorder diagnoses; Kendall's tau for dimensional individual personality disorder scores ranged from 0.22 to 0.72. Despite the small number of psychiatric disorders in the selected population, the present results provide support for the validity of most diagnoses obtained through the best-estimate method using the Structured Clinical Interview for DSM-IV Axis I and Personality Disorders. This instrument can be recommended as a tool for the psychological autopsy procedure in post-mortem research. Copyright 2004 Elsevier Ireland Ltd.
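    The kappa coefficients reported above measure chance-corrected agreement. As a minimal sketch, Cohen's kappa for two sets of binary diagnoses can be computed as follows; the diagnosis vectors are invented, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Expected agreement if both raters assigned labels independently
    # according to their own marginal frequencies.
    expected = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical personal-interview vs. informant diagnoses (1 = disorder present).
personal  = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
informant = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(personal, informant), 2))  # -> 0.8
```

    Here 9 of 10 diagnoses agree (90% raw agreement), but chance agreement is 50% given the marginals, so kappa is 0.8, in the range the abstract calls good agreement.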

  11. Quantitative Determination of Bioactive Constituents in Noni Juice by High-performance Liquid Chromatography with Electrospray Ionization Triple Quadrupole Mass Spectrometry

    PubMed Central

    Yan, Yongqiu; Lu, Yu; Jiang, Shiping; Jiang, Yu; Tong, Yingpeng; Zuo, Limin; Yang, Jun; Gong, Feng; Zhang, Ling; Wang, Ping

    2018-01-01

    Background: Noni juice has been extensively used as folk medicine for the treatment of arthritis, infections, analgesic, colds, cancers, and diabetes by Polynesians for many years. Due to the lack of standard scientific evaluation methods, various kinds of commercial Noni juice with different quality and price were available on the market. Objective: To establish a sensitive, reliable, and accurate high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry (HPLC-ESI-MS/MS) method for separation, identification, and simultaneous quantitative analysis of bioactive constituents in Noni juice. Materials and Methods: The analytes and eight batches of commercially available samples from different origins were separated and analyzed by the HPLC-ESI-MS/MS method on an Agilent ZORBAX SB-C18 (150 mm × 4.6 mm i.d., 5 μm) column using a gradient elution of acetonitrile-methanol-0.05% glacial acetic acid in water (v/v) at a constant flow rate of 0.5 mL/min. Results: Seven components were identified, and all of the assay parameters were within the required limits. All components showed good linearity (R2 ≥ 0.9993) over the concentration ranges tested. The precision of the assay method was <0.91%, and the repeatability ranged between 1.36% and 3.31%. The accuracy varied from 96.40% to 103.02%, and the relative standard deviations of stability were <3.91%. Samples from the same origin showed similar content, while samples from different origins showed significantly different results. Conclusions: The developed methods provide a reliable basis and should be useful in the establishment of a rational quality control standard for Noni juice. 
SUMMARY A separation, identification, and simultaneous quantitative analysis method for seven bioactive constituents in Noni juice was originally developed by high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry. The presented method was successfully applied to the quality control of eight batches of commercially available samples of Noni juice. This method is simple, sensitive, reliable, accurate, and efficient, with strong specificity, good precision, and a high recovery rate, and provides a reliable basis for quality control of Noni juice. Abbreviations used: HPLC-ESI-MS/MS: High-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry, LOD: Limit of detection, LOQ: Limit of quantitation, S/N: Signal-to-noise ratio, RSD: Relative standard deviations, DP: Declustering potential, CE: Collision energy, MRM: Multiple reaction monitoring, RT: Retention time. PMID:29576704

  12. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    PubMed Central

    2011-01-01

    Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
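The additive scoring step described above is compact enough to sketch. The tariff values and symptom names below are hypothetical stand-ins (real tariffs are learned from the validated verbal autopsy pool), but the mechanics follow the abstract: sum the tariffs of the endorsed signs/symptoms for each cause and take the highest-scoring cause.

```python
# Minimal sketch of the Tariff Method's additive scoring step.
# Tariff table and symptoms are hypothetical illustrations.

def predict_cause(symptoms, tariffs):
    """Sum the tariff of each endorsed symptom for every cause and
    return the cause with the highest total score."""
    scores = {
        cause: sum(sign_tariffs.get(s, 0.0) for s in symptoms)
        for cause, sign_tariffs in tariffs.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical tariffs: tariffs[cause][symptom] = strength of association.
tariffs = {
    "ischemic_heart_disease": {"chest_pain": 5.0, "breathlessness": 2.0},
    "tuberculosis": {"cough": 4.0, "weight_loss": 3.0, "breathlessness": 1.0},
}

cause, scores = predict_cause({"chest_pain", "breathlessness"}, tariffs)
print(cause)   # ischemic_heart_disease (score 7.0 vs 1.0)
```

Population-level cause-specific mortality fractions (CSMFs) then follow by tallying the predicted causes over all verbal autopsies in a dataset.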

  13. Test-retest and between-site reliability in a multicenter fMRI study.

    PubMed

    Friedman, Lee; Stern, Hal; Brown, Gregory G; Mathalon, Daniel H; Turner, Jessica; Glover, Gary H; Gollub, Randy L; Lauriello, John; Lim, Kelvin O; Cannon, Tyrone; Greve, Douglas N; Bockholt, Henry Jeremy; Belger, Aysenil; Mueller, Bryon; Doty, Michael J; He, Jianchun; Wells, William; Smyth, Padhraic; Pieper, Steve; Kim, Seyoung; Kubicki, Marek; Vangel, Mark; Potkin, Steven G

    2008-08-01

    In the present report, estimates of test-retest and between-site reliability of fMRI assessments were produced in the context of a multicenter fMRI reliability study (FBIRN Phase 1, www.nbirn.net). Five subjects were scanned on 10 MRI scanners on two occasions. The fMRI task was a simple block design sensorimotor task. The impulse response functions to the stimulation block were derived using an FIR-deconvolution analysis with FMRISTAT. Six functionally-derived ROIs covering the visual, auditory and motor cortices, created from a prior analysis, were used. Two dependent variables were compared: percent signal change and contrast-to-noise-ratio. Reliability was assessed with intraclass correlation coefficients derived from a variance components analysis. Test-retest reliability was high, but initially, between-site reliability was low, indicating a strong contribution from site and site-by-subject variance. However, a number of factors that can markedly improve between-site reliability were uncovered, including increasing the size of the ROIs, adjusting for smoothness differences, and inclusion of additional runs. By employing multiple steps, between-site reliability for 3T scanners was increased by 123%. Dropping one site at a time and assessing reliability can be a useful method of assessing the sensitivity of the results to particular sites. These findings should provide guidance to others on the best practices for future multicenter studies.
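As a minimal illustration of the variance-components approach, the sketch below computes a one-way random-effects intraclass correlation, ICC(1,1), from a hypothetical subjects-by-sessions table of percent-signal-change values. The full FBIRN analysis partitions variance over subjects, sites, and their interaction; this simplified version uses only between- and within-subject terms.

```python
# One-way random-effects ICC(1,1) from a variance-components (ANOVA) view.
# Rows = subjects, columns = repeated measurements (hypothetical data).

def icc_oneway(data):
    n = len(data)            # subjects
    k = len(data[0])         # measurements per subject
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    # Between-subjects and within-subjects mean squares from one-way ANOVA.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(data, row_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical percent signal change for 5 subjects over 2 visits.
data = [[1.10, 1.05], [0.80, 0.85], [1.40, 1.35], [0.60, 0.70], [1.00, 0.95]]
print(round(icc_oneway(data), 3))   # prints 0.974: high test-retest reliability
```

An ICC near 1 means between-subject differences dominate measurement noise, which is the sense in which the study's test-retest reliability was "high."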

  14. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  15. Measuring the Characteristic Topography of Brain Stiffness with Magnetic Resonance Elastography

    PubMed Central

    Murphy, Matthew C.; Huston, John; Jack, Clifford R.; Glaser, Kevin J.; Senjem, Matthew L.; Chen, Jun; Manduca, Armando; Felmlee, Joel P.; Ehman, Richard L.

    2013-01-01

    Purpose To develop a reliable magnetic resonance elastography (MRE)-based method for measuring regional brain stiffness. Methods First, simulation studies were used to demonstrate how stiffness measurements can be biased by changes in brain morphometry, such as those due to atrophy. Adaptive postprocessing methods were created that significantly reduce the spatial extent of edge artifacts and eliminate atrophy-related bias. Second, a pipeline for regional brain stiffness measurement was developed and evaluated for test-retest reliability in 10 healthy control subjects. Results This technique indicates high test-retest repeatability with a typical coefficient of variation of less than 1% for global brain stiffness and less than 2% for the lobes of the brain and the cerebellum. Furthermore, this study reveals that the brain possesses a characteristic topography of mechanical properties, and also that lobar stiffness measurements tend to correlate with one another within an individual. Conclusion The methods presented in this work are resistant to noise- and edge-related biases that are common in the field of brain MRE, demonstrate high test-retest reliability, and provide independent regional stiffness measurements. This pipeline will allow future investigations to measure changes to the brain’s mechanical properties and how they relate to the characteristic topographies that are typical of many neurologic diseases. PMID:24312570

  16. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty is paid wide attention by engineers and scholars due to reflecting the structural characteristics and the bearing actual situation. The direct integration method, started from the definition of reliability theory, is easy to be understood, but there are still mathematics difficulties in the calculation of multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. Dual neural network consists of two neural networks. The neural network A is used to learn the integrand function, and the neural network B is used to simulate the original function. According to the derivative relationships between the network output and the network input, the neural network B is derived from the neural network A. On this basis, the performance function of normalization is employed in the proposed method to overcome the difficulty of multiple integrations and to improve the accuracy for reliability calculations. The comparisons between the proposed method and Monte Carlo simulation method, Hasofer-Lind method, the mean value first-order second moment method have demonstrated that the proposed method is an efficient and accurate reliability method for structural reliability problems.

  17. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. 
This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  18. Use of Anecdotal Occurrence Data in Species Distribution Models: An Example Based on the White-Nosed Coati (Nasua narica) in the American Southwest

    PubMed Central

    Frey, Jennifer K.; Lewis, Jeremy C.; Guy, Rachel K.; Stuart, James N.

    2013-01-01

    Simple Summary We evaluated the influence of occurrence records with different reliability on the predicted distribution of a unique, rare mammal in the American Southwest, the white-nosed coati (Nasua narica). We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, provided such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. Abstract Species distributions are usually inferred from occurrence records. However, these records are prone to errors in spatial precision and reliability. Although the influence of spatial errors has been fairly well studied, there is little information on the impacts of poor reliability. Reliability of an occurrence record can be influenced by characteristics of the species, conditions during the observation, and the observer’s knowledge. Some studies have advocated use of anecdotal data, while others have advocated more stringent evidentiary standards such as only accepting records verified by physical evidence, at least for rare or elusive species. Our goal was to evaluate the influence of occurrence records with different reliability on species distribution models (SDMs) of a unique mammal, the white-nosed coati (Nasua narica) in the American Southwest. We compared SDMs developed using maximum entropy analysis of combined bioclimatic and biophysical variables and based on seven subsets of occurrence records that varied in reliability and spatial precision. We found that the predicted distributions of the coati based on datasets that included anecdotal occurrence records were similar to those based on datasets that only included physical evidence. 
Coati distribution in the American Southwest was predicted to occur in southwestern New Mexico and southeastern Arizona and was defined primarily by evenness of climate and Madrean woodland and chaparral land-cover types. Coati distribution patterns in this region suggest a good model for understanding the biogeographic structure of range margins. We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, provided such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. PMID:26487405

  19. Enrichment of Female Germline Stem Cells from Mouse Ovaries Using the Differential Adhesion Method.

    PubMed

    Wu, Meng; Xiong, Jiaqiang; Ma, Lingwei; Lu, Zhiyong; Qin, Xian; Luo, Aiyue; Zhang, Jinjin; Xie, Huan; Shen, Wei; Wang, Shixuan

    2018-04-28

    The isolation and establishment of female germline stem cells (FGSCs) is controversial because of questions regarding the reliability and stability of the isolation method using an antibody targeting mouse vasa homologue (MVH), and the molecular mechanism of FGSC self-renewal remains unclear. Thus, a simple and reliable method for sorting FGSCs is needed to study them. We applied the differential adhesion method to enrich FGSCs (DA-FGSCs) from mouse ovaries. Through four rounds of purification and 7-9 subsequent passages, DA-FGSC lines were established. In addition, we assessed the role of the phosphoinositide-3 kinase (PI3K)-AKT pathway in regulating FGSC self-renewal. The obtained DA-FGSCs spontaneously differentiated into oocyte-like cells in vitro and formed functional eggs in vivo that were fertilized and produced healthy offspring. AKT was rapidly phosphorylated when the proliferation rate of FGSCs increased after 10 passages, and the addition of a chemical PI3K inhibitor prevented FGSC self-renewal. Furthermore, over-expression of AKT induced proliferation and differentiation of FGSCs, and c-Myc, Oct-4, and Gdf-9 levels were increased. The differential adhesion method provides a more feasible approach and an easier procedure for establishing FGSC lines than traditional methods. The AKT pathway plays an important role in regulating the proliferation and maintenance of FGSCs. These findings could help promote stem cell studies and provide a better understanding of the causes of ovarian infertility, thereby suggesting potential treatments for infertility. © 2018 The Author(s). Published by S. Karger AG, Basel.

  20. System and method for leveraging human physiological traits to control microprocessor frequency

    DOEpatents

    Shye, Alex; Pan, Yan; Scholbrock, Benjamin; Miller, J. Scott; Memik, Gokhan; Dinda, Peter A; Dick, Robert P

    2014-03-25

    A system and method for leveraging physiological traits to control microprocessor frequency are disclosed. In some embodiments, the system and method may optimize, for example, a particular processor-based architecture based on, for example, end user satisfaction. In some embodiments, the system and method may determine, for example, whether their users are satisfied to provide higher efficiency, improved reliability, reduced power consumption, increased security, and a better user experience. The system and method may use, for example, biometric input devices to provide information about a user's physiological traits to a computer system. Biometric input devices may include, for example, one or more of the following: an eye tracker, a galvanic skin response sensor, and/or a force sensor.

  1. Rapid characterization of microorganisms by mass spectrometry--what can be learned and how?

    PubMed

    Fenselau, Catherine C

    2013-08-01

    Strategies for the rapid and reliable analysis of microorganisms have been sought to meet national needs in defense, homeland security, space exploration, food and water safety, and clinical diagnosis. Mass spectrometry has long been a candidate technique because it is extremely rapid and can provide highly specific information. It has excellent sensitivity. Molecular and fragment ion masses provide detailed fingerprints, which can also be interpreted. Mass spectrometry is also a broad band method--everything has a mass--and it is automatable. Mass spectrometry is a physiochemical method that is orthogonal and complementary to biochemical and morphological methods used to characterize microorganisms.

  2. Reliability apportionment approach for spacecraft solar array using fuzzy reasoning Petri net and fuzzy comprehensive evaluation

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng

    2012-07-01

    The reliability apportionment of the spacecraft solar array is of significant importance for spacecraft designers in the early stage of design. However, it is difficult to resolve the reliability apportionment problem with existing methods because of data insufficiency and the uncertainty of the relations among the components in the mechanical system. This paper proposes a new method which combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism may be apportioned the highest reliability value, while the solar panels and hinges may receive the lowest, before design and manufacturing. Our developed method is of practical significance for the reliability apportionment of solar arrays where the design information has not yet been clearly identified, particularly in the early stage of design.

  3. Semiautomatic segmentation and follow-up of multicomponent low-grade tumors in longitudinal brain MRI studies

    PubMed Central

    Weizman, Lior; Sira, Liat Ben; Joskowicz, Leo; Rubin, Daniel L.; Yeom, Kristen W.; Constantini, Shlomi; Shofty, Ben; Bashat, Dafna Ben

    2014-01-01

    Purpose: Tracking the progression of low-grade tumors (LGTs) is a challenging task, due to their slow growth rate and associated complex internal tumor components, such as heterogeneous enhancement, hemorrhage, and cysts. In this paper, the authors present a semiautomatic method to reliably track the volume of LGTs and the evolution of their internal components in longitudinal MRI scans. Methods: The authors' method utilizes a spatiotemporal evolution modeling of the tumor and its internal components. Tumor component gray-level parameters are estimated from the follow-up scan itself, obviating temporal normalization of gray levels. The tumor delineation procedure effectively incorporates internal classification of the baseline scan in the time-series as prior data to segment and classify a series of follow-up scans. The authors applied their method to 40 MRI scans of ten patients, acquired at two different institutions. Two types of LGTs were included: Optic pathway gliomas and thalamic astrocytomas. For each scan, a “gold standard” was obtained manually by experienced radiologists. The method is evaluated versus the gold standard with three measures: gross total volume error, total surface distance, and reliability of tracking the evolution of tumor components. Results: Compared to the gold standard, the authors' method exhibits a mean Dice similarity volumetric measure of 86.58% and a mean surface distance error of 0.25 mm. In terms of its reliability in tracking the evolution of the internal components, the method exhibits strong positive correlation with the gold standard. Conclusions: The authors' method provides accurate and repeatable delineation of the tumor and its internal components, which is essential for therapy assessment of LGTs. Reliable tracking of internal tumor components over time is novel and potentially will be useful to streamline and improve follow-up of brain tumors with indolent growth and behavior. PMID:24784396

  4. High-resolution audiometry: an automated method for hearing threshold acquisition with quality control.

    PubMed

    Bian, Lin

    2012-01-01

    In clinical practice, hearing thresholds are measured at only five to six frequencies at octave intervals. Thus, the audiometric configuration cannot closely reflect the actual status of the auditory structures. In addition, differential diagnosis requires quantitative comparison of behavioral thresholds with physiological measures, such as otoacoustic emissions (OAEs), which are usually measured at higher resolution. The purpose of this research was to develop a method to improve the frequency resolution of the audiogram. A repeated-measure design was used in the study to evaluate the reliability of the threshold measurements. A total of 16 participants with clinically normal hearing and mild hearing loss were recruited from a population of university students. No intervention was involved in the study. A custom-developed system and software were used for threshold acquisition with quality control (QC). With real-ear calibration and monitoring of test signals, the system provided an accurate and individualized measure of hearing thresholds, which were determined by an analysis based on signal detection theory (SDT). The reliability of the threshold measure was assessed by correlation and differences between the repeated measures. The audiometric configurations were diverse and unique to each individual ear. The accuracy, within-subject reliability, and between-test repeatability were relatively high. With QC, high-resolution audiograms can be reliably and accurately measured. Hearing thresholds measured as ear canal sound pressures with higher frequency resolution can provide more customized hearing-aid fitting. The test system may be integrated with other physiological measures, such as OAEs, into a comprehensive evaluative tool. American Academy of Audiology.

  5. Predicting bioconcentration of chemicals into vegetation from soil or air using the molecular connectivity index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowdy, D.L.; McKone, T.E.; Hsieh, D.P.H.

    1995-12-31

    Bioconcentration factors (BCFs) are the ratio of the chemical concentration found in an exposed organism (in this case a plant) to the concentration in an air or soil exposure medium. The authors examine here the use of molecular connectivity indices (MCIs) as quantitative structure-activity relationships (QSARs) for predicting BCFs for organic chemicals between plants and air or soil. The authors compare the reliability of the octanol-air partition coefficient (Koa) to the MCI-based prediction method for predicting plant/air partition coefficients. The authors also compare the reliability of the octanol/water partition coefficient (Kow) to the MCI-based prediction method for predicting plant/soil partition coefficients. The results indicate that, relative to the use of Kow or Koa as predictors of BCFs, the MCI can substantially increase the reliability with which BCFs can be estimated. The authors find that the MCI provides a relatively precise and accurate method for predicting the potential biotransfer of a chemical from environmental media into plants. In addition, the MCI method is much faster and more cost effective than direct measurements.
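The QSAR idea reduces to a regression of bioconcentration on the connectivity index. A minimal sketch with wholly hypothetical MCI and log10 BCF values (real MCIs are computed from a molecule's bond graph) fits an ordinary least-squares line and uses it to predict the BCF of a new chemical:

```python
# Sketch of a QSAR regression: log10(BCF) modeled as a linear function of
# a molecular connectivity index (MCI). All data points are hypothetical.

def fit_line(xs, ys):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

mci = [2.0, 3.5, 4.1, 5.0, 6.2]       # hypothetical connectivity indices
log_bcf = [0.4, 1.1, 1.3, 1.8, 2.3]   # hypothetical log10 plant/soil BCFs

slope, intercept = fit_line(mci, log_bcf)
pred_log_bcf = slope * 4.5 + intercept   # new chemical with MCI = 4.5
print(round(pred_log_bcf, 2))            # prints 1.53
```

The practical appeal noted in the abstract is that the predictor (MCI) comes from structure alone, with no laboratory partitioning measurement required.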

  6. Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data

    PubMed Central

    Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-01-01

    Because solder joint interconnections are the weak points of microelectronic packaging, their reliability has great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using the Weibull distribution are investigated. The type-I interval censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and the ones calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for the regular maintenance, replacement of substructure, and subsequent processing of electronic products. PMID:25121138

  7. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to address the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment, and constraint updates is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.

  8. Commercialization of NESSUS: Status

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Millwater, Harry R.

    1991-01-01

    A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.

  9. Apparatus, components and operating methods for circulating fluidized bed transport gasifiers and reactors

    DOEpatents

    Vimalchand, Pannalal; Liu, Guohai; Peng, Wan Wang

    2015-02-24

    The improvements proposed in this invention provide a reliable apparatus and method to gasify low-rank coals in a class of pressurized circulating fluidized bed reactors termed "transport gasifier." The embodiments overcome a number of operability and reliability problems with existing gasifiers. The systems and methods address issues related to distribution of the gasification agent without the use of internals, management of heat release to avoid any agglomeration and clinker formation, specific design of bends to withstand the highly erosive environment due to high solid-particle circulation rates, design of a standpipe cyclone to withstand the high-temperature gasification environment, compact design of a seal-leg that can handle a high mass solids flux, design of nozzles that eliminate plugging, uniform aeration of the large-diameter standpipe, oxidant injection at the cyclone exits to effectively modulate gasifier exit temperature, and reduction in overall height of the gasifier with a modified non-mechanical valve.

  10. BDS/GPS Dual Systems Positioning Based on the Modified SR-UKF Algorithm

    PubMed Central

    Kong, JaeHyok; Mao, Xuchu; Li, Shaoyuan

    2016-01-01

    The Global Navigation Satellite System can provide all-day three-dimensional position and speed information. Currently, a single navigation system alone cannot satisfy the requirements of system reliability and integrity. In order to improve the reliability and stability of satellite navigation, a positioning method based on the combined BDS and GPS navigation systems is presented, and the measurement model and the state model are described. Furthermore, a modified square-root Unscented Kalman Filter (SR-UKF) algorithm is employed under BDS and GPS conditions, and analyses of single-system and multi-system positioning are carried out, respectively. The experimental results are compared with the traditional estimation results, which shows that the proposed method can perform highly precise positioning. Especially when the number of visible satellites is not adequate, the proposed method combines the BDS and GPS systems to achieve a higher positioning precision. PMID:27153068

  11. Reliability as Argument

    ERIC Educational Resources Information Center

    Parkes, Jay

    2007-01-01

    Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

  12. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation for computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve the problem of allocating reliability for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. First, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established to overcome these limitations. Properties of the new transformed function are discussed with respect to failure severity and failure occurrence, and designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods: by employing the new cubic transformed function, it covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
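    The abstract does not give the cubic transform itself; the general idea of FMEA-weighted failure-rate allocation can be sketched as follows. The cubic weighting, the inverse-risk convention (higher severity × occurrence means a smaller allocated failure rate, i.e. a tighter reliability requirement), and the subsystem data are all assumptions for illustration, not the paper's actual function:

    ```python
    # Hypothetical sketch of FMEA-based failure-rate allocation; the cubic
    # weighting and subsystem data are invented for illustration.
    def allocate_failure_rates(lambda_sys, fmea):
        """Split a system failure-rate budget across subsystems.

        fmea maps name -> (severity 1-10, occurrence 1-10). Subsystems with a
        higher risk score receive a cubically smaller share of the budget.
        """
        risk = {k: s * o for k, (s, o) in fmea.items()}
        r_max = max(risk.values())
        weights = {k: (r_max / r) ** 3 for k, r in risk.items()}   # cubic transform
        total = sum(weights.values())
        return {k: lambda_sys * w / total for k, w in weights.items()}

    # Invented budget (failures/hour) and FMEA scores for three subsystems
    alloc = allocate_failure_rates(
        1e-4, {"spindle": (8, 6), "feed": (5, 4), "turret": (3, 2)})
    ```

    The cubic exponent is the "transform amplitude" knob: a larger exponent concentrates the reliability requirement more sharply on the high-risk subsystems.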

  13. Ultra-short heart rate variability recording reliability: The effect of controlled paced breathing.

    PubMed

    Melo, Hiago M; Martins, Thiago C; Nascimento, Lucas M; Hoeller, Alexandre A; Walz, Roger; Takase, Emílio

    2018-06-04

    Recent studies have reported that heart rate variability (HRV) indices remain reliable even during recordings shorter than 5 min, suggesting the ultra-short recording method as a valuable tool for autonomic assessment. However, the minimum epoch needed to obtain a reliable record for all HRV domains (time, frequency, and Poincaré geometric measures), as well as the effect of respiratory rate on the reliability of these indices, remains unknown. Twenty volunteers had their HRV recorded in a seated position during spontaneous and controlled respiratory rhythms. HRV intervals of 1, 2, and 3 min were correlated with the gold-standard period (6-min duration), and the mean values of all indices were compared between the two respiratory conditions. rMSSD and SD1 were the most reliable indices for ultra-short recordings at all time intervals (r values from 0.764 to 0.950, p < 0.05) in the spontaneous breathing condition, whereas the other indices required longer recording times to obtain reliable values. The controlled breathing rhythm evoked stronger r values for time-domain indices (r values from 0.83 to 0.99, p < 0.05 for rMSSD) but impaired the replicability of mean values across most time intervals. Although standardized breathing increased the correlation coefficients, all HRV indices showed an increase in mean values (t values from 3.79 to 14.94, p < 0.001) except RR and HF, which decreased (t = 4.14 and 5.96, p < 0.0001). Our results indicate that a proper ultra-short-term recording method can provide a quick and reliable assessment of the cardiac autonomic nervous system. © 2018 Wiley Periodicals, Inc.
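    The two indices found most robust here, rMSSD and SD1, are simple to compute from a beat-to-beat RR series; a minimal sketch (the sample RR values are invented):

    ```python
    import numpy as np

    def rmssd(rr_ms):
        """Root mean square of successive RR-interval differences (ms)."""
        d = np.diff(np.asarray(rr_ms, dtype=float))
        return float(np.sqrt(np.mean(d ** 2)))

    def sd1(rr_ms):
        """Poincaré-plot SD1 (ms): dispersion perpendicular to the identity
        line, i.e. the population SD of successive differences over sqrt(2)."""
        d = np.diff(np.asarray(rr_ms, dtype=float))
        return float(np.std(d) / np.sqrt(2.0))

    rr = [800, 810, 790, 805, 795, 820]   # invented RR intervals in ms
    ```

    Note that SD1 and rMSSD are nearly redundant (SD1 = rMSSD/√2 when the mean successive difference is zero), which is consistent with the two indices behaving identically in the study.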

  14. New design for interfacing computers to the Octopus network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sloan, L.J.

    1977-03-14

    The Lawrence Livermore Laboratory has several large-scale computers connected to the Octopus network. Several difficulties arise in providing adequate resources along with reliable performance. To alleviate some of these problems, a new method of bringing large computers into the Octopus environment is proposed.

  15. Networked Resources, Assessment and Collection Development

    ERIC Educational Resources Information Center

    Samson, Sue; Derry, Sebastian; Eggleston, Holly

    2004-01-01

    This project provides a critical evaluation of networked resources as they relate to the library's collection development policy, identifies areas of the curriculum not well represented, establishes a reliable method of assessing usage across all resources, and develops a framework of quantitative data for collection development decision making.

  16. NDE reliability and probability of detection (POD) evolution and paradigm shift

    NASA Astrophysics Data System (ADS)

    Singh, Surendra

    2014-02-01

    The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed "Have Cracks - Will Travel," or "Have Cracks" for short, conducted by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective: improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. It is therefore essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantification of human factors. Furthermore, reliability and POD have often been treated as synonymous, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME), including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. 
This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics to study controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list the gaps between "hoped for" versus validated or fielded failed hardware.

  17. Organizational readiness for implementing change: a psychometric assessment of a new measure

    PubMed Central

    2014-01-01

    Background Organizational readiness for change in healthcare settings is an important factor in successful implementation of new policies, programs, and practices. However, research on the topic is hindered by the absence of a brief, reliable, and valid measure. Until such a measure is developed, we cannot advance scientific knowledge about readiness or provide evidence-based guidance to organizational leaders about how to increase readiness. This article presents results of a psychometric assessment of a new measure called Organizational Readiness for Implementing Change (ORIC), which we developed based on Weiner’s theory of organizational readiness for change. Methods We conducted four studies to assess the psychometric properties of ORIC. In study one, we assessed the content adequacy of the new measure using quantitative methods. In study two, we examined the measure’s factor structure and reliability in a laboratory simulation. In study three, we assessed the reliability and validity of an organization-level measure of readiness based on aggregated individual-level data from study two. In study four, we conducted a small field study utilizing the same analytic methods as in study three. Results Content adequacy assessment indicated that the items developed to measure change commitment and change efficacy reflected the theoretical content of these two facets of organizational readiness and distinguished the facets from hypothesized determinants of readiness. Exploratory and confirmatory factor analysis in the lab and field studies revealed two correlated factors, as expected, with good model fit and high item loadings. Reliability analysis in the lab and field studies showed high inter-item consistency for the resulting individual-level scales for change commitment and change efficacy. Inter-rater reliability and inter-rater agreement statistics supported the aggregation of individual level readiness perceptions to the organizational level of analysis. 
Conclusions This article provides evidence in support of the ORIC measure. We believe this measure will enable testing of theories about determinants and consequences of organizational readiness and, ultimately, assist healthcare leaders to reduce the number of health organization change efforts that do not achieve desired benefits. Although ORIC shows promise, further assessment is needed to test for convergent, discriminant, and predictive validity. PMID:24410955

  18. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on power-system production cost simulation, probability discretization, and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional power generation is solved. A reliability assessment for the distribution grid is thus implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
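    The probabilistic building blocks named in the abstract can be sketched with the brute-force Monte Carlo baseline the paper's method is designed to avoid, here on a toy one-bus system. All capacities, distribution parameters, the power curve, and the load model are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000   # Monte Carlo samples (hourly states)

    # Wind: Weibull wind speed (shape k=2, scale c=8 m/s), crude power curve
    v = rng.weibull(2.0, N) * 8.0
    wind = np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0) * 2.0   # MW, 2 MW rated
    wind[v > 25.0] = 0.0                                       # cut-out speed

    # Solar: Beta-distributed irradiance fraction scaling a 1.5 MW park
    solar = rng.beta(2.0, 2.0, N) * 1.5

    conv = 4.0                                   # MW conventional capacity
    load = rng.normal(6.0, 0.8, N)               # MW load

    shortfall = np.maximum(load - (conv + wind + solar), 0.0)
    lolp = np.mean(shortfall > 0)                # Loss Of Load Probability
    eens = np.mean(shortfall) * 8760.0           # Expected Energy Not Supplied (MWh/yr)
    ```

    Discretizing the Weibull and Beta distributions into a few probability-weighted levels, as the paper does, replaces the 100,000 samples above with a small deterministic enumeration, which is where the speed-up comes from.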

  19. A Bayesian taxonomic classification method for 16S rRNA gene sequences with improved species-level accuracy.

    PubMed

    Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng

    2017-05-10

    Species-level classification of 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools either do not provide species-level classification or produce unreliable results. The unreliable results stem from limitations of existing methods, which either lack solid probabilistic criteria for evaluating the confidence of their taxonomic assignments or use nucleotide k-mer frequency as a proxy for sequence similarity. We have developed a method that shows significantly improved species-level classification over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum level based on the lowest common ancestors of multiple database hits for each query sequence, and the reliability of each classification is evaluated with bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity between the hit and the query. Our method needs no training datasets specific to different taxonomic groups; only a reference database is required for alignment against the query sequences, making the method easily applicable to different regions of the 16S rRNA gene or to other phylogenetic marker genes. Reliable species-level classification of 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than existing tools, and we provide probabilistic confidence scores to evaluate the reliability of our taxonomic assignments based on multiple database matches to each query sequence. 
Despite its higher computational cost, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes, and it can be applied to the taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA.
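    The core idea, weighting each database hit's lineage by its similarity to the query, can be sketched as a similarity-weighted vote. This is a deliberate simplification, not BLCA's Bayesian posterior or bootstrap procedure, and the hits and the weight exponent are invented:

    ```python
    from collections import defaultdict

    # Hypothetical similarity-weighted taxonomic voting (not BLCA's exact
    # posterior): each hit contributes its lineage with a weight derived
    # from its alignment identity to the query.
    def classify(hits, ranks=("phylum", "genus", "species")):
        """hits: list of (identity in [0, 1], {rank: taxon}) tuples."""
        out = {}
        for rank in ranks:
            votes = defaultdict(float)
            for ident, lineage in hits:
                votes[lineage[rank]] += ident ** 8   # sharpen toward best hits
            best = max(votes, key=votes.get)
            out[rank] = (best, votes[best] / sum(votes.values()))  # taxon, conf.
        return out

    hits = [
        (0.99, {"phylum": "Firmicutes", "genus": "Bacillus", "species": "B. subtilis"}),
        (0.97, {"phylum": "Firmicutes", "genus": "Bacillus", "species": "B. licheniformis"}),
        (0.90, {"phylum": "Firmicutes", "genus": "Paenibacillus", "species": "P. polymyxa"}),
    ]
    result = classify(hits)
    ```

    As expected of an LCA-style scheme, confidence is high at the phylum level (all hits agree) and lower at the species level, where the near-identical top hits split the vote.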

  20. Reliability and limits of agreement of circumferential, water displacement, and optoelectronic volumetry in the measurement of upper limb lymphedema.

    PubMed

    Deltombe, T; Jamart, J; Recloux, S; Legrand, C; Vandenbroeck, N; Theys, S; Hanson, P

    2007-03-01

    We conducted a reliability comparison study to determine the intrarater and inter-rater reliability, and the limits of agreement, of upper limb lymphedema volume estimated by circumferential measurements using the frustum sign method and the disk model method, by water displacement volumetry, and by infrared optoelectronic volumetry. Thirty women with lymphedema following axillary lymph node dissection for breast cancer were enrolled. In each patient, the volumes of the upper limbs were estimated by three physical therapists using circumference measurements, water displacement, and optoelectronic volumetry; one of the physical therapists performed each measurement twice. Intraclass correlation coefficients (ICCs), relative differences, and limits of agreement were determined. Intrarater and interrater reliability ICCs ranged from 0.94 to 1. Intrarater relative differences were 1.9% for the disk model method, 3.2% for the frustum sign method, 2.9% for water displacement volumetry, and 1.5% for optoelectronic volumetry. Intrarater reliability was always better than interrater reliability, except for the optoelectronic method. Intrarater and interrater limits of agreement were calculated for each technique. The disk model method and optoelectronic volumetry showed better reliability than the frustum sign method and water displacement volumetry, the latter usually considered the gold standard. Given its low cost, simplicity, and reliability, we recommend the disk model method as the method of choice in clinical practice. Since intrarater reliability was almost always better than interrater reliability, patients should ideally be evaluated by the same therapist. Additionally, the limits of agreement must be taken into account when determining a patient's response to treatment.
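    For reference, girth-based segmental volumetry of the kind compared here reduces to a simple formula: each segment between consecutive circumference measurements is treated as a truncated cone. The 4-cm measurement interval and the frustum formula are the common convention; the paper's exact disk-model protocol may differ:

    ```python
    import math

    def segmental_limb_volume(circumferences_cm, interval_cm=4.0):
        """Limb volume (mL) from girths measured every `interval_cm` cm.

        Each segment between consecutive circumferences C1, C2 is treated as
        a truncated cone: V = h * (C1^2 + C1*C2 + C2^2) / (12 * pi).
        """
        return sum(
            interval_cm * (c1 * c1 + c1 * c2 + c2 * c2) / (12.0 * math.pi)
            for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]))

    # Sanity check: equal girths reduce to a cylinder, V = h * C^2 / (4 * pi)
    vol = segmental_limb_volume([2 * math.pi * 5.0] * 2)   # r = 5 cm, h = 4 cm
    ```

    With equal circumferences this reduces to the cylinder volume πr²h, here 100π ≈ 314 mL, which is why measurement error in the girths, not the geometry, dominates the reliability comparison.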

  1. Defining new dental phenotypes using 3-D image analysis to enhance discrimination and insights into biological processes

    PubMed Central

    Smith, Richard; Zaitoun, Halla; Coxon, Tom; Karmo, Mayada; Kaur, Gurpreet; Townsend, Grant; Harris, Edward F.; Brook, Alan

    2009-01-01

    Aims In studying aetiological interactions of genetic, epigenetic and environmental factors in normal and abnormal developments of the dentition, methods of measurement have often been limited to maximum mesio-distal and bucco-lingual crown diameters, obtained with hand-held calipers. While this approach has led to many important findings, there are potentially many other informative measurements that can be made to describe dental crown morphology. Advances in digital imaging and computer technology now offer the opportunity to define and measure new dental phenotypes in 3-D that have the potential to provide better anatomical discrimination and clearer insights into the underlying biological processes in dental development. Over recent years, image analysis in 2-D has proved to be a valuable addition to hand-measurement methods but a reliable and rapid 3-D method would increase greatly the morphological information obtainable from natural teeth and dental models. Additional measurements such as crown heights, surface contours, actual surface perimeters and areas, and tooth volumes would maximise our ability to discriminate between samples and to explore more deeply genetic and environmental contributions to observed variation. The research objectives were to investigate the limitations of existing methodologies and to develop and validate new methods for obtaining true 3-D measurements, including curvatures and volumes, in order to enhance discrimination to allow increased differentiation in studies of dental morphology and development. The validity of a new methodology for the 3-D measurement of teeth is compared against an established 2-D system. The intra- and inter-observer reliability of some additional measurements, made possible with a 3-D approach, are also tested. 
Methods and results From each of 20 study models, the permanent upper right lateral and upper left central incisors were separated and imaged independently by two operators using a 2-D image analysis system and a 3-D image analysis system. The mesio-distal (MD), labio-lingual (LL) and inciso-gingival (IG) dimensions were recorded using our 2-D system, and the same projected variables were recorded using a newly developed 3-D system for comparison. Values of Pearson's correlation coefficient between measurements obtained using the two techniques were significant at the 0.01 probability level for the mesio-distal and inciso-gingival variables, and at the 0.05 level for the labio-lingual variable on the upper left side only, confirming their comparability. For both the 2-D and 3-D systems, the intra- and inter-operator reliability was substantial or excellent for the mesio-distal, labio-lingual, and inciso-gingival dimensions (actual and projected) and for the actual surface area; inter-operator reliability for the labio-lingual dimension measured in 3-D was good. Conclusions We have developed a new 3-D laser scanning system that enables additional dental phenotypes to be defined. It has been validated against an established 2-D system and shown to provide measurements with excellent reliability, both within and between operators. This new approach provides exciting possibilities for exploring normal and abnormal variations in dental morphology and development, applicable to research on genetic and environmental factors. PMID:18644585

  2. Methodology and technical requirements of the galectin-3 test for the preoperative characterization of thyroid nodules.

    PubMed

    Bartolazzi, Armando; Bellotti, Carlo; Sciacchitano, Salvatore

    2012-01-01

    In the last decade, the β-galactosyl-binding protein galectin-3 has been the object of extensive molecular, structural, and functional studies aimed at clarifying its biological role in cancer. Multicenter studies have also contributed to revealing the potential clinical value of galectin-3 expression analysis in distinguishing, preoperatively, benign from malignant thyroid nodules. As a consequence, galectin-3 is receiving significant attention as a tumor marker for thyroid cancer diagnosis, but some conflicting results, mostly owing to methodological problems, have been published. The ability to apply a reliable galectin-3 test preoperatively to fine needle aspiration biopsy (FNA)-derived thyroid cells represents an important achievement. When correctly applied, the method consistently reduces the gray area of thyroid FNA cytology, helping to avoid unnecessary thyroid surgery. Although the efficacy and reliability of the galectin-3 test method have been extensively proved in several studies, its translation to the clinical setting requires well-standardized reagents and procedures. After a decade of experimental work on galectin-3-related basic and translational research projects, the major methodological problems that may potentially impair the diagnostic performance of galectin-3 immunotargeting are highlighted and discussed in detail, and a standardized protocol for reliable galectin-3 expression analysis is provided. The aim of this contribution is to improve the clinical management of patients with thyroid nodules by promoting the preoperative use of a reliable galectin-3 test as an ancillary technique to conventional thyroid FNA cytology. The final goal is to decrease unnecessary thyroid surgery and its related social costs.

  3. Methods to approximate reliabilities in single-step genomic evaluation

    USDA-ARS?s Scientific Manuscript database

    Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...

  4. Quantitative Determination of Bioactive Constituents in Noni Juice by High-performance Liquid Chromatography with Electrospray Ionization Triple Quadrupole Mass Spectrometry.

    PubMed

    Yan, Yongqiu; Lu, Yu; Jiang, Shiping; Jiang, Yu; Tong, Yingpeng; Zuo, Limin; Yang, Jun; Gong, Feng; Zhang, Ling; Wang, Ping

    2018-01-01

    Noni juice has been used extensively by Polynesians for many years as a folk medicine for the treatment of arthritis, infections, pain, colds, cancers, and diabetes. Due to the lack of standard scientific evaluation methods, various kinds of commercial Noni juice of differing quality and price are available on the market. The aim was to establish a sensitive, reliable, and accurate high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry (HPLC-ESI-MS/MS) method for the separation, identification, and simultaneous quantitative analysis of bioactive constituents in Noni juice. The analytes and eight batches of commercially available samples from different origins were separated and analyzed by the HPLC-ESI-MS/MS method on an Agilent ZORBAX SB-C18 (150 mm × 4.6 mm i.d., 5 μm) column using gradient elution with acetonitrile-methanol-0.05% glacial acetic acid in water (v/v) at a constant flow rate of 0.5 mL/min. Seven components were identified, and all assay parameters were within the required limits. Calibration curves were linear (R2 ≥ 0.9993) over the concentration ranges tested. The precision of the assay method was <0.91%, and the repeatability was between 1.36% and 3.31%. The accuracy varied from 96.40% to 103.02%, and the relative standard deviations for stability were <3.91%. Samples from the same origin showed similar contents, while samples from different origins showed significantly different results. The developed method provides a reliable basis for, and should be useful in, the establishment of a rational quality-control standard for Noni juice. 
A method for the separation, identification, and simultaneous quantitative analysis of seven bioactive constituents in Noni juice was developed for the first time using high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry. The presented method was successfully applied to the quality control of eight batches of commercially available samples of Noni juice. The method is simple, sensitive, reliable, accurate, and efficient, with strong specificity, good precision, and a high recovery rate, and provides a reliable basis for the quality control of Noni juice. Abbreviations used: HPLC-ESI-MS/MS: high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry; LOD: limit of detection; LOQ: limit of quantitation; S/N: signal-to-noise ratio; RSD: relative standard deviation; DP: declustering potential; CE: collision energy; MRM: multiple reaction monitoring; RT: retention time.

  5. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
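    The exterior penalty idea behind BIGDOT can be sketched in a few lines: constraint violations are added to the objective with a growing multiplier, so each stage is an unconstrained problem. The toy problem, the crude gradient-descent inner solver, and all tuning constants below are invented; BIGDOT's actual unconstrained minimizer is far more sophisticated:

    ```python
    import numpy as np

    def numerical_grad(fn, x, h=1e-6):
        """Central-difference gradient of a scalar function."""
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = h
            g[i] = (fn(x + e) - fn(x - e)) / (2.0 * h)
        return g

    def exterior_penalty(f, gs, x0, r0=1.0, growth=10.0,
                         outer=8, inner=200, lr=1e-2):
        """Minimize f(x) s.t. g(x) <= 0 for each g in gs, via exterior penalties."""
        x = np.asarray(x0, dtype=float)
        r = r0
        for _ in range(outer):
            phi = lambda z, r=r: f(z) + r * sum(max(0.0, g(z)) ** 2 for g in gs)
            for _ in range(inner):             # crude gradient-descent inner loop
                x = x - (lr / (1.0 + r)) * numerical_grad(phi, x)
            r *= growth                        # stiffen the penalty each stage
        return x

    # Toy problem: minimize (x - 2)^2 subject to x <= 1; the optimum is x = 1
    x_star = exterior_penalty(lambda x: (x[0] - 2.0) ** 2,
                              [lambda x: x[0] - 1.0], [0.0])
    ```

    The memory appeal of this scheme is visible even in the sketch: the only persistent state is the current design vector, which is what makes the approach scale to tens of thousands of variables.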

  6. An ultra-high pressure liquid chromatography-tandem mass spectrometry method for the quantification of teicoplanin in plasma of neonates.

    PubMed

    Begou, O; Kontou, A; Raikos, N; Sarafidis, K; Roilides, E; Papadoyannis, I N; Gika, H G

    2017-03-15

    The development and validation of an ultra-high pressure liquid chromatography (UHPLC) tandem mass spectrometry (MS/MS) method was performed with the aim of quantifying plasma teicoplanin concentrations in neonates. Pharmacokinetic data on teicoplanin in the neonatal population are very limited; a sensitive and reliable method for determining all isoforms of teicoplanin in a low sample volume is therefore of great importance. The main teicoplanin components were extracted by a simple acetonitrile precipitation step and analysed on a C18 chromatographic column by a triple quadrupole MS with electrospray ionization. The method provides quantitative data over a linear range of 25-6400 ng/mL, with an LOD of 8.5 ng/mL and an LOQ of 25 ng/mL for total teicoplanin. The method was applied to plasma samples from neonates to support pharmacokinetic studies and proved to be a reliable and fast means of quantifying teicoplanin concentrations in the plasma of infants during therapy in the intensive care unit. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

    The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance; it simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach to the probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed that uses these reliability tools to efficiently propagate probabilistic uncertainty through classical control analysis problems. The method is applied to classical response analysis as well as to analyses that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach to calculate the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
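    As a baseline for what the hybrid reliability method accelerates, plain Monte Carlo propagation of a probabilistic parameter through a classical response metric looks like this. The second-order overshoot formula is standard; the damping-ratio distribution, clipping bounds, and sample count are invented:

    ```python
    import math, random

    # Illustrative baseline: brute-force Monte Carlo propagation of an
    # uncertain damping ratio through a classical response metric. The
    # paper's hybrid method replaces this sampling with reliability tools.
    def overshoot(zeta):
        """Percent overshoot of a standard second-order step response."""
        return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

    random.seed(1)
    samples = [overshoot(min(max(random.gauss(0.5, 0.05), 0.05), 0.95))
               for _ in range(20_000)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    ```

    Unlike an interval bound on the damping ratio, the resulting mean and variance say how likely a given overshoot actually is, which is the extra information the probabilistic analysis provides.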

  8. Skeletal age estimation for forensic purposes: A comparison of GP, TW2 and TW3 methods on an Italian sample.

    PubMed

    Pinchi, Vilma; De Luca, Federica; Ricciardi, Federico; Focardi, Martina; Piredda, Valentina; Mazzeo, Elena; Norelli, Gian-Aristide

    2014-05-01

    Paediatricians, radiologists, anthropologists, and medico-legal specialists are often called as experts to provide age estimation (AE) for forensic purposes. The literature recommends X-rays of the left hand and wrist (HW-XR) for skeletal age estimation. The method most frequently employed is that of Greulich and Pyle (GP); in addition, so-called bone-specific techniques are also applied, including the Tanner-Whitehouse (TW) method in its latest versions, TW2 and TW3. The aim was to compare skeletal age and chronological age in a large sample of children and adolescents using the GP, TW2, and TW3 methods in order to establish which is the most reliable for forensic purposes. The sample consisted of 307 HW-XRs of Italian children and adolescents, 145 females and 162 males, aged between 6 and 20 years. The radiographs were scored according to the GP, TW2RUS, and TW3RUS methods by one investigator, and the reliability of the results was assessed using the intraclass correlation coefficient. The Wilcoxon signed-rank test and Student's t-test were used to search for significant differences between skeletal and chronological ages. Boxplots of the differences between estimated and chronological age show that the median differences for the TW3 and GP methods are generally very close to 0. Hypothesis tests were performed, by sex, both for the entire group and for individuals grouped by age. The results show no significant differences between estimated and chronological age for TW3 and, to a lesser extent, GP; TW2 proved to be the worst of the three methods. Our results support the conclusion that the TW2 method is not reliable for forensic AE. The GP and TW3 methods proved reliable in males; for females, the best method was TW3. When performing forensic age estimation in subjects around 14 years of age, it may be advisable to combine the TW3 and GP methods. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  9. Elucidating unconscious processing with instrumental hypnosis

    PubMed Central

    Landry, Mathieu; Appourchaux, Krystèle; Raz, Amir

    2014-01-01

    Most researchers leverage bottom-up suppression to unlock the underlying mechanisms of unconscious processing. However, a top-down approach – for example via hypnotic suggestion – paves the road to experimental innovation and complementary data that afford new scientific insights concerning attention and the unconscious. Drawing from a reliable taxonomy that differentiates subliminal and preconscious processing, we outline how an experimental trajectory that champions top-down suppression techniques, such as those practiced in hypnosis, is uniquely poised to further contextualize and refine our scientific understanding of unconscious processing. Examining subliminal and preconscious methods, we demonstrate how instrumental hypnosis provides a reliable adjunct that supplements contemporary approaches. Specifically, we provide an integrative synthesis of the advantages and shortcomings that accompany a top-down approach to probe the unconscious mind. Our account provides a larger framework for complementing the results from core studies involving prevailing subliminal and preconscious techniques. PMID:25120504

  10. Magnetic refrigeration for maser amplifier cooling

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.

    1982-01-01

    The development of a multifrequency upconverter-maser system for the DSN has created the need to develop a closed-cycle refrigerator (CCR) capable of providing more than 3 watts of refrigeration capability at 4.5 K. In addition, operating concerns such as the high cost of electrical power consumption and the loss of maser operation due to CCR failures require that improvements be made to increase the efficiency and reliability of the CCR. One refrigeration method considered is the replacement of the Joule-Thomson expansion circuit with a magnetic refrigerator. Magnetic refrigerators can potentially provide reliable and highly efficient refrigeration over a variety of temperature ranges and cooling powers. The concept of magnetic refrigeration is summarized, and a literature review is provided of existing magnetic refrigerator designs that have been built and tested and that may be considered candidates for a 4 K to 15 K magnetic refrigeration stage for the DSN closed-cycle refrigerator.

  11. Do hand-held calorimeters provide reliable and accurate estimates of resting metabolic rate?

    PubMed

    Van Loan, Marta D

    2007-12-01

    This paper provides an overview of a new technique for indirect calorimetry and the assessment of resting metabolic rate. Information from the research literature includes findings on the reliability and validity of a new hand-held indirect calorimeter as well as its use in clinical and field settings. Research findings to date are mixed. The MedGem instrument has provided more consistent results when compared with the Douglas bag method of measuring metabolic rate. The BodyGem instrument has been shown to be less accurate when compared with standard metabolic carts. Furthermore, when the BodyGem has been used with clinical patients or with undernourished individuals, the results have not been acceptable. Overall, there is not a large enough body of evidence to definitively support the use of these hand-held devices for the assessment of metabolic rate in a wide variety of clinical or research environments.

  12. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
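The reweighting idea behind importance sampling can be shown with a plain, non-adaptive sketch for a one-dimensional failure event P(X > β) with X ~ N(0,1); the paper's adaptive, incremental growth of the sampling domain is not reproduced here, and `failure_prob_is` is an illustrative name:

```python
import math
import random

def std_normal_pdf(x):
    """Density of the standard normal distribution."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def failure_prob_is(beta=3.0, n=20000, seed=1):
    """Estimate P(X > beta) for X ~ N(0,1) by sampling from N(beta, 1),
    a density concentrated near the failure region, and reweighting
    each failed sample by the density ratio f(y)/q(y)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(beta, 1.0)          # proposal sample near the failure domain
        if y > beta:                      # indicator of failure
            total += std_normal_pdf(y) / std_normal_pdf(y - beta)
    return total / n
```

With β = 3 the true probability is about 1.35e-3; crude Monte Carlo from the unshifted density would need far more samples for comparable accuracy because almost none of its samples land in the failure region.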

  13. A "shotgun" method for tracing the birth locations of sheep from flock tags, applied to scrapie surveillance in Great Britain.

    PubMed

    Birch, Colin P D; Del Rio Vilas, Victor J; Chikukwa, Ambrose C

    2010-09-01

    Movement records are often used to identify animal sample provenance by retracing the movements of individuals. Here we present an alternative method, which uses the same identity tags and movement records as are used to retrace movements, but ignores individual movement paths. The first step uses a simple query to identify the most likely birth holding for every identity tag included in a database recording departures from agricultural holdings. The second step rejects a proportion of the birth holding locations to leave a list of birth holding locations that are relatively reliable. The method was used to trace the birth locations of sheep sampled for scrapie in abattoirs, or on farm as fallen stock. Over 82% of the sheep sampled in the fallen stock survey died at the holding of birth. This lack of movement may be an important constraint on scrapie transmission. These static sheep provided relatively reliable birth locations, which were used to define criteria for selecting reliable traces. The criteria rejected 16.8% of fallen stock traces and 11.9% of abattoir survey traces. Two tests provided estimates that selection reduced error in fallen stock traces from 11.3% to 3.2%, and in abattoir survey traces from 8.1% to 1.8%. This method generated 14,591 accepted traces of fallen stock from samples taken during 2002-2005 and 83,136 accepted traces from abattoir samples. The absence or ambiguity of flock tag records at the time of slaughter prevented the tracing of 16-24% of abattoir samples during 2002-2004, although flock tag records improved in 2005. The use of internal scoring to generate and evaluate results from the database query, and the confirmation of results by comparison with other database fields, are analogous to methods used in web search engines. Such methods may have wide application in tracing samples and in adding value to biological datasets. Crown Copyright 2010. Published by Elsevier B.V. All rights reserved.
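The two-step query-then-reject logic can be sketched with a dictionary of counters; the record layout `(tag, departure_holding)` and the `min_share` acceptance criterion are assumptions for illustration, not the paper's exact scoring rules:

```python
from collections import Counter, defaultdict

def trace_birth_holdings(movements, min_share=0.6):
    """movements: iterable of (tag, departure_holding) records.
    Step 1: for each identity tag, find its most frequent departure
    holding as the candidate birth holding.
    Step 2: keep only tags whose candidate accounts for at least
    min_share of their records, rejecting ambiguous traces."""
    by_tag = defaultdict(Counter)
    for tag, holding in movements:
        by_tag[tag][holding] += 1
    traces = {}
    for tag, counts in by_tag.items():
        holding, n = counts.most_common(1)[0]
        if n / sum(counts.values()) >= min_share:
            traces[tag] = holding
    return traces
```

Tags whose most frequent departure holding dominates their records yield an accepted trace; ambiguous tags are rejected, mirroring the paper's trade of coverage for reliability.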

  14. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
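STAPLE itself iteratively estimates each input segmentation's sensitivity and specificity and weights votes accordingly; a per-pixel majority vote is a far simpler combination rule, sketched here only to convey why fusing several automated masks can beat any single one:

```python
def majority_vote(masks):
    """Combine binary segmentation masks (equal-length lists of 0/1)
    by per-pixel strict majority -- a simple stand-in for STAPLE."""
    n = len(masks)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]
```

Independent methods tend to make different errors, so pixels misclassified by one mask are usually outvoted by the others.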

  15. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  16. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it provides the assurance that the process indexes reach the standard with at least the acceptable probability. In addition, with the probability-based method there is no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
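The probability calculation described above can be sketched as follows; `predict`, the normal error model, and all parameter values are illustrative assumptions, not the published extraction-process model:

```python
import random

def prob_meeting_standard(predict, point, standard, sd=0.05,
                          n_sim=10000, seed=0):
    """Probability that a process index meets `standard` at an
    operating `point`, simulating normally distributed experimental
    error (standard deviation `sd`) around the model prediction."""
    rng = random.Random(seed)
    mean = predict(point)
    hits = sum(1 for _ in range(n_sim)
               if rng.gauss(mean, sd) >= standard)
    return hits / n_sim

# The design space is then the set of operating points whose
# probability exceeds the acceptable threshold, e.g. 0.9.
```

Evaluating this probability on a grid of operating points (the "step length" above) and thresholding it yields a design space with a smooth probability profile at its edge.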

  17. Characterizing wind power resource reliability in southern Africa

    DOE PAGES

    Fant, Charles; Gunturu, Bhaskar; Schlosser, Adam

    2015-08-29

    Producing electricity from wind is attractive because it provides a clean, low-maintenance power supply. However, wind resource is intermittent on various timescales, thus occasionally introducing large and sudden changes in power supply. A better understanding of this variability can greatly benefit power grid planning. In the following study, wind resource is characterized using metrics that highlight these intermittency issues, thereby identifying areas of high and low wind power reliability in southern Africa and Kenya at different time-scales. After developing a wind speed profile, these metrics are applied at various heights in order to assess the added benefit of raising the wind turbine hub. Furthermore, since the interconnection of wind farms can aid in reducing the overall intermittency, the value of interconnecting near-by sites is mapped using two distinct methods. Of the countries in this region, the Republic of South Africa has shown the most interest in wind power investment. For this reason, we focus parts of the study on wind reliability in the country. The study finds that, although mean Wind Power Density is high in South Africa compared to its neighboring countries, wind power resource tends to be less reliable than in other parts of southern Africa, namely central Tanzania. We also find that South Africa's potential varies over different timescales, with higher reliability in the summer than winter, and higher reliability during the day than at night. This study is concluded by introducing two methods and measures to characterize the value of interconnection, including the use of principal component analysis to identify areas with a common signal.

  18. Perfusion dynamics assessment with Power Doppler ultrasound in skeletal muscle during maximal and submaximal cycling exercise.

    PubMed

    Heres, H M; Schoots, T; Tchang, B C Y; Rutten, M C M; Kemps, H M C; van de Vosse, F N; Lopata, R G P

    2018-06-01

    Assessment of limitations in the perfusion dynamics of skeletal muscle may provide insight into the pathophysiology of exercise intolerance in, e.g., heart failure patients. Power Doppler ultrasound (PDUS) has been recognized as a sensitive tool for the detection of muscle blood flow. In this volunteer study (N = 30), a method is demonstrated for perfusion measurements in the vastus lateralis muscle with PDUS during standardized cycling exercise protocols, and its test-retest reliability is investigated. Fixation of the ultrasound probe on the upper leg allowed for continuous PDUS measurements. Cycling exercise protocols included a submaximal exercise and an incremental exercise to maximal power. The relative perfused area (RPA) was determined as a measure of perfusion. Absolute and relative reliability of RPA amplitude and kinetic parameters during exercise (onset, slope, maximum value) and recovery (overshoot, decay time constants) were investigated. An RPA increase during exercise followed by a signal recovery was measured in all volunteers. Amplitudes and kinetic parameters during exercise and recovery showed poor to good relative reliability (ICC ranging from 0.2-0.8) and poor to moderate absolute reliability (coefficient of variation (CV) range 18-60%). A method has been demonstrated which allows for continuous Power Doppler ultrasonography and assessment of perfusion dynamics in skeletal muscle during exercise. The reliability of the RPA amplitudes and kinetics ranges from poor to good, while the reliability of the RPA increase in submaximal cycling (ICC = 0.8, CV = 18%) is promising for non-invasive clinical assessment of the muscle perfusion response to daily exercise.

  19. Characterizing wind power resource reliability in southern Africa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fant, Charles; Gunturu, Bhaskar; Schlosser, Adam

    Producing electricity from wind is attractive because it provides a clean, low-maintenance power supply. However, wind resource is intermittent on various timescales, thus occasionally introducing large and sudden changes in power supply. A better understanding of this variability can greatly benefit power grid planning. In the following study, wind resource is characterized using metrics that highlight these intermittency issues, thereby identifying areas of high and low wind power reliability in southern Africa and Kenya at different time-scales. After developing a wind speed profile, these metrics are applied at various heights in order to assess the added benefit of raising the wind turbine hub. Furthermore, since the interconnection of wind farms can aid in reducing the overall intermittency, the value of interconnecting near-by sites is mapped using two distinct methods. Of the countries in this region, the Republic of South Africa has shown the most interest in wind power investment. For this reason, we focus parts of the study on wind reliability in the country. The study finds that, although mean Wind Power Density is high in South Africa compared to its neighboring countries, wind power resource tends to be less reliable than in other parts of southern Africa, namely central Tanzania. We also find that South Africa's potential varies over different timescales, with higher reliability in the summer than winter, and higher reliability during the day than at night. This study is concluded by introducing two methods and measures to characterize the value of interconnection, including the use of principal component analysis to identify areas with a common signal.

  20. Validity and reliability of a scale to measure genital body image.

    PubMed

    Zielinski, Ruth E; Kane-Low, Lisa; Miller, Janis M; Sampselle, Carolyn

    2012-01-01

    Women's body image dissatisfaction extends to body parts usually hidden from view: their genitals. The ability to measure genital body image is limited by a lack of valid and reliable questionnaires. We subjected a previously developed questionnaire, the Genital Self Image Scale (GSIS), to psychometric testing using a variety of methods. Five experts determined the content validity of the scale. Then, using four participant groups, factor analysis was performed to determine construct validity and to identify factors. Further construct validity was established using the contrasting groups approach. Internal consistency and test-retest reliability were determined. Twenty-one of 29 items were considered content valid. Two items were added based on expert suggestions. Factor analysis resulted in four factors, identified as Genital Confidence, Appeal, Function, and Comfort. The revised scale (GSIS-20) included 20 items explaining 59.4% of the variance. Women indicating an interest in genital cosmetic surgery exhibited significantly lower scores on the GSIS-20 than those who did not. The final 20-item scale exhibited internal reliability across all sample groups as well as test-retest reliability. The GSIS-20 provides a measure of genital body image demonstrating reliability and validity across several populations of women.

  1. Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)

    NASA Technical Reports Server (NTRS)

    Mott, Diana L.; Bigler, Mark A.

    2017-01-01

    NASA uses two HRA assessment methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration of environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value, or placeholder, as a preliminary estimate; the preliminary estimate is then used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment must consider more than the time available, including factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. In order to determine what is expected of future operational parameters, the experience of individuals who were familiar with the systems and processes previously implemented by NASA was used to provide the best available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and finalized.

  2. SAS and SPSS macros to calculate standardized Cronbach's alpha using the upper bound of the phi coefficient for dichotomous items.

    PubMed

    Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy

    2007-02-01

    Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of the coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.

  3. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    NASA Astrophysics Data System (ADS)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-06-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than that of fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about a 3-fold improvement over FFT at a moderate spatial resolution.
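The idea of AR spectral estimation, fitting a low-order all-pole model and reading the dominant frequency from the pole angle, can be sketched for an AR(2) model fitted via the Yule-Walker equations; this is a generic illustration, not the paper's implementation:

```python
import math

def ar2_frequency(x, fs=1.0):
    """Estimate the dominant frequency of a signal by fitting an AR(2)
    model x[t] = a1*x[t-1] + a2*x[t-2] + e via the Yule-Walker
    equations and taking the pole angle."""
    n = len(x)
    def r(k):  # biased autocorrelation estimate
        return sum(x[i] * x[i + k] for i in range(n - k)) / n
    r0, r1, r2 = r(0), r(1), r(2)
    # Solve [[r0, r1], [r1, r0]] @ [a1, a2] = [r1, r2]
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    # Complex poles of z^2 - a1*z - a2 have |z|^2 = -a2, Re z = a1/2,
    # so cos(omega) = a1 / (2*sqrt(-a2)).
    cos_w = a1 / (2.0 * math.sqrt(-a2))
    return math.acos(max(-1.0, min(1.0, cos_w))) * fs / (2.0 * math.pi)
```

Because the model places a pole at the spectral peak, the frequency estimate is not tied to the FFT bin spacing, which is the general rationale for the accuracy gain reported above.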

  4. High Throughput Determination of Tetramine in Drinking ...

    EPA Pesticide Factsheets

    The sampling and analytical procedure (SAP) presented herein describes a method for the high-throughput determination of tetramethylene disulfotetramine in drinking water by solid-phase extraction and isotope dilution gas chromatography/mass spectrometry. This method, which will be included in the SAM, is expected to provide the Water Laboratory Alliance, part of EPA's Environmental Response Laboratory Network, with a more reliable and faster means of analyte collection and measurement.

  5. New Trends in Two-Way Time and Frequency Transfer via Satellite

    DTIC Science & Technology

    1999-12-01

    Recent developments performed with SATRE two-way time transfer (TWSTFT) modems resulted in significant performance upgrades and operational... improvements of the TWSTFT method. These are aimed at reducing manpower effort and at providing reliable, real-time data via a centralized monitoring and... collection have been used throughout the experiment. Two-Way Time and Frequency Transfer via Satellite (TWSTFT) is a well-established method to...

  6. Methods of staining target chromosomal DNA employing high complexity nucleic acid probes

    DOEpatents

    Gray, Joe W.; Pinkel, Daniel; Kallioniemi, Ol'li-Pekka; Kallioniemi, Anne; Sakamoto, Masaru

    2006-10-03

    Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods and reagents are provided for the detection of genetic rearrangements. Probes and test kits are provided for use in detecting genetic rearrangements, particularly for use in tumor cytogenetics, in the detection of disease related loci, specifically cancer, such as chronic myelogenous leukemia (CML), retinoblastoma, ovarian and uterine cancers, and for biological dosimetry. Methods and reagents are described for cytogenetic research, for the differentiation of cytogenetically similar but genetically different diseases, and for many prognostic and diagnostic applications.

  7. The Joint Confidence Level Paradox: A History of Denial

    NASA Technical Reports Server (NTRS)

    Butts, Glenn; Linton, Kent

    2009-01-01

    This paper is intended to provide a reliable methodology for those tasked with generating price tags on construction of facilities (CoF) and research and development (R&D) activities in the NASA performance world. This document consists of a collection of cost-related engineering detail and project fulfillment information from early agency days to the present. Accurate historical detail is the first place to start when determining improved methodologies for future cost and schedule estimating. This paper contains a beneficial proposed cost estimating method for arriving at more reliable numbers for future submissions. When comparing current cost and schedule methods with earlier cost and schedule approaches, it became apparent that NASA's organizational performance paradigm has shifted: mission fulfillment has slowed and cost calculating factors have increased in 21st century space exploration.

  8. The application of the statistical theory of extreme values to gust-load problems

    NASA Technical Reports Server (NTRS)

    Press, Harry

    1950-01-01

    An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problem of predicting the frequency of encountering the larger gust loads and gust velocities, for specific test conditions as well as for commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given, along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
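The analytic form referred to above is, for block maxima, typically the Gumbel distribution F(x) = exp(−exp(−(x−μ)/β)). A method-of-moments fit and the resulting exceedance probability can be sketched as follows; the function names and sample are illustrative, not the NACA load data:

```python
import math
from statistics import mean, stdev

def fit_gumbel(maxima):
    """Method-of-moments fit of a Gumbel distribution to a sample of
    per-flight maximum loads: beta = s*sqrt(6)/pi, mu = mean - 0.5772*beta
    (0.5772 is the Euler-Mascheroni constant)."""
    beta = stdev(maxima) * math.sqrt(6) / math.pi
    mu = mean(maxima) - 0.5772 * beta
    return mu, beta

def exceedance_prob(x, mu, beta):
    """P(max load > x) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))
```

Extrapolating `exceedance_prob` beyond the observed maxima is what lets the theory predict the frequency of the rare, large gust loads.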

  9. A single-laboratory validated method for the generation of DNA barcodes for the identification of fish for regulatory compliance.

    PubMed

    Handy, Sara M; Deeds, Jonathan R; Ivanova, Natalia V; Hebert, Paul D N; Hanner, Robert H; Ormos, Andrea; Weigt, Lee A; Moore, Michelle M; Yancy, Haile F

    2011-01-01

    The U.S. Food and Drug Administration is responsible for ensuring that the nation's food supply is safe and accurately labeled. This task is particularly challenging in the case of seafood where a large variety of species are marketed, most of this commodity is imported, and processed product is difficult to identify using traditional morphological methods. Reliable species identification is critical for both foodborne illness investigations and for prevention of deceptive practices, such as those where species are intentionally mislabeled to circumvent import restrictions or for resale as species of higher value. New methods that allow accurate and rapid species identifications are needed, but any new methods to be used for regulatory compliance must be both standardized and adequately validated. "DNA barcoding" is a process by which species discriminations are achieved through the use of short, standardized gene fragments. For animals, a fragment (655 base pairs starting near the 5' end) of the cytochrome c oxidase subunit 1 mitochondrial gene has been shown to provide reliable species level discrimination in most cases. We provide here a protocol with single-laboratory validation for the generation of DNA barcodes suitable for the identification of seafood products, specifically fish, in a manner that is suitable for FDA regulatory use.

  10. Effectiveness of Visual Methods in Information Procedures for Stem Cell Recipients and Donors

    PubMed Central

    Sarıtürk, Çağla; Gereklioğlu, Çiğdem; Korur, Aslı; Asma, Süheyl; Yeral, Mahmut; Solmaz, Soner; Büyükkurt, Nurhilal; Tepebaşı, Songül; Kozanoğlu, İlknur; Boğa, Can; Özdoğu, Hakan

    2017-01-01

    Objective: Obtaining informed consent from hematopoietic stem cell recipients and donors is a critical step in the transplantation process. Anxiety may affect their understanding of the provided information. However, use of audiovisual methods may facilitate understanding. In this prospective randomized study, we investigated the effectiveness of using an audiovisual method of providing information to patients and donors in combination with the standard model. Materials and Methods: A 10-min informational animation was prepared for this purpose. In total, 82 participants were randomly assigned to two groups: group 1 received the additional audiovisual information and group 2 received standard information. A 20-item questionnaire was administered to participants at the end of the informational session. Results: A reliability test and factor analysis showed that the questionnaire was reliable and valid. For all participants, the mean overall satisfaction score was 184.8±19.8 (maximum possible score of 200). However, for satisfaction with information about written informed consent, group 1 scored significantly higher than group 2 (p=0.039). Satisfaction level was not affected by age, education level, or differences between the physicians conducting the informative session. Conclusion: This study shows that using audiovisual tools may contribute to a better understanding of the informed consent procedure and potential risks of stem cell transplantation. PMID:27476890

  11. A low-cost efficient multiplex PCR for prenatal sex determination in bovine fetus using free fetal DNA in maternal plasma.

    PubMed

    Davoudi, Arash; Seighalani, Ramin; Aleyasin, Seyed Ahmad; Tarang, Alireza; Salehi, Abdolreza Salehi; Tahmoressi, Farideh

    2012-04-01

    In order to establish a reliable non-invasive method for sex determination of a bovine fetus in a routine setting, the possibility of identifying specific sequences on the fetal X and Y chromosomes in maternal plasma was evaluated using conventional multiplex polymerase chain reaction (PCR) analysis. The aim of this study was to provide a rapid and reliable method for sexing bovine fetuses. In this experimental study, peripheral blood samples were taken from 38 pregnant heifers at 8 to 38 weeks of gestation. Template DNA was extracted by the phenol-chloroform method from 350 µl of maternal plasma. Two primer pairs, for the bovine amelogenin gene (bAML) and BC1.2, were used to amplify fragments from the X and Y chromosomes. A multiplex PCR reaction was optimized for amplification of 467 bp and 341 bp fragments from X and Y bAML and a 190 bp fragment from BC1.2, specific to the Y chromosome. The 467 bp fragment was observed in all 38 samples. Both the 341 and 190 bp fragments were detected only in the 24 plasma samples from male calves. The sensitivity and specificity of the test were 100%, with no false negative or false positive results. The results showed that the phenol-chloroform method is a simple and suitable method for isolation of fetal DNA from maternal plasma, and that the multiplex PCR method is a cost-efficient, reliable and non-invasive approach for sexing bovine fetuses.

  12. Neurology objective structured clinical examination reliability using generalizability theory

    PubMed Central

    Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.

    2015-01-01

    Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851
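The decision-study projection described above can be sketched with the standard dependability formula, Phi(n) = s2_p / (s2_p + s2_e / n), where s2_p is the person (true-score) variance and s2_e is the residual error variance per case. The variance components below are invented for illustration; the abstract reports only the resulting coefficients.

```python
# Illustrative decision-study projection in the spirit of G theory.
# s2_p: person variance; s2_e: residual error variance per case.

def phi(s2_p, s2_e, n_cases):
    """Dependability coefficient for a score averaged over n_cases."""
    return s2_p / (s2_p + s2_e / n_cases)

def cases_needed(s2_p, s2_e, target=0.70):
    """Smallest number of cases pushing dependability past the target."""
    n = 1
    while phi(s2_p, s2_e, n) < target:
        n += 1
    return n

# With these invented components, 3 cases exceed the 0.70 threshold,
# matching the pattern reported in the abstract.
print(cases_needed(0.4, 0.5))
```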

  13. Adaptation of the ToxRTool to Assess the Reliability of Toxicology Studies Conducted with Genetically Modified Crops and Implications for Future Safety Testing.

    PubMed

    Koch, Michael S; DeSesso, John M; Williams, Amy Lavin; Michalek, Suzanne; Hammond, Bruce

    2016-01-01

    To determine the reliability of food safety studies carried out in rodents with genetically modified (GM) crops, a Food Safety Study Reliability Tool (FSSRTool) was adapted from the European Centre for the Validation of Alternative Methods' (ECVAM) ToxRTool. Reliability was defined as the inherent quality of the study with regard to use of standardized testing methodology, full documentation of experimental procedures and results, and the plausibility of the findings. Codex guidelines for GM crop safety evaluations indicate toxicology studies are not needed when comparability of the GM crop to its conventional counterpart has been demonstrated. This guidance notwithstanding, animal feeding studies have routinely been conducted with GM crops, but their conclusions on safety are not always consistent. To accurately evaluate potential risks from GM crops, risk assessors need clearly interpretable results from reliable studies. The development of the FSSRTool, which provides the user with a means of assessing the reliability of a toxicology study to inform risk assessment, is discussed. Its application to the body of literature on GM crop food safety studies demonstrates that reliable studies report no toxicologically relevant differences between rodents fed GM crops or their non-GM comparators.

  14. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing reliance on human intervention and on the availability of power, the phenomenological foundations on which they are built require a novel approach to reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on the existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties, and analysis of results.
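The first, static technique rests on conventional event-tree quantification: the probability of an accident sequence is the product of its branch probabilities. A minimal sketch, with invented top events and failure probabilities (the report's actual event trees and values are not reproduced here):

```python
# Minimal static event-tree quantification sketch. Top events and
# probabilities are invented for illustration.

# Each top event: probability that the branch FAILS.
top_events = {
    "rccs_natural_circulation": 1e-3,
    "primary_heat_removal":     5e-3,
}

def sequence_probability(failed):
    """Probability of the sequence in which the events in `failed` fail
    and all other top events succeed (branches assumed independent)."""
    p = 1.0
    for name, p_fail in top_events.items():
        p *= p_fail if name in failed else (1.0 - p_fail)
    return p

# End state in which both heat-removal paths fail.
print(sequence_probability({"rccs_natural_circulation", "primary_heat_removal"}))
```

A dynamic event-tree treatment would replace the fixed branch probabilities with branchings taken at discrete times during a simulated transient, which is what distinguishes the second approach above.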

  15. Safety, reliability, maintainability and quality provisions for the Space Shuttle program

    NASA Technical Reports Server (NTRS)

    1990-01-01

This publication establishes common safety, reliability, maintainability and quality provisions for the Space Shuttle Program. NASA Centers shall use this publication both as the basis for negotiating safety, reliability, maintainability and quality requirements with Shuttle Program contractors and as the guideline for conduct of program safety, reliability, maintainability and quality activities at the Centers. Centers shall assure that applicable provisions of the publication are imposed in lower tier contracts. Centers shall give due regard to other Space Shuttle Program planning in order to provide an integrated total Space Shuttle Program activity. In the implementation of safety, reliability, maintainability and quality activities, consideration shall be given to hardware complexity, supplier experience, state of hardware development, unit cost, and hardware use. The approach and methods for contractor implementation shall be described in the contractors' safety, reliability, maintainability and quality plans. This publication incorporates provisions of NASA documents: NHB 1700.1, 'NASA Safety Manual, Vol. 1'; NHB 5300.4(1A), 'Reliability Program Provisions for Aeronautical and Space System Contractors'; and NHB 5300.4(1B), 'Quality Program Provisions for Aeronautical and Space System Contractors'. It has been tailored from the above documents based on experience in other programs. It is intended that this publication be reviewed and revised, as appropriate, to reflect new experience and to assure continuing viability.

  16. Chromosome-specific staining to detect genetic rearrangements

    DOEpatents

    Gray, Joe W.; Pinkel, Daniel; Tkachuk, Douglas; Westbrook, Carol

    2013-04-09

Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods and reagents are provided for the detection of genetic rearrangements. Probes and test kits are provided for use in detecting genetic rearrangements, particularly for use in tumor cytogenetics, in the detection of disease related loci, specifically cancer, such as chronic myelogenous leukemia (CML) and for biological dosimetry. Methods and reagents are described for cytogenetic research, for the differentiation of cytogenetically similar but genetically different diseases, and for many prognostic and diagnostic applications.

  17. Reliable Characterization for Pyrolysis Bio-Oils Leads to Enhanced Upgrading Methods

    Science.gov Websites

    NREL Science and Technology Highlights (Highlights in Research & Development). Key Research Results: Achievement. As co

  18. Can simple mobile phone applications provide reliable counts of respiratory rates in sick infants and children? An initial evaluation of three new applications.

    PubMed

    Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick

    2015-05-01

    Respiratory rate is an important sign that is commonly either not recorded or recorded incorrectly. Mobile phone ownership is increasing even in resource-poor settings. Phone applications may improve the accuracy and ease of counting of respiratory rates. The study assessed the reliability and initial users' impressions of four mobile phone respiratory timer approaches, compared to a 60-second count by the same participants. Three mobile applications (applying four different counting approaches plus a standard 60-second count) were created using the Java Mobile Edition and tested on Nokia C1-01 phones. Apart from the 60-second timer application, the others included a counter based on the time for ten breaths, and three based on the time interval between breaths ('Once-per-Breath', in which the user presses for each breath and the application calculates the rate after 10 or 20 breaths, or after 60s). Nursing and physiotherapy students used the applications to count respiratory rates in a set of brief video recordings of children with different respiratory illnesses. Limits of agreement (compared to the same participant's standard 60-second count), intra-class correlation coefficients and standard errors of measurement were calculated to compare the reliability of the four approaches, and a usability questionnaire was completed by the participants. There was considerable variation in the counts, with large components of the variation related to the participants and the videos, as well as the methods. None of the methods was entirely reliable, with no limits of agreement better than -10 to +9 breaths/min. Some of the methods were superior to the others, with ICCs from 0.24 to 0.92. By ICC the Once-per-Breath 60-second count and the Once-per-Breath 20-breath count were the most consistent, better even than the 60-second count by the participants. The 10-breath approaches performed least well. 
Users' initial impressions were positive, with little difference found between the applications. This study provides evidence that applications running on simple phones can be used to count respiratory rates in children. The Once-per-Breath methods were the most reliable, outperforming the 60-second count. For children with raised respiratory rates the 20-breath version of the Once-per-Breath method is faster, so it is a more suitable option where health workers are under time pressure. Copyright © 2015 Elsevier Ltd. All rights reserved.
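The 'Once-per-Breath' calculation can be sketched as follows. This is a guess at the arithmetic from the description above (one tap per breath, rate derived from the elapsed span), not code from the study's applications; the assumption that the first tap starts the count is ours.

```python
# Sketch of a 'Once-per-Breath' rate calculation (assumed arithmetic).

def once_per_breath_rate(tap_times_s):
    """Breaths/min from per-breath tap timestamps (seconds). Assumes the
    first tap starts the count, so len(taps)-1 breaths occur over the span."""
    if len(tap_times_s) < 2:
        raise ValueError("need at least two taps")
    elapsed = tap_times_s[-1] - tap_times_s[0]
    return 60.0 * (len(tap_times_s) - 1) / elapsed

# 21 taps at a steady 1.5 s interval: 20 breaths over 30 s -> 40 breaths/min
taps = [1.5 * i for i in range(21)]
print(round(once_per_breath_rate(taps), 1))  # 40.0
```

Averaging over 20 inter-breath intervals rather than counting within a fixed window is plausibly why this approach outperformed the 60-second count in the study.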

  19. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
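The limited-memory BFGS algorithm favoured above can be sketched via its two-loop recursion, here applied to a tiny two-parameter quadratic "misfit" with a backtracking line search. The test problem is invented; a real waveform inversion applies the same recursion to models with millions of parameters.

```python
# Minimal L-BFGS sketch (pure Python): two-loop recursion plus a
# backtracking (Armijo) line search, on an invented 2-D quadratic.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lbfgs_direction(g, history):
    """Two-loop recursion: approximate -H^{-1} g from stored (s, y) pairs."""
    q = list(g)
    alphas = []
    for s, y in reversed(history):
        a = dot(s, q) / dot(y, s)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    if history:
        s, y = history[-1]
        gamma = dot(s, y) / dot(y, y)   # standard initial scaling
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for (s, y), a in zip(history, reversed(alphas)):
        b = dot(y, r) / dot(y, s)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]

# Toy misfit: f(x) = 0.5*x0^2 + 5*x1^2 - x0 - x1, minimum at (1, 0.1).
f = lambda x: 0.5 * x[0] ** 2 + 5 * x[1] ** 2 - x[0] - x[1]
grad = lambda x: [x[0] - 1.0, 10.0 * x[1] - 1.0]

x, history = [0.0, 0.0], []
for _ in range(100):
    g = grad(x)
    if dot(g, g) < 1e-16:
        break
    d = lbfgs_direction(g, history)
    t = 1.0                            # backtracking line search
    while f([xi + t * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * t * dot(g, d):
        t *= 0.5
    x_new = [xi + t * di for xi, di in zip(x, d)]
    s = [b_ - a_ for a_, b_ in zip(x, x_new)]
    y = [b_ - a_ for a_, b_ in zip(g, grad(x_new))]
    if dot(s, y) > 1e-12:              # keep only pairs with positive curvature
        history = (history + [(s, y)])[-5:]
    x = x_new

print([round(v, 4) for v in x])  # near [1.0, 0.1]
```

The limited memory (here 5 pairs) is what makes the method affordable at waveform-inversion scale, and the line-search details are exactly the kind of implementation choice the abstract flags as affecting computational cost.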

  20. Structured assessment of microsurgery skills in the clinical setting.

    PubMed

    Chan, WoanYi; Niranjan, Niri; Ramakrishnan, Venkat

    2010-08-01

    Microsurgery is an essential component in plastic surgery training. Competence has become an important issue in current surgical practice and training. The complexity of microsurgery requires detailed assessment and feedback on skills components. This article proposes a method of Structured Assessment of Microsurgery Skills (SAMS) in a clinical setting. Three types of assessment (i.e., modified Global Rating Score, errors list and summative rating) were incorporated to develop the SAMS method. Clinical anastomoses were recorded on videos using a digital microscope system and were rated by three consultants independently and in a blinded fashion. Fifteen clinical cases of microvascular anastomoses performed by trainees and a consultant microsurgeon were assessed using SAMS. The consultant had consistently the highest scores. Construct validity was also demonstrated by improvement of SAMS scores of microsurgery trainees. The overall inter-rater reliability was strong (alpha=0.78). The SAMS method provides both formative and summative assessment of microsurgery skills. It is demonstrated to be a valid, reliable and feasible assessment tool of operating room performance to provide systematic and comprehensive feedback as part of the learning cycle. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. An Application of the Rasch Model to Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Wisniewski, Dennis R.

    Three questions concerning the Binary Search Method (BSM) of computerized adaptive testing were studied: (1) whether it provided a reliable and valid estimation of examinee ability; (2) its effect on examinee attitudes toward computerized adaptive testing and conventional paper-and-pencil testing; and (3) the relationship between item response…

  2. Factors Influencing Consonant Acquisition in Brazilian Portuguese-Speaking Children

    ERIC Educational Resources Information Center

    Ceron, Marizete Ilha; Gubiani, Marileda Barichello; de Oliveira, Camila Rosa; Keske-Soares, Márcia

    2017-01-01

    Purpose: We sought to provide valid and reliable data on the acquisition of consonant sounds in speakers of Brazilian Portuguese. Method: The sample comprised 733 typically developing monolingual speakers of Brazilian Portuguese (ages 3;0-8;11 [years;months]). The presence of surface speech error patterns, the revised percentage consonants…

  3. A Performance-Based Method of Student Evaluation

    ERIC Educational Resources Information Center

    Nelson, G. E.; And Others

    1976-01-01

    The Problem Oriented Medical Record (which allows practical definition of the behavioral terms thoroughness, reliability, sound analytical sense, and efficiency as they apply to the identification and management of patient problems) provides a vehicle to use in performance based type evaluation. A test-run use of the record is reported. (JT)

  4. Is School Funding Fair? A National Report Card

    ERIC Educational Resources Information Center

    Baker, Bruce D.; Sciarra, David G.; Farrie, Danielle

    2010-01-01

    Building a more accurate, reliable and consistent method of analyzing how states fund public education starts with a critical question: What is fair school funding? In this report, "fair" school funding is defined as a state finance system that ensures equal educational opportunity by providing a sufficient level of funding distributed…

  5. Current limitations and a path forward to improve testing for the environmental assessment of endocrine active substances-presentation

    EPA Science Inventory

    To assess the hazards and risks of possible endocrine active chemicals (EACs), there is a need for robust, validated test methods that detect perturbations of endocrine pathways and provide reliable information for evaluating potential adverse effects on apical endpoints. One iss...

  6. NDI (Nondestructive Inspection) Oriented Corrosion Control for Army Aircraft. Phase 1. Inspection Methods

    DTIC Science & Technology

    1989-07-01

Appendices A and B are provided as cover sheets from each item rather than complete packages. The Pamphlet Series materials were furnished as camera-ready... "Stational Neutron Radiography System for Aircraft Reliability and Maintainability." G. A. Technologies Brochure, Triga Reactor Division, San Diego

  7. Three dimensional reliability analyses of currently used methods for assessment of sagittal jaw discrepancy

    PubMed Central

    Almaqrami, Bushra-Sufyan; Alhammadi, Maged-Sultan

    2018-01-01

Background: The objective of this study was to analyse, three-dimensionally, the reliability and correlation of angular and linear measurements in the assessment of anteroposterior skeletal discrepancy. Material and Methods: In this retrospective cross-sectional study, a sample of 213 subjects was analysed three-dimensionally from cone-beam computed tomography scans. The sample was divided, according to the three-dimensional measurement of the anteroposterior relation (ANB angle), into three groups (skeletal Class I, Class II and Class III). The anteroposterior cephalometric indicators were measured on volumetric images using Anatomage software (InVivo5.2). These measurements included three angular and seven linear measurements. Cross tabulations were performed to correlate the ANB angle with each method. The Intra-class Correlation Coefficient (ICC) test was applied to the difference between the two reliability measurements. A p value of <0.05 was considered significant. Results: There was a statistically significant (p<0.05) agreement between all methods used, with variability in the assessment of different anteroposterior relations. The highest correlation was between ANB and DSOJ (0.913); correlation was strong with AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal (0.896, 0.890, 0.878, 0.867, and 0.858, respectively), moderate with AD/SN and Beta angle (0.787 and 0.760), and weak with the corrected ANB angle (0.550). Conclusions: Conjunctive use of the ANB angle with DSOJ, AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal in 3D cephalometric analysis provides a more reliable and valid indicator of the skeletal anteroposterior relationship. 
Clinical relevance: Most of the orthodontic literature depends on a single method (the ANB angle), despite its drawbacks, to assess skeletal discrepancy, which is a cardinal factor in proper treatment planning. This study assessed three-dimensionally the degree of correlation between all available methods, to make clinical judgement more accurate by basing it on more than one method of assessment. Key words: Anteroposterior relationships, ANB angle, Three-dimension, CBCT. PMID:29750096

  8. An Unconditionally Stable, Positivity-Preserving Splitting Scheme for Nonlinear Black-Scholes Equation with Transaction Costs

    PubMed Central

    Guo, Jianqiang; Wang, Wansheng

    2014-01-01

    This paper deals with the numerical analysis of nonlinear Black-Scholes equation with transaction costs. An unconditionally stable and monotone splitting method, ensuring positive numerical solution and avoiding unstable oscillations, is proposed. This numerical method is based on the LOD-Backward Euler method which allows us to solve the discrete equation explicitly. The numerical results for vanilla call option and for European butterfly spread are provided. It turns out that the proposed scheme is efficient and reliable. PMID:24895653
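The stability-plus-positivity property at the heart of the scheme can be illustrated on a simpler model problem. The sketch below is not the paper's LOD-Backward Euler method for the nonlinear Black-Scholes equation; it applies a plain backward Euler step to the heat equation with a tridiagonal (Thomas) solve, which exhibits the same unconditional stability and nonnegativity for time steps far beyond the explicit limit.

```python
# Backward Euler for u_t = u_xx with zero Dirichlet boundaries.
# The implicit matrix is an M-matrix, so the step is unconditionally
# stable and maps nonnegative data to nonnegative data.

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a=sub-, b=main, c=super-diagonal, d=rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n, dx, dt = 49, 1.0 / 50, 0.1      # dt >> dx^2/2: explicit Euler would blow up
r = dt / dx ** 2
u = [max(0.0, 0.5 - abs((i + 1) * dx - 0.5)) for i in range(n)]  # hat initial data
for _ in range(20):
    u = thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, u)

print(min(u) >= 0.0, max(u) < 0.5)  # stays nonnegative and decays
```

The paper's contribution is to obtain the same properties for the nonlinear transaction-cost model by splitting it into sub-steps that can each be solved explicitly.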

  9. An unconditionally stable, positivity-preserving splitting scheme for nonlinear Black-Scholes equation with transaction costs.

    PubMed

    Guo, Jianqiang; Wang, Wansheng

    2014-01-01

    This paper deals with the numerical analysis of nonlinear Black-Scholes equation with transaction costs. An unconditionally stable and monotone splitting method, ensuring positive numerical solution and avoiding unstable oscillations, is proposed. This numerical method is based on the LOD-Backward Euler method which allows us to solve the discrete equation explicitly. The numerical results for vanilla call option and for European butterfly spread are provided. It turns out that the proposed scheme is efficient and reliable.

  10. Development and Validation of the User Version of the Mobile Application Rating Scale (uMARS).

    PubMed

    Stoyanov, Stoyan R; Hides, Leanne; Kavanagh, David J; Wilson, Hollie

    2016-06-10

The Mobile Application Rating Scale (MARS) provides a reliable method to assess the quality of mobile health (mHealth) apps. However, training and expertise in mHealth and the relevant health field are required to administer it. This study describes the development and reliability testing of an end-user version of the MARS (uMARS). The MARS was simplified and piloted with 13 young people to create the uMARS. The internal consistency and test-retest reliability of the uMARS were then examined in a second sample of 164 young people participating in a randomized controlled trial of an mHealth app. App ratings were collected using the uMARS at 1-, 3-, and 6-month follow-up. The uMARS had excellent internal consistency (alpha = .90), with high individual alphas for all subscales. The total score and subscales had good test-retest reliability over both 1-2 months and 3 months. The uMARS is a simple tool that can be used reliably by end-users to assess the quality of mHealth apps.
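Internal consistency of the kind reported (alpha = .90) is conventionally computed as Cronbach's alpha. A minimal sketch with invented ratings (the uMARS data themselves are not reproduced here):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Ratings below are invented for illustration.

def cronbach_alpha(items):
    """items: list of item-score lists, one inner list per item,
    aligned by respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three items rated by five respondents (invented ratings).
ratings = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(ratings), 2))
```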

  11. Reliability of assessment of upper trapezius morphology, its mechanical properties and blood flow in female patients with myofascial pain syndrome using ultrasonography.

    PubMed

    Adigozali, Hakimeh; Shadmehr, Azadeh; Ebrahimi, Esmail; Rezasoltani, Asghar; Naderi, Farrokh

    2017-01-01

In the present study, the intra-rater reliability of assessing upper trapezius morphology, mechanical properties and intramuscular blood circulation in females with myofascial pain syndrome was evaluated using ultrasonography. A total of 37 patients (31.05 ± 10 years old) participated in this study. The ultrasonography procedure was carried out in three stages: a) gray-scale imaging, to measure muscle thickness and the size and area of trigger points; b) ultrasound elastography, to measure muscle stiffness; and c) Doppler imaging, to assess blood flow indices. According to the data analysis, all variables except End Diastolic Velocity (EDV) had excellent reliability (>0.806). The Intra-class Correlation Coefficient (ICC) for EDV was 0.738, which was considered poor to good reliability. The results of this study establish a reliable ultrasonographic method for characterizing upper trapezius features in female patients. These variables could be used for objective examination and provide guidelines for treatment plans in clinical settings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology for obtaining dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
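For a normally distributed outcome, the distributional idea is direct: the proportion below a clinical cutpoint follows from the mean and SD, so a comparison of proportions can be derived from the comparison of means. The means, SD and cutpoint below are invented for illustration.

```python
# Proportion below a cutpoint for a normal outcome, via the error function.

import math

def prop_below(mu, sigma, cutpoint):
    """P(X < cutpoint) for X ~ Normal(mu, sigma^2)."""
    z = (cutpoint - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Invented example: birthweight ~ N(3300 g, SD 500 g); low birthweight < 2500 g.
p_control = prop_below(3300, 500, 2500)
p_exposed = prop_below(3150, 500, 2500)   # mean shifted down 150 g
print(round(p_control, 3), round(p_exposed, 3), round(p_exposed - p_control, 3))
```

The skew-normal version described in the abstract replaces this normal CDF with the skew-normal one; the logic of converting a shift in means into a difference in proportions is the same.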

  13. Validity evidence and reliability of a simulated patient feedback instrument

    PubMed Central

    2012-01-01

    Background In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Methods Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. Results All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. Conclusions The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients. PMID:22284898

  14. Compact high reliability fiber coupled laser diodes for avionics and related applications

    NASA Astrophysics Data System (ADS)

    Daniel, David R.; Richards, Gordon S.; Janssen, Adrian P.; Turley, Stephen E. H.; Stockton, Thomas E.

    1993-04-01

    This paper describes a newly developed compact high reliability fiber coupled laser diode which is capable of providing enhanced performance under extreme environmental conditions including a very wide operating temperature range. Careful choice of package materials to minimize thermal and mechanical stress, used with proven manufacturing methods, has resulted in highly stable coupling of the optical fiber pigtail to a high performance MOCVD-grown Multi-Quantum Well laser chip. Electro-optical characteristics over temperature are described together with a demonstration of device stability over a range of environmental conditions. Real time device lifetime data is also presented.

  15. Improvement of automatic control system for high-speed current collectors

    NASA Astrophysics Data System (ADS)

    Sidorov, O. A.; Goryunov, V. N.; Golubkov, A. S.

    2018-01-01

The article considers ways of regulating pantographs to provide quality and reliability of current collection at high speeds. To assess the impact of regulation, an integral criterion of current collection quality was proposed, taking into account the efficiency and reliability of pantograph operation. The study was carried out using a mathematical model of the interaction between the pantograph and the catenary system, allowing assessment of the contact force and the intensity of arcing in the contact zone at different speeds. The simulation results allowed the efficiency of different methods of pantograph regulation to be estimated and the best option to be determined.

  16. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

Using forward period analysis (FPA), we obtain the period of a Morse oscillator and of a mathematical pendulum system to an accuracy of 100 significant digits. From these results, long-term solutions over [0, 10^60] (time units), ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.
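One of the two periodic systems above, the mathematical pendulum, has a closed-form period in terms of the complete elliptic integral K, which makes a convenient independent check on FPA-style results. The sketch below uses ordinary double precision and the arithmetic-geometric mean for K; the paper's 100-significant-digit computation would instead use multiple-precision arithmetic.

```python
# Exact pendulum period T = (4/omega0) * K(sin(theta0/2)), with
# K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))).

import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return 0.5 * (a + b)

def pendulum_period(theta0, g=9.81, length=1.0):
    """Period of theta'' = -(g/L) sin(theta) for amplitude theta0 (radians)."""
    omega0 = math.sqrt(g / length)
    k = math.sin(0.5 * theta0)
    big_k = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
    return 4.0 * big_k / omega0

# Small amplitude recovers the textbook 2*pi*sqrt(L/g); large amplitude is longer.
print(round(pendulum_period(1e-6), 6), round(pendulum_period(math.radians(60)), 6))
```

The AGM converges quadratically, which is also why it is a favourite building block for the kind of very-high-precision reference computations the abstract describes.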

  17. Reducing Bolt Preload Variation with Angle-of-Twist Bolt Loading

    NASA Technical Reports Server (NTRS)

    Thompson, Bryce; Nayate, Pramod; Smith, Doug; McCool, Alex (Technical Monitor)

    2001-01-01

Critical high-pressure sealing joints on the Space Shuttle reusable solid rocket motor require precise control of bolt preload to ensure proper joint function. As the reusable solid rocket motor experiences rapid internal pressurization, correct bolt preloads maintain the sealing capability and structural integrity of the hardware. The angle-of-twist process provides the right combination of preload accuracy, reliability, process control, and assembly-friendly design, improving significantly over previous methods. The sophisticated angle-of-twist process controls have yielded answers to all discrepancies encountered, while the simplicity of the root process has assured joint preload reliability.

  18. Methods of biological dosimetry employing chromosome-specific staining

    DOEpatents

    Gray, Joe W.; Pinkel, Daniel

    2000-01-01

    Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods are provided to disable the hybridization capacity of shared, high copy repetitive sequences and/or remove such sequences to provide for useful contrast. Still further methods are provided to produce chromosome-specific staining reagents which are made specific to the targeted chromosomal material, which can be one or more whole chromosomes, one or more regions on one or more chromosomes, subsets of chromosomes and/or the entire genome. Probes and test kits are provided for use in tumor cytogenetics, in the detection of disease related loci, in analysis of structural abnormalities, such as translocations, and for biological dosimetry. Further, methods and prenatal test kits are provided to stain targeted chromosomal material of fetal cells, including fetal cells obtained from maternal blood. Still further, the invention provides for automated means to detect and analyse chromosomal abnormalities.

  19. Methods And Compositions For Chromosome-Specific Staining

    DOEpatents

    Gray, Joe W.; Pinkel, Daniel

    2003-08-19

    Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods are provided to disable the hybridization capacity of shared, high copy repetitive sequences and/or remove such sequences to provide for useful contrast. Still further methods are provided to produce chromosome-specific staining reagents which are made specific to the targeted chromosomal material, which can be one or more whole chromosomes, one or more regions on one or more chromosomes, subsets of chromosomes and/or the entire genome. Probes and test kits are provided for use in tumor cytogenetics, in the detection of disease related loci, in analysis of structural abnormalities, such as translocations, and for biological dosimetry. Further, methods and prenatal test kits are provided to stain targeted chromosomal material of fetal cells, including fetal cells obtained from maternal blood. Still further, the invention provides for automated means to detect and analyse chromosomal abnormalities.

  20. A New Method for the Evaluation and Prediction of Base Stealing Performance.

    PubMed

    Bricker, Joshua C; Bailey, Christopher A; Driggers, Austin R; McInnis, Timothy C; Alami, Arya

    2016-11-01

    Bricker, JC, Bailey, CA, Driggers, AR, McInnis, TC, and Alami, A. A new method for the evaluation and prediction of base stealing performance. J Strength Cond Res 30(11): 3044-3050, 2016-The purposes of this study were to evaluate a new timing gate method (TGM) using electronic timing gates to monitor base stealing performance in terms of reliability, differences between it and traditional stopwatch-collected times, and its ability to predict base stealing performance. Twenty-five healthy collegiate baseball players performed maximal effort base stealing trials with a right- and a left-handed pitcher. An infrared electronic timing system was used to calculate the reaction time (RT) and total time (TT), whereas coaches' times (CT) were recorded with digital stopwatches. Reliability of the TGM was evaluated with intraclass correlation coefficients (ICCs) and the coefficient of variation (CV). Differences between the TGM and traditional CT were calculated with paired samples t tests and Cohen's d effect size estimates. Base stealing performance predictability of the TGM was evaluated with Pearson's bivariate correlations. Acceptable relative reliability was observed (ICCs 0.74-0.84). Absolute reliability measures were acceptable for TT (CVs = 4.4-4.8%), but measures were elevated for RT (CVs = 32.3-35.5%). Statistical and practical differences were found between TT and CT (right p = 0.00, d = 1.28 and left p = 0.00, d = 1.49). The TGM TT seems to be a decent predictor of base stealing performance (r = -0.49 to -0.61). The authors recommend using the TGM used in this investigation for athlete monitoring because it was found to be reliable, seems to be more precise than traditional CT measured with a stopwatch, provides an additional variable of value (RT), and may predict future performance.
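Two of the reliability statistics in the abstract, the coefficient of variation and Cohen's d, have simple closed forms and can be sketched as follows. The sample values are invented for illustration; they are not the study's data, and the study's ICCs would require a repeated-measures model not shown here.

```python
import statistics

def coefficient_of_variation(xs):
    """CV as a percentage: trial-to-trial spread relative to the mean,
    the absolute-reliability measure reported in the abstract."""
    return 100.0 * statistics.stdev(xs) / statistics.mean(xs)

def cohens_d(a, b):
    """Cohen's d effect size using the pooled standard deviation,
    comparing two sets of times (e.g., timing-gate TT vs. coaches' CT)."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b))
                 / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd


# Invented example times in seconds (not study data):
gate_tt = [3.42, 3.48, 3.45, 3.51]   # electronic timing gate totals
coach_ct = [3.30, 3.35, 3.31, 3.38]  # stopwatch times
print(coefficient_of_variation(gate_tt))  # percent spread across trials
print(cohens_d(gate_tt, coach_ct))        # standardized gate-vs-stopwatch gap
```

A large positive d here would mirror the study's finding that stopwatch times run systematically faster (shorter) than gate-measured times, a known human-reaction artifact of manual timing.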

Top