Sample records for obtaining accurate, reliable data

  1. The Data Evaluation for Obtaining Accuracy and Reliability

    NASA Astrophysics Data System (ADS)

    Kim, Chang Geun; Chae, Kyun Shik; Lee, Sang Tae; Bhang, Gun Woong

    2012-11-01

    Numerous scientific measurement results flood in from papers, data books, and other sources as the internet grows rapidly. We encounter many different measurement results for the same measurand and must then choose the most reliable one among them. But choosing accurate and reliable data is not as easy as picking a flavor at an ice cream parlor: even expert users find it difficult to distinguish accurate and reliable scientific data from the huge volume of measurement results. For this reason, data evaluation is becoming more important with the rapid growth of the internet and globalization. Furthermore, the expression of measurement results is not standardized. In response to these needs, international efforts have been strengthened. As a first step, a globally harmonized terminology for metrology and a guide to the expression of uncertainty in measurement were published through ISO. These methods have spread widely across many areas of science as a means of obtaining accuracy and reliability in measurement. This paper introduces the GUM, SRD, and data evaluation for atomic collisions.
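The GUM's central prescription, combining standard uncertainties in quadrature via the law of propagation of uncertainty, can be sketched as follows; the Ohm's-law measurement and all numeric values are illustrative, not taken from the record:

```python
import math

def combined_uncertainty(sensitivities, uncertainties):
    """GUM law of propagation for uncorrelated inputs:
    u_c = sqrt(sum((c_i * u_i)^2))."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, uncertainties)))

# Illustrative measurand: V = I * R, so dV/dI = R and dV/dR = I.
I, R = 2.0, 10.0          # amperes, ohms (hypothetical values)
u_I, u_R = 0.01, 0.05     # standard uncertainties of I and R
u_V = combined_uncertainty([R, I], [u_I, u_R])
k = 2                     # coverage factor for ~95% expanded uncertainty
print(f"V = {I * R:.2f} V, U = {k * u_V:.3f} V (k=2)")
```

The sensitivity coefficients are the partial derivatives of the model with respect to each input, evaluated at the measured values.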

  2. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, natural resources have declined, making it important to develop reliable methodologies for their management. The advent of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed in the quality of the available information in order to obtain reliably classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the resulting thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high resolution imagery was employed. The classes of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet 'à trous' and Weighted Wavelet 'à trous' through Fractal Dimension Maps) were chosen to improve the data quality with the goal of analyzing the vegetation classes. Different classification algorithms were then applied in both pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier in the object-based approach to the Weighted Wavelet 'à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem owing to its heterogeneity and the small size of the species. It is thus important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.

  3. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
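One widely used post-processing method of this kind is Platt scaling: fitting a logistic function that maps raw classifier scores to calibrated probabilities. A minimal pure-Python sketch (the scores, labels, and hyperparameters below are hypothetical; the abstract does not say which calibration method the thesis develops):

```python
import math

def platt_scale(scores, labels, lr=0.1, epochs=2000):
    """Fit p(y=1|s) = sigmoid(a*s + b) by gradient descent on log-loss.
    A minimal stand-in for Platt scaling (no regularization)."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            grad_a += (p - y) * s / n   # dLoss/da for log-loss
            grad_b += (p - y) / n       # dLoss/db
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Hypothetical raw scores from an uncalibrated classifier, with true labels.
scores = [-2.0, -1.5, -0.5, 0.5, 1.5, 2.0]
labels = [0, 0, 1, 0, 1, 1]
a, b = platt_scale(scores, labels)
calibrated = [1.0 / (1.0 + math.exp(-(a * s + b))) for s in scores]
```

The fitted sigmoid preserves the ranking of the scores while reshaping them toward empirically accurate probabilities.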

  4. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  5. Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies

    PubMed Central

    Nguyen, Duc D.; Wang, Bao

    2017-01-01

    The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and the binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with solvent excluded surfaces (SESs) for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at a grid spacing of 1.0 Å compared to ΔGel at 0.2 Å, averaged over 153 molecules, is less than 0.2%. Our results indicate that a grid spacing of 0.6 Å ensures accuracy and reliability in ΔΔGel calculations. In fact, a grid spacing of 1.1 Å appears to deliver adequate accuracy for high-throughput screening. PMID:28211071

  6. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. The second order reliability method is therefore required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is efficient and accurate, providing an alternative tool for RBDO of engineering structures.

  7. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. This paper identifies the potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values. The reliability is defined as the probability of the strength being greater than the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis was performed, comparing reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and distributions that differed slightly from the known ones, were considered. Results showed substantial differences in reliability estimates even for almost undetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
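The stress-strength computation described above, R = P(strength > stress), and its sensitivity to the assumed distribution can be sketched with a Monte Carlo estimate; the distributions and parameter values below are hypothetical, not the study's data:

```python
import math
import random

random.seed(0)

def reliability(stress_draw, strength_draw, n=100_000):
    """Monte Carlo estimate of R = P(strength > stress)."""
    return sum(strength_draw() > stress_draw() for _ in range(n)) / n

mu_s, sd_s = 400.0, 20.0      # strength mean / sd (hypothetical units)
mu_x, sd_x = 300.0, 30.0      # stress mean / sd

# Assumption 1: normal strength.
r_normal = reliability(lambda: random.gauss(mu_x, sd_x),
                       lambda: random.gauss(mu_s, sd_s))

# Assumption 2: lognormal strength matched to the same mean and sd.
# A nearly undetectable change in shape mostly affects the tails,
# which is exactly where high-reliability estimates live.
sigma = math.sqrt(math.log(1 + (sd_s / mu_s) ** 2))
mu_ln = math.log(mu_s) - sigma ** 2 / 2
r_lognormal = reliability(lambda: random.gauss(mu_x, sd_x),
                          lambda: random.lognormvariate(mu_ln, sigma))
```

Comparing `r_normal` and `r_lognormal` at several strength margins reproduces the paper's point in miniature: the closer R is to 1, the more the estimate depends on the assumed tail shape.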

  8. Accurate reliability analysis method for quantum-dot cellular automata circuits

    NASA Astrophysics Data System (ADS)

    Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo

    2015-10-01

    The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of quantum-dot cellular automata (QCA), a novel field-coupled nanoelectronic device. It is therefore difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of fundamental QCA devices for different input signals. Binary decision diagrams (BDDs) are then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. With the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of its components. This method thus contributes to the construction of reliable QCA circuits.

  9. The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.

    PubMed

    O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard

    2010-08-01

    The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.

  10. Accurate mass measurements and their appropriate use for reliable analyte identification.

    PubMed

    Godfrey, A Ruth; Brenton, A Gareth

    2012-09-01

    Accurate mass instrumentation is becoming increasingly available to non-expert users, but these data can be misused, particularly for analyte identification. This article describes current best practice in assigning potential elemental formulae for reliable analyte identification, together with modern informatic approaches to analyte elucidation, including chemometric characterisation, data processing, and searching using facilities such as the Chemical Abstracts Service (CAS) Registry and ChemSpider.
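At its core, assigning candidate elemental formulae to an accurate mass is a constrained search within a mass tolerance. A toy brute-force sketch over C/H/N/O compositions (the element ranges and 5 ppm tolerance are illustrative; real workflows add chemical-plausibility filters and database searches such as CAS or ChemSpider):

```python
from itertools import product

# Monoisotopic masses (u) of common elements.
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def candidate_formulas(target, tol_ppm=5.0, max_counts=(20, 40, 6, 6)):
    """Brute-force CxHyNzOw compositions whose monoisotopic mass falls
    within tol_ppm of the target mass. Illustrative only: real tools
    also apply ring/double-bond and isotope-pattern constraints."""
    hits = []
    cs, hs, ns, os_ = (range(m + 1) for m in max_counts)
    for c, h, n, o in product(cs, hs, ns, os_):
        m = (c * MASS["C"] + h * MASS["H"]
             + n * MASS["N"] + o * MASS["O"])
        if m and abs(m - target) / target * 1e6 <= tol_ppm:
            hits.append((c, h, n, o))
    return hits

# Neutral monoisotopic mass of caffeine, C8H10N4O2, as a worked example.
hits = candidate_formulas(194.0804)
```

Even at a few ppm, more than one composition can survive the tolerance window, which is why mass accuracy alone is not sufficient for reliable identification.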

  11. Compensation method for obtaining accurate, sub-micrometer displacement measurements of immersed specimens using electronic speckle interferometry.

    PubMed

    Fazio, Massimo A; Bruno, Luigi; Reynaud, Juan F; Poggialini, Andrea; Downs, J Crawford

    2012-03-01

    We proposed and validated a compensation method that accounts for the optical distortion inherent in measuring displacements on specimens immersed in aqueous solution. A spherically-shaped rubber specimen was mounted and pressurized on a custom apparatus, with the resulting surface displacements recorded using electronic speckle pattern interferometry (ESPI). Point-to-point light direction computation is achieved by a ray-tracing strategy coupled with customized B-spline-based analytical representation of the specimen shape. The compensation method reduced the mean magnitude of the displacement error induced by the optical distortion from 35% to 3%, and ESPI displacement measurement repeatability showed a mean variance of 16 nm at the 95% confidence level for immersed specimens. The ESPI interferometer and numerical data analysis procedure presented herein provide reliable, accurate, and repeatable measurement of sub-micrometer deformations obtained from pressurization tests of spherically-shaped specimens immersed in aqueous salt solution. This method can be used to quantify small deformations in biological tissue samples under load, while maintaining the hydration necessary to ensure accurate material property assessment.

  12. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and a horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  13. Are normally sighted, visually impaired, and blind pedestrians accurate and reliable at making street crossing decisions?

    PubMed

    Hassan, Shirin E

    2012-05-04

    The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times.

  14. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC2,1) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC2,1 > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.
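The reliability estimate this study relies on, ICC(2,1) from a two-way random-effects ANOVA over a subjects-by-days matrix, can be computed directly; the step-count data below are invented for illustration:

```python
def icc_2_1(data):
    """ICC(2,1), two-way random-effects, single measurement
    (Shrout & Fleiss). data: one row per subject, one column per day."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between days
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))       # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical daily step counts: 4 subjects x 3 days,
# consistent within subjects, very different between subjects.
steps = [[8200, 8400, 8100],
         [4500, 4700, 4400],
         [12000, 11800, 12300],
         [6100, 6000, 6300]]
icc = icc_2_1(steps)
```

When between-subject variance dominates day-to-day variance, as here, the ICC is high and few monitoring days suffice, which is the logic behind the study's two-day result.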

  15. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    PubMed Central

    Hassan, Shirin E.

    2012-01-01

    Purpose. The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Methods. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. Results. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Conclusions. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times. PMID:22427593

  16. Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses

    PubMed Central

    Myers, Risa B.; Herskovic, Jorge R.

    2011-01-01

    Proposal and execution of clinical trials, computation of quality measures, and discovery of correlations between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDWs), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW) and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate, and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review,” in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain,” using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our
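The "Bayesian Chain" described above, treating each visit's billing outcome as a diagnostic test and applying Bayes' Theorem after every visit, can be sketched as follows; the prevalence, sensitivity, and specificity values are hypothetical, not the study's estimates:

```python
def bayes_update(prior, billed, sensitivity, specificity):
    """One Bayes' Theorem step, treating a visit's billing outcome as a
    test for the condition."""
    if billed:
        lik_pos, lik_neg = sensitivity, 1 - specificity
    else:
        lik_pos, lik_neg = 1 - sensitivity, specificity
    numer = lik_pos * prior
    return numer / (numer + lik_neg * (1 - prior))

def chain_probability(billing_history, prior, sensitivity, specificity):
    """'Bayesian Chain': fold the update over every visit, so each
    posterior becomes the prior for the next visit."""
    p = prior
    for billed in billing_history:
        p = bayes_update(p, billed, sensitivity, specificity)
    return p

# Hypothetical numbers: 10% prevalence; billing is 80% sensitive
# and 95% specific as a "test" for the condition.
p = chain_probability([True, True, False], prior=0.10,
                      sensitivity=0.80, specificity=0.95)
```

A patient's final probability can then be thresholded (or summed in expectation across patients) to produce the probabilistic count.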

  17. Reliability and Accuracy of Static Parameters Obtained From Ink and Pressure Platform Footprints.

    PubMed

    Zuil-Escobar, Juan Carlos; Martínez-Cepa, Carmen Belén; Martín-Urrialde, Jose Antonio; Gómez-Conesa, Antonia

    2016-09-01

    The purpose of this study was to evaluate the accuracy and the intrarater reliability of arch angle (AA), Staheli Index (SI), and Chippaux-Smirak Index (CSI) obtained from ink and pressure platform footprints. We obtained AA, SI, and CSI measurements from ink pedigraph footprints and pressure platform footprints in 40 healthy participants (aged 25.65 ± 5.187 years). Intrarater reliability was calculated for all parameters obtained using the 2 methods. Standard error of measurement and minimal detectable change were also calculated. A repeated-measures analysis of variance was used to identify differences between ink and pressure platform footprints. Intraclass correlation coefficients and Bland-Altman plots were used to compare the parameters obtained using the different methods. Intrarater reliability was >0.9 for all parameters and was slightly higher for the ink footprints. No statistical difference was found in the repeated-measures analysis of variance for any of the parameters. Intraclass correlation coefficient values for AA, SI, and CSI obtained using ink footprints and pressure platform footprints were excellent, ranging from 0.797 to 0.829. However, the pressure platform overestimated AA and underestimated SI and CSI. Our study revealed that AA, SI, and CSI were similar regardless of whether the ink or pressure platform method was used. In addition, the parameters showed high intrarater reliability and were reproducible. Copyright © 2016. Published by Elsevier Inc.

  18. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur

    PubMed Central

    Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E. J.; Shefelbine, S. J.; Hutchinson, J. R.

    2012-01-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810
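The convergence analysis advocated above (refine the mesh until the quantity of interest stabilizes) can be illustrated in one dimension with a tapered bar under axial load; the geometry and material values are arbitrary stand-ins, not the elephant femur model:

```python
def tip_displacement(n_elems, F=1000.0, L=1.0, E=10e9,
                     A0=1e-4, A1=0.5e-4):
    """Tip displacement of a linearly tapered bar under axial load F,
    discretized into n_elems elements, each using its midpoint
    cross-section. A 1D stand-in for an FEA convergence study."""
    dx = L / n_elems
    u = 0.0
    for i in range(n_elems):
        x_mid = (i + 0.5) * dx
        A = A0 + (A1 - A0) * x_mid / L    # linear taper in area
        u += F * dx / (E * A)             # elements act as springs in series
    return u

# Refine until the change between successive meshes is below tolerance.
prev, n = None, 2
while True:
    u = tip_displacement(n)
    if prev is not None and abs(u - prev) / abs(u) < 1e-5:
        break                             # converged at this mesh density
    prev, n = u, n * 2
```

The converged value approaches the exact integral F/E * ∫ dx/A(x); the same loop structure (solve, refine, compare) is what a 3D convergence study automates over element size and type.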

  19. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yanq, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    The aim was to develop a more precise and accurate method of acupoint location, and to identify a procedure for verifying whether an acupoint had been correctly located. On the face, we took acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values using a consistency information fusion algorithm, through a virtual simulation of a facial orientation coordinate system. Because of inconsistencies in each expert's original data, systematic error affects the general weight calculation. First, we corrected each expert's systematic acupoint location error to obtain a rational quantification of the consistency support degree of each expert's acupoint locations, yielding pointwise variable-precision fusion results and reducing each expert's acupoint location fusion error to pointwise variable precision. We then made more effective use of the measured characteristics of the different experts' acupoint locations, improving the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  20. Development of a Method to Obtain More Accurate General and Oral Health Related Information Retrospectively

    PubMed Central

    A, Golkari; A, Sabokseir; D, Blane; A, Sheiham; RG, Watt

    2017-01-01

    Statement of Problem: Early childhood is a crucial period of life as it affects one’s future health. However, precise data on adverse events during this period is usually hard to access or collect, especially in developing countries. Objectives: This paper first reviews the existing methods for retrospective data collection in health and social sciences, and then introduces a new method/tool for obtaining more accurate general and oral health related information from early childhood retrospectively. Materials and Methods: The Early Childhood Events Life-Grid (ECEL) was developed to collect information on the type and time of health-related adverse events during the early years of life, by questioning the parents. The validity of ECEL and the accuracy of information obtained by this method were assessed in a pilot study and in a main study of 30 parents of 8 to 11 year old children from Shiraz (Iran). Responses obtained from parents using the final ECEL were compared with the recorded health insurance documents. Results: There was an almost perfect agreement between the health insurance and ECEL data sets (Kappa value=0.95 and p < 0.001). Interviewees remembered the important events more accurately (100% exact timing match in case of hospitalization). Conclusions: The Early Childhood Events Life-Grid method proved to be highly accurate when compared with recorded medical documents. PMID:28959773

  1. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    DTIC Science & Technology

    2015-06-09

    Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327. Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA. afloren@utexas.edu

  2. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. This is done by dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2 by dissolving a precise amount of Hg2Cl2 in an electrolyte solution of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg.
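The charge needed to plate a predetermined mass of mercury follows Faraday's law of electrolysis, Q = zFm/M. A sketch for the HgO route (two electrons per mercuric ion); the 1 g target and 0.5 A current are illustrative, and 100% current efficiency is assumed:

```python
F_CONST = 96485.33    # Faraday constant, C/mol
M_HG = 200.59         # molar mass of mercury, g/mol

def charge_for_mass(mass_g, electrons_per_ion):
    """Faraday's law: Q = z * F * m / M, the charge in coulombs needed
    to deposit mass_g of Hg, assuming 100% current efficiency."""
    return electrons_per_ion * F_CONST * mass_g / M_HG

# Plating 1.00 g of Hg from Hg2+ (the HgO embodiment, z = 2):
q = charge_for_mass(1.00, 2)
t_hours = q / 0.5 / 3600   # plating time at a constant 0.5 A
```

For the Hg2Cl2 embodiment, each mercurous ion delivers one electron per Hg atom (z = 1), so the same target mass requires half the charge.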

  3. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. This is done by dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2 by dissolving a precise amount of Hg2Cl2 in an electrolyte solution of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg. 1 fig.

  4. Reliability of skin biopsies in determining accurate tumor margins: a retrospective study after Mohs micrographic surgery.

    PubMed

    Koslosky, Cynthia Lynn; El Tal, Abdel Kader; Workman, Benjamin; Tamim, Hani; Durance, Michelle Christine; Mehregan, David Ali

    2014-09-01

    Skin biopsy reports of basal cell carcinoma and squamous cell carcinoma are often accompanied by comments on the margins. A physician's management can be influenced by such reports, particularly when the margins are reported as clear and no further interventions are pursued. To retrospectively review pathology margins on Mohs micrographic surgery (MMS) cases performed at a university center and to compare biopsy margins with the Mohs margins found on the first stage. Data on skin biopsy margins were collected for 1,000 Mohs surgery cases and compared with the margins found on the first stage of MMS. Overall, of the biopsies that showed only deep margin involvement, a lateral margin was seen on 32% of the first stages of MMS. Conversely, of the biopsies that showed only lateral margin involvement, a deep margin was seen on 14% of the first stages of MMS. Of the biopsies that showed clear margins, a positive margin was seen in 30% of the cases on the first stage of MMS. Skin biopsies processed through the "bread-loafing" technique are not reliable in detecting accurate margins, and therefore a biopsy report should not comment on margin involvement.

  5. Portfolio assessment during medical internships: How to obtain a reliable and feasible assessment procedure?

    PubMed

    Michels, Nele R M; Driessen, Erik W; Muijtjens, Arno M M; Van Gaal, Luc F; Bossaert, Leo L; De Winter, Benedicte Y

    2009-12-01

    A portfolio is used to mentor and assess students' clinical performance at the workplace. However, students and raters often perceive the portfolio as a time-consuming instrument. In this study, we investigated whether portfolio-based assessment during medical internships can be both reliable and feasible. The domain-oriented reliability of 61 double-rated portfolios was measured using a generalisability analysis, with portfolio tasks and raters as sources of variation in measuring the performance of a student. We obtained a reliability (Phi coefficient) of 0.87 with this internship portfolio containing 15 double-rated tasks. The generalisability analysis showed that an acceptable level of reliability (Phi = 0.80) was maintained when the number of portfolio tasks was decreased to 13 or 9, using one and two raters, respectively. Our study shows that a portfolio can be a reliable method for the assessment of workplace learning. The possibility of reducing the number of tasks or raters while maintaining a sufficient level of reliability suggests an increase in the feasibility of portfolio use for both students and raters.
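
    The task/rater trade-off described above follows from the generalisability-theory dependability coefficient for absolute decisions. A sketch, using hypothetical variance components chosen only to roughly reproduce the reported Phi values (they are not the study's estimates):

```python
def phi(var_p, var_task, var_rater, var_resid, n_tasks, n_raters):
    """Dependability (Phi) for absolute decisions in a persons x tasks x raters design."""
    abs_error = (var_task / n_tasks
                 + var_rater / n_raters
                 + var_resid / (n_tasks * n_raters))
    return var_p / (var_p + abs_error)

# Hypothetical variance components (NOT estimated from the study's data)
comps = dict(var_p=1.0, var_task=1.3, var_rater=0.0, var_resid=1.95)

for n_t, n_r in [(15, 2), (13, 1), (9, 2)]:
    print(f"{n_t} tasks, {n_r} raters: Phi = {phi(**comps, n_tasks=n_t, n_raters=n_r):.2f}")
```

With these components, 15 double-rated tasks give Phi near 0.87, and either 13 single-rated or 9 double-rated tasks hold Phi near 0.80, mirroring the abstract's figures.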

  6. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil characteristics that are not readily available is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure for selecting the most essential PTF input variables, but also yields more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Then, applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, results showed that the PTFs developed by the GMDH and MLR procedures using all soil RACs, including Ks, gave more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared with GMDH and MLR. The results also revealed
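
    Assuming the "E" quoted above is the Nash-Sutcliffe model efficiency commonly used to score infiltration PTFs (an assumption; the abstract does not define it), the statistic compares prediction error against variance around the observed mean:

```python
from statistics import mean

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
    obar = mean(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - obar) ** 2 for o in observed)
    return 1 - sse / sst

# Hypothetical cumulative infiltration values (mm), made up for illustration
obs = [12.0, 20.0, 33.0, 45.0, 58.0]
pred = [13.5, 18.2, 34.0, 44.1, 60.0]
print(round(nash_sutcliffe(obs, pred), 3))
```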

  7. Guidelines and techniques for obtaining water samples that accurately represent the water chemistry of an aquifer

    USGS Publications Warehouse

    Claassen, Hans C.

    1982-01-01

    Obtaining ground-water samples that accurately represent the water chemistry of an aquifer is a complex task. Before a ground-water sampling program can be started, an understanding of the kind of chemical data needed and the potential changes in water chemistry resulting from various drilling, well-completion, and sampling techniques is needed. This report provides a basis for such an evaluation and permits a choice of techniques that will result in obtaining the best possible data for the time and money allocated.

  8. An accurate and reliable method of thermal data analysis in thermal imaging of the anterior knee for use in cryotherapy research.

    PubMed

    Selfe, James; Hardaker, Natalie; Thewlis, Dominic; Karki, Anna

    2006-12-01

    To develop an anatomic marker system (AMS) as an accurate, reliable method of thermal imaging data analysis, for use in cryotherapy research. Investigation of the accuracy of new thermal imaging technique. Hospital orthopedic outpatient department in England. Consecutive sample of 9 patients referred to anterior knee pain clinic. Not applicable. Thermally inert markers were placed at specific anatomic locations, defining an area over the anterior knee of patients with anterior knee pain. A baseline thermal image was taken. Patients underwent a 3-minute thermal washout of the affected knee. Thermal images were collected at a rate of 1 image per minute for a 20-minute re-warming period. A Matlab (version 7.0) program was written to digitize the marker positions and subsequently calculate the mean of the area over the anterior knee. Virtual markers were then defined as 15% distal from the proximal marker, 30% proximal from the distal markers, 15% lateral from the medial marker, and 15% medial from the lateral marker. The virtual markers formed an ellipse, which defined an area representative of the patella shape. Within the ellipse, the mean value of the full pixels determined the mean temperature of this region. Ten raters were recruited to use the program and interrater reliability was investigated. The intraclass correlation coefficient produced coefficients within acceptable bounds, ranging from .82 to .97, indicating adequate interrater reliability. The AMS provides an accurate, reliable method for thermal imaging data analysis and is a reliable tool with which to advance cryotherapy research.

  9. Is photometry an accurate and reliable method to assess boar semen concentration?

    PubMed

    Camus, A; Camugli, S; Lévêque, C; Schmitt, E; Staub, C

    2011-02-01

    Sperm concentration assessment is a key point to ensure an appropriate sperm number per dose in species subjected to artificial insemination (AI). The aim of the present study was to evaluate the accuracy and reliability of two commercially available photometers, AccuCell™ and AccuRead™, pre-calibrated for boar semen, in comparison with UltiMate™ boar version 12.3D, NucleoCounter SP100, and the Thoma hemacytometer. For each type of instrument, concentration was measured on 34 boar semen samples in quadruplicate, and agreement between measurements and instruments was evaluated. Accuracy for both photometers was expressed as the mean percentage difference from the general mean: -0.6% for AccuCell™ and 0.5% for AccuRead™, and no significant differences were found between instruments. Repeatability was 1.8% and 3.2% for AccuCell™ and AccuRead™, respectively. Small differences were observed between instruments (confidence interval 3%) except when the hemacytometer was used as a reference. Even though the hemacytometer is considered worldwide as the gold standard, it was the most variable instrument (confidence interval 7.1%). The conclusion is that routine photometric measures of raw semen concentration are reliable, accurate, and precise using AccuRead™ or AccuCell™. There are multiple steps in semen processing that can induce sperm loss and therefore increase differences between theoretical and real sperm numbers in doses. Potential biases that depend on the workflow, but not on the initial photometric measure of semen concentration, are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
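
    The two quantities reported above (percentage difference from the general mean for accuracy, coefficient of variation of replicates for repeatability) can be sketched as follows, with made-up quadruplicate readings standing in for the study's data:

```python
from statistics import mean, stdev

# Illustrative quadruplicate readings (10^6 sperm/mL) for one sample on two
# instruments; the values are invented and will not reproduce the study's figures.
readings = {
    "AccuCell": [412, 418, 409, 415],
    "AccuRead": [405, 421, 413, 398],
}

grand_mean = mean(v for reps in readings.values() for v in reps)

for name, reps in readings.items():
    pct_diff = 100 * (mean(reps) - grand_mean) / grand_mean  # accuracy vs general mean
    cv = 100 * stdev(reps) / mean(reps)                      # repeatability (CV %)
    print(f"{name}: diff to general mean {pct_diff:+.2f}%, CV {cv:.1f}%")
```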

  10. Dental measurements and Bolton index reliability and accuracy obtained from 2D digital, 3D segmented CBCT, and 3D intraoral laser scanner

    PubMed Central

    San José, Verónica; Bellot-Arcís, Carlos; Tarazona, Beatriz; Zamora, Natalia; O Lagravère, Manuel

    2017-01-01

    Background To compare the reliability and accuracy of direct and indirect dental measurements derived from two types of 3D virtual models, generated by intraoral laser scanning (ILS) and segmented cone beam computed tomography (CBCT), comparing these with a 2D digital model. Material and Methods One hundred patients were selected. All patients' records included initial plaster models, an intraoral scan, and a CBCT. Patients' dental arches were scanned with the iTero® intraoral scanner, while the CBCTs were segmented to create three-dimensional models. To obtain 2D digital models, plaster models were scanned using a conventional 2D scanner. Once digital models had been obtained using these three methods, direct dental measurements were measured and indirect measurements were calculated. Differences between methods were assessed by means of paired t-tests and regression models. Intra- and inter-observer error were analyzed using Dahlberg's d and coefficients of variation. Results Intraobserver and interobserver error for the ILS models was less than 0.44 mm, while for segmented CBCT models the error was less than 0.97 mm. ILS models provided statistically and clinically acceptable accuracy for all dental measurements, while CBCT models showed a tendency to underestimate measurements in the lower arch, although within the limits of clinical acceptability. Conclusions ILS and CBCT segmented models are both reliable and accurate for dental measurements. Integrating ILS with CBCT scans would combine dental and skeletal information in a single record. Key words: CBCT, intraoral laser scanner, 2D digital models, 3D models, dental measurements, reliability. PMID:29410764

  11. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: what new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods? Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to obtain and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.

  12. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. 
The most prominent approximations made

  13. The need for accurate total cholesterol measurement. Recommended analytical goals, current state of reliability, and guidelines for better determinations.

    PubMed

    Naito, H K

    1989-03-01

    We have approached the dawn of a new era in the detection, evaluation, treatment, and monitoring of individuals with elevated blood cholesterol levels who are at increased risk for CHD. The NHLBI's National Cholesterol Education Program will be the major force underlying this national awareness program, which depends on clinical laboratories providing reliable data. Precision, or reproducibility of results, is not a problem for most laboratories, but accuracy is a major concern. Both manufacturers and laboratorians need to standardize the measurement of cholesterol so that the accuracy base is traceable to the NCCLS NRS/CHOL. Manufacturers need to adopt a uniform policy that will ensure that the values assigned to calibration, quality control, and quality assurance or survey materials are accurate and traceable to the NCCLS NRS/CHOL. Since, at present, there are some limitations of these materials caused by matrix effects, laboratories are encouraged to use the CDC-NHLBI National Reference Laboratory Network to evaluate and monitor their ability to measure patient blood cholesterol levels accurately. Major areas of analytical problems are identified, and general as well as specific recommendations are provided to help ensure reliable measurement of cholesterol in patient specimens.

  14. Do hand-held calorimeters provide reliable and accurate estimates of resting metabolic rate?

    PubMed

    Van Loan, Marta D

    2007-12-01

    This paper provides an overview of a new technique for indirect calorimetry and the assessment of resting metabolic rate. Information from the research literature includes findings on the reliability and validity of a new hand-held indirect calorimeter, as well as its use in clinical and field settings. Research findings to date are mixed. The MedGem instrument has provided more consistent results when compared with the Douglas bag method of measuring metabolic rate. The BodyGem instrument has been shown to be less accurate when compared with standard metabolic carts. Furthermore, when the BodyGem has been used with clinical patients or with undernourished individuals, the results have not been acceptable. Overall, there is not a large enough body of evidence to definitively support the use of these hand-held devices for assessment of metabolic rate in a wide variety of clinical or research environments.

  15. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
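
    The convergence idea can be illustrated with a toy two-point-charge model (this is a generic multipole example, not the paper's CAMM formalism): adding the dipole term to the net-charge term drives the truncated expansion toward the exact potential as distance grows.

```python
# Toy two-point-charge "diatomic": charges +q and -q separated by d.
q, d = 0.3, 1.0   # partial charge and charge separation, arbitrary units

def v_exact(R):
    """Axial potential of charges +q at +d/2 and -q at -d/2, for R > d/2."""
    return q / (R - d / 2) - q / (R + d / 2)

def v_dipole(R):
    """Leading multipole (dipole) term: mu / R^2 with mu = q * d."""
    return q * d / R**2

for R in (5.0, 10.0, 20.0):
    err = abs(v_dipole(R) - v_exact(R)) / abs(v_exact(R))
    print(f"R = {R:>4}: relative error {err:.2%}")
```

For this geometry the relative error of the dipole truncation is exactly 0.25/R², so it falls off quadratically with distance.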

  16. Latest Developments on Obtaining Accurate Measurements with Pitot Tubes in ZPG Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Nagib, Hassan; Vinuesa, Ricardo

    2013-11-01

    The ability of available Pitot tube corrections to provide accurate mean velocity profiles in ZPG boundary layers is re-examined following the recent work by Bailey et al. Measurements by Bailey et al., carried out with probes of diameters ranging from 0.2 to 1.89 mm, together with new data taken with larger diameters up to 12.82 mm, show deviations with respect to available high-quality datasets and hot-wire measurements in the same Reynolds number range. These deviations are significant in the buffer region around y+ = 30 - 40, and lead to disagreement in the von Kármán coefficient κ extracted from the profiles. New forms for the shear, near-wall, and turbulence corrections are proposed, highlighting the importance of the last of these. Improved agreement in mean velocity profiles is obtained with the new forms, in which the shear and near-wall corrections contribute around 85%, and the remaining 15% of the total correction comes from the turbulence correction. Finally, available algorithms to correct wall position in profile measurements of wall-bounded flows are tested, using as a benchmark the corrected Pitot measurements with artificially simulated probe shifts and blockage effects. We develop a new scheme, κB - Musker, which is able to accurately locate the wall position.

  17. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    NASA Astrophysics Data System (ADS)

    Anderson, Amos Gerald

    2010-06-01

    The Schrödinger equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we are held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we have seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity rather than quality of processing units, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity in mind, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  18. Validity and reliability of the abdominal test and evaluation systems tool (ABTEST) to accurately measure abdominal force.

    PubMed

    Glenn, Jordan M; Galey, Madeline; Edwards, Abigail; Rickert, Bradley; Washington, Tyrone A

    2015-07-01

    The ability to generate force from the core musculature is a critical factor for sports and general activities, with insufficiencies predisposing individuals to injury. This study evaluated isometric force production as a valid and reliable method of assessing abdominal force using the abdominal test and evaluation systems tool (ABTEST). A secondary analysis estimated the 1-repetition maximum on a commercially available abdominal machine for comparison with maximum force and average power on the ABTEST system. Reliability was measured using a test-retest design on ABTEST; validity was measured via comparison with the estimated 1-repetition maximum on the commercially available abdominal device. Participants applied isometric abdominal force against a transducer, and muscular activation was evaluated by measuring normalized electromyographic activity at the rectus abdominis, rectus femoris, and erector spinae. Test-retest force production on ABTEST was significantly correlated (r=0.84; p<0.001). Mean electromyographic activity for the rectus abdominis (72.93% and 75.66%), rectus femoris (6.59% and 6.51%), and erector spinae (6.82% and 5.48%) was observed for trial-1 and trial-2, respectively. Significant correlations with the estimated 1-repetition maximum were found for average power (r=0.70, p=0.002) and maximum force (r=0.72, p<0.001). The data indicate that ABTEST can accurately measure rectus abdominis force isolated from hip-flexor involvement. Negligible activation of the erector spinae substantiates little subjective effort among participants in the lower back. Results suggest ABTEST is a valid and reliable method of evaluating abdominal force. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
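
    The test-retest statistic quoted above (r = 0.84) is a Pearson product-moment correlation. A self-contained sketch with made-up force readings (the data are invented for illustration and do not reproduce the study's value):

```python
from statistics import mean

# Hypothetical test-retest abdominal force readings (N) for ten participants
trial_1 = [310, 275, 402, 355, 290, 330, 418, 265, 380, 345]
trial_2 = [322, 268, 395, 361, 301, 325, 430, 259, 371, 352]

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

r = pearson(trial_1, trial_2)
print(f"test-retest Pearson r = {r:.2f}")
```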

  19. Obtaining reliable phase-gradient delays from otoacoustic emission data.

    PubMed

    Shera, Christopher A; Bergevin, Christopher

    2012-08-01

    Reflection-source otoacoustic emission phase-gradient delays are widely used to obtain noninvasive estimates of cochlear function and properties, such as the sharpness of mechanical tuning and its variation along the length of the cochlear partition. Although different data-processing strategies are known to yield different delay estimates and trends, their relative reliability has not been established. This paper uses in silico experiments to evaluate six methods for extracting delay trends from reflection-source otoacoustic emissions (OAEs). The six methods include both previously published procedures (e.g., phase smoothing, energy-weighting, data exclusion based on signal-to-noise ratio) and novel strategies (e.g., peak-picking, all-pass factorization). Although some of the methods perform well (e.g., peak-picking), others introduce substantial bias (e.g., phase smoothing) and are not recommended. In addition, since standing waves caused by multiple internal reflection can complicate the interpretation and compromise the application of OAE delays, this paper develops and evaluates two promising signal-processing strategies, the first based on time-frequency filtering using the continuous wavelet transform and the second on cepstral analysis, for separating the direct emission from its subsequent reflections. Altogether, the results help to resolve previous disagreements about the frequency dependence of human OAE delays and the sharpness of cochlear tuning while providing useful analysis methods for future studies.
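
    The underlying quantity, before any of the six extraction methods compared in the paper are applied, is the phase-gradient delay: the negative slope of unwrapped emission phase versus frequency, tau = -(1/(2*pi)) * dphi/df. A minimal sketch with synthetic data:

```python
import math

def phase_gradient_delay(freqs_hz, phase_rad):
    """Two-point delay estimates (seconds) between adjacent frequencies."""
    pairs = zip(zip(freqs_hz, phase_rad), zip(freqs_hz[1:], phase_rad[1:]))
    return [-(p2 - p1) / (2 * math.pi * (f2 - f1))
            for (f1, p1), (f2, p2) in pairs]

# Synthetic emission with a constant 10 ms delay: phi(f) = -2*pi*tau*f
tau = 0.010
freqs = [1000.0, 1100.0, 1200.0, 1300.0]
phases = [-2 * math.pi * tau * f for f in freqs]
delays = phase_gradient_delay(freqs, phases)
print(delays)  # each estimate recovers ~0.010 s
```

Real OAE phase must be unwrapped and is noisy, which is exactly why the paper compares smoothing, weighting, and peak-picking strategies for extracting the trend.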

  20. Reliability Driven Space Logistics Demand Analysis

    NASA Technical Reports Server (NTRS)

    Knezevic, J.

    1995-01-01

    Accurate selection of the quantity of logistic support resources has a strong influence on mission success, system availability and the cost of ownership. At the same time the accurate prediction of these resources depends on the accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems which are the basis for the determination of the demands for logistics resources needed during the operational life or mission of space systems. The applicability of the method presented is demonstrated through several examples.

  1. Circumferential finger measurements utilizing a torque meter to increase reliability.

    PubMed

    King, T I

    1993-01-01

    The purpose of this study was to compare the reliabilities of two methods of measuring finger circumference. Traditionally, finger circumference is determined clinically by the use of a tape measure. In this study, a tape-measure device for recording finger circumference utilizing a torque meter was compared with the traditional method to determine reliability differences. Ninety-two occupational therapists and occupational therapy students obtained circumferential measurements of the author's left index finger at the middle of the proximal phalanx utilizing the two methods. The readings obtained for each method were analyzed to determine the coefficient of variation and to compare their variances. The coefficient of variation for the traditional method was 2.92 and for the device utilizing the torque meter was 0.75. The F ratio was 15.63, which is significant at the 0.01 level. The results of this study indicate greater interrater reliability using a device that can accurately measure torque and allow the therapist to control the amount of tension applied when obtaining circumferential measurements using a tape measure.
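
    The two statistics used in this record, the coefficient of variation of each method's readings and the F ratio of their variances, can be sketched as follows (the readings are made up and will not reproduce the study's exact figures of 2.92, 0.75, and 15.63):

```python
from statistics import mean, stdev, variance

# Invented circumference readings (cm) from multiple raters under each method
traditional = [6.8, 7.1, 6.6, 7.3, 6.9, 7.0, 6.5, 7.2]
torque_meter = [6.95, 7.00, 6.90, 7.05, 6.98, 7.02, 6.96, 7.04]

def cv_percent(xs):
    """Coefficient of variation as a percentage of the mean."""
    return 100 * stdev(xs) / mean(xs)

f_ratio = variance(traditional) / variance(torque_meter)
print(f"CV traditional:  {cv_percent(traditional):.2f}%")
print(f"CV torque meter: {cv_percent(torque_meter):.2f}%")
print(f"F ratio of variances: {f_ratio:.1f}")
```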

  2. Reliability of Scores Obtained from Self-, Peer-, and Teacher-Assessments on Teaching Materials Prepared by Teacher Candidates

    ERIC Educational Resources Information Center

    Nalbantoglu Yilmaz, Funda

    2017-01-01

    This study aims to determine the reliability of scores obtained from self-, peer-, and teacher-assessments in terms of teaching materials prepared by teacher candidates. The study group of this research constitutes 56 teacher candidates. In the scope of research, teacher candidates were asked to develop teaching material related to their study.…

  3. Reliability of cystometrically obtained intravesical pressures in patients with neurogenic bladders.

    PubMed

    Hess, Marika J; Lim, Lance; Yalla, Subbarao V

    2002-01-01

    Urodynamic studies in patients with neurogenic bladder detect and categorize neurourodynamic states, identify the risk for urologic sequelae, and determine the necessity for interventions. Because urodynamic studies serves as a prognostic indicator and guides patient management, pressure measurements during the study must accurately represent bladder function under physiologic conditions. Because nonphysiologic bladder filling used during conventional urodynamic studies may alter the bladder's accommodative properties, we studied how closely the intravesical pressures obtained before filling cystometry resembled those obtained during the filling phase of the cystometrogram. Twenty-two patients (21 men, 1 woman) with neurogenic bladders underwent standard urodynamic studies. A 16F triple-lumen catheter was inserted into the bladder, and the intravesical pressures were recorded (physiologic volume-specific pressures, PVSP). After emptying the bladder, an equal volume of normal saline solution was reinfused, and the pressures were recorded again (cystometric volume-specific pressure, CVSP). All patients underwent routine fluoroscopically assisted urodynamic testing. The PVSP and the CVSP were compared using the Wilcoxon signed ranks test. P value of .05 was significant. The mean PVSP was 14.5 cmH2O (range, 4-42 cmH2O) and mean CVSP was 20.6 cmH2O (range, 6-70 cmH2O). The CVSP was significantly higher than the PVSP (P = .01). Filling pressures during cystometry (CVSP) were significantly higher than the pressures measured at rest (PVSP). This study also suggests a strong correlation between PVSP and CVSP.

  4. Reliability and validity of pendulum test measures of spasticity obtained with the Polhemus tracking system from patients with chronic stroke.

    PubMed

    Bohannon, Richard W; Harrison, Steven; Kinsella-Shaw, Jeffrey

    2009-07-30

    Spasticity is a common impairment accompanying stroke. Spasticity of the quadriceps femoris muscle can be quantified using the pendulum test. The measurement properties of pendular kinematics captured using a magnetic tracking system has not been studied among patients who have experienced a stroke. Therefore, this study describes the test-retest reliability and known groups and convergent validity of the pendulum test measures obtained with the Polhemus tracking system. Eight patients with chronic stroke underwent pendulum tests with their affected and unaffected lower limbs, with and without the addition of a 2.2 kg cuff weight at the ankle, using the Polhemus magnetic tracking system. Also measured bilaterally were knee resting angles, Ashworth scores (grades 0-4) of quadriceps femoris muscles, patellar tendon (knee jerk) reflexes (grades 0-4), and isometric knee extension force. Three measures obtained from pendular traces of the affected side were reliable (intraclass correlation coefficient > or = .844). Known groups validity was confirmed by demonstration of a significant difference in the measurements between sides. Convergent validity was supported by correlations > or = .57 between pendulum test measures and other measures reflective of spasticity. Pendulum test measures obtained with the Polhemus tracking system from the affected side of patients with stroke have good test-retest reliability and both known groups and convergent validity.

  5. Reliability of the measures of weight-bearing distribution obtained during quiet stance by digital scales in subjects with and without hemiparesis.

    PubMed

    de Araujo-Barbosa, Paulo Henrique Ferreira; de Menezes, Lidiane Teles; Costa, Abraão Souza; Couto Paz, Clarissa Cardoso Dos Santos; Fachin-Martins, Emerson

    2015-05-01

    Described as an alternative way of assessing weight-bearing asymmetries, the measures obtained from digital scales have been used as an index to classify weight-bearing distribution. This study aimed to describe the intra-test and the test/retest reliability of measures in subjects with and without hemiparesis during quiet stance. The percentage of body weight borne by one limb was calculated for a sample of subjects with hemiparesis and for a control group that was matched by gender and age. A two-way analysis of variance was used to verify the intra-test reliability. This analysis was calculated using the differences between the averages of the measures obtained during single, double or triple trials. The intra-class correlation coefficient (ICC) was utilized and data plotted using the Bland-Altman method. The intra-test analysis showed significant differences, only observed in the hemiparesis group, between the measures obtained by single and triple trials. Moderate to excellent ICC values (0.69-0.84) between test and retest were observed in the hemiparesis group, while for the control group ICC values (0.41-0.74) were classified as moderate, progressing from almost poor for measures obtained by a single trial to almost excellent for those obtained by triple trials. In conclusion, good reliability ranging from moderate to excellent classifications was found for participants with and without hemiparesis. Moreover, repeatability improved with fewer trials for participants with hemiparesis and with more trials for participants without hemiparesis.
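
    The Bland-Altman analysis mentioned above reduces to a bias and limits of agreement computed from paired differences; a sketch with invented weight-bearing percentages, not the study's data.

```python
import numpy as np

# Hypothetical test/retest weight-bearing percentages (% body weight on one limb).
test = np.array([48.0, 55.0, 42.0, 60.0, 51.0, 46.0])
retest = np.array([50.0, 53.0, 45.0, 58.0, 52.0, 47.0])

diff = test - retest
bias = diff.mean()                # systematic difference between sessions
loa = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f}%, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]%")
```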

  6. Timeline historical review of income and financial transactions: a reliable assessment of personal finances.

    PubMed

    Black, Anne C; Serowik, Kristin L; Ablondi, Karen M; Rosen, Marc I

    2013-01-01

    The need for accurate and reliable information about income and resources available to individuals with psychiatric disabilities is critical for the assessment of need and evaluation of programs designed to alleviate financial hardship or affect finance allocation. Measurement of finances is ubiquitous in studies of economics, poverty, and social services. However, evidence has demonstrated that these measures often contain error. We compare the 1-week test-retest reliability of income and finance data from 24 adult psychiatric outpatients using assessment-as-usual (AAU) and a new instrument, the Timeline Historical Review of Income and Financial Transactions (THRIFT). Reliability estimates obtained with the THRIFT for Income (0.77), Expenses (0.91), and Debt (0.99) domains were significantly better than those obtained with AAU. Reliability estimates for Balance did not differ. THRIFT reduced measurement error and provided more reliable information than AAU for assessment of personal finances in psychiatric patients receiving Social Security benefits. The instrument also may be useful with other low-income groups.

  7. A simple and reliable sensor for accurate measurement of angular speed for low speed rotating machinery

    NASA Astrophysics Data System (ADS)

    Kuosheng, Jiang; Guanghua, Xu; Tangfei, Tao; Lin, Liang; Yi, Wang; Sicong, Zhang; Ailing, Luo

    2014-01-01

    This paper presents the theory and implementation of a novel sensor system for measuring the angular speed (AS) of a shaft rotating in a very low speed range, down to nearly zero speed. The sensor system consists mainly of an eccentric sleeve rotating with the shaft whose angular speed is to be measured, and an eddy current displacement sensor that obtains the profile of the sleeve for AS calculation. When the shaft rotates at constant speed, the profile is a pure sinusoidal trace; when the shaft speed varies, the profile is a phase-modulated signal. By applying a demodulating procedure, the AS can be obtained in a straightforward manner. The sensor system was validated experimentally on a gearbox test rig, and the results show that the AS obtained is consistent with that obtained by a conventional encoder. However, the new sensor gives very smooth and stable traces of the AS, demonstrating its higher accuracy and reliability in capturing the AS of low speed operations with speed-up and slow-down transients. In addition, the experiment also shows that the sensor is easy and cost-effective to realise in different applications such as condition monitoring and process control.
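
    The demodulation idea can be sketched as follows: treat the sleeve trace as a phase-modulated cosine of the shaft angle, recover the instantaneous phase with a Hilbert transform, and differentiate to get angular speed. All signal parameters below are assumed for illustration; the paper's exact processing chain may differ.

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0                                   # sample rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
# Assumed shaft speed: 10 rev/s mean with a 0.5 Hz fluctuation of +/- 2 rev/s.
speed_true = 2 * np.pi * (10.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t))   # rad/s
angle = np.cumsum(speed_true) / fs            # shaft angle by integration, rad
profile = 0.5 * np.cos(angle)                 # eccentric-sleeve displacement trace

# Phase demodulation: instantaneous phase of the analytic signal, then its
# time derivative gives the angular speed.
phase = np.unwrap(np.angle(hilbert(profile)))
speed_est = np.gradient(phase, 1.0 / fs)      # rad/s

trim = 300                                    # discard Hilbert edge effects
err = np.abs(speed_est[trim:-trim] - speed_true[trim:-trim]).max()
print(f"max demodulation error away from edges: {err:.3f} rad/s")
```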

  8. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and a nonlinear version assuming independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse square distance dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
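
    For context, the conventional least-squares route the paper improves upon can be sketched as a linear fit of force against -R/(6D²) for a sphere-plane van der Waals interaction; the probe radius, separations, and noise level below are invented, and this is not the paper's own closed-form expression.

```python
import numpy as np

R = 10e-6                          # probe radius, m (assumed)
A_true = 1.0e-19                   # Hamaker constant used to synthesise data, J
D = np.linspace(2e-9, 20e-9, 50)   # separations, m (assumed range)
rng = np.random.default_rng(0)
# Synthetic sphere-plane vdW force F(D) = -A*R/(6*D^2) with 2% noise.
F = -A_true * R / (6 * D**2) * (1 + 0.02 * rng.standard_normal(D.size))

# Linear least squares: F = A * x with x = -R/(6 D^2), so A = <x,F>/<x,x>.
x = -R / (6 * D**2)
A_fit = np.dot(x, F) / np.dot(x, x)
print(f"fitted Hamaker constant A = {A_fit:.3e} J")
```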

  10. Accuracy and reliability of observational gait analysis data: judgments of push-off in gait after stroke.

    PubMed

    McGinley, Jennifer L; Goldie, Patricia A; Greenwood, Kenneth M; Olney, Sandra J

    2003-02-01

    Physical therapists routinely observe gait in clinical practice. The purpose of this study was to determine the accuracy and reliability of observational assessments of push-off in gait after stroke. Eighteen physical therapists and 11 subjects with hemiplegia following a stroke participated in the study. Measurements of ankle power generation were obtained from subjects following stroke using a gait analysis system. Concurrent videotaped gait performances were observed by the physical therapists on 2 occasions. Ankle power generation at push-off was scored as either normal or abnormal using two 11-point rating scales. These observational ratings were correlated with the measurements of peak ankle power generation. A high correlation was obtained between the observational ratings and the measurements of ankle power generation (mean Pearson r=.84). Interobserver reliability was moderately high (mean intraclass correlation coefficient [ICC (2,1)]=.76). Intraobserver reliability also was high, with a mean ICC (2,1) of .89 obtained. Physical therapists were able to make accurate and reliable judgments of push-off in videotaped gait of subjects following stroke using observational assessment. Further research is indicated to explore the accuracy and reliability of data obtained with observational gait analysis as it occurs in clinical practice.

  11. A polarized light microscopy method for accurate and reliable grading of collagen organization in cartilage repair.

    PubMed

    Changoor, A; Tran-Khanh, N; Méthot, S; Garon, M; Hurtig, M B; Shive, M S; Buschmann, M D

    2011-01-01

    Collagen organization, a feature that is critical for cartilage load bearing and durability, is not adequately assessed in cartilage repair tissue by present histological scoring systems. Our objectives were to develop a new polarized light microscopy (PLM) score for collagen organization and to test its reliability. This PLM score uses an ordinal scale of 0-5 to rate the extent that collagen network organization resembles that of young adult hyaline articular cartilage (score of 5) vs a totally disorganized tissue (score of 0). Inter-reader reliability was assessed using Intraclass Correlation Coefficients (ICC) for Agreement, calculated from scores of three trained readers who independently evaluated blinded sections obtained from normal (n=4), degraded (n=2) and repair (n=22) human cartilage biopsies. The PLM score succeeded in distinguishing normal, degraded and repair cartilages, where the latter displayed greater complexity in collagen structure. Excellent inter-reader reproducibility was found with ICCs for Agreement of 0.90 [ICC(2,1)] (lower boundary of the 95% confidence interval is 0.83) and 0.96 [ICC(2,3)] (lower boundary of the 95% confidence interval is 0.94), indicating the reliability of a single reader's scores and the mean of all three readers' scores, respectively. This PLM method offers a novel means for systematically evaluating collagen organization in repair cartilage. We propose that it be used to supplement current gold standard histological scoring systems for a more complete assessment of repair tissue quality. Copyright © 2010 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  12. Glucose Meters: A Review of Technical Challenges to Obtaining Accurate Results

    PubMed Central

    Tonyushkina, Ksenia; Nichols, James H.

    2009-01-01

    …, anemia, hypotension, and other disease states. This article reviews the challenges involved in obtaining accurate glucose meter results. PMID:20144348

  13. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm to specifically address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score could reach 0.0767 and 0.0402, which is enhanced by 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms could obtain accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
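
    The notion of direction-dependent similarity can be illustrated with a simple asymmetric overlap measure, s(i→j) = |Γ(i)∩Γ(j)|/|Γ(i)|, which normalises by the source user's degree only; this illustrates the idea and is not the paper's exact HDCF formula.

```python
import numpy as np

# Toy user-item selection matrix (rows = users, columns = items).
ratings = np.array([[1, 1, 1, 1, 0],    # large-degree user
                    [1, 1, 0, 0, 0],    # small-degree user
                    [0, 1, 1, 0, 1]])
overlap = ratings @ ratings.T           # co-selected item counts
degree = ratings.sum(axis=1)
directed_sim = overlap / degree[:, None]   # row i holds s(i -> j)

# Small-degree -> large-degree similarity exceeds the reverse direction:
print(directed_sim[1, 0], directed_sim[0, 1])   # prints 1.0 0.5
```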

  14. Third-Order Incremental Dual-Basis Set Zero-Buffer Approach: An Accurate and Efficient Way To Obtain CCSD and CCSD(T) Energies.

    PubMed

    Zhang, Jun; Dolg, Michael

    2013-07-09

    An efficient way to obtain accurate CCSD and CCSD(T) energies for large systems, i.e., the third-order incremental dual-basis set zero-buffer approach (inc3-db-B0), has been developed and tested. This approach combines the powerful incremental scheme with the dual-basis set method and, along with the newly proposed K-means clustering (KM) method and zero-buffer (B0) approximation, can obtain very accurate absolute and relative energies efficiently. We tested the approach for 10 systems of different chemical nature, i.e., intermolecular interactions including hydrogen bonding, dispersion interaction, and halogen bonding; an intramolecular rearrangement reaction; aliphatic and conjugated hydrocarbon chains; three compact covalent molecules; and a water cluster. The results show that the errors are <1.94 kJ/mol (or 0.46 kcal/mol) for relative energies and <0.0026 hartree for absolute energies. By parallelization, our approach can be applied to molecules of more than 30 atoms and more than 100 correlated electrons with a high-quality basis set such as cc-pVDZ or cc-pVTZ, saving computational cost by a factor of more than 10-20 compared to a traditional implementation. The physical reasons for the success of the inc3-db-B0 approach are also analyzed.

  15. A Bayesian-Based EDA Tool for Nano-circuits Reliability Calculations

    NASA Astrophysics Data System (ADS)

    Ibrahim, Walid; Beiu, Valeriu

    As the sizes of (nano-)devices are aggressively scaled deep into the nanometer range, the design and manufacturing of future (nano-)circuits will become extremely complex and inevitably will introduce more defects while their functioning will be adversely affected by transient faults. Therefore, accurately calculating the reliability of future designs will become a very important aspect for (nano-)circuit designers as they investigate several design alternatives to optimize the trade-offs between the conflicting metrics of area-power-energy-delay versus reliability. This paper introduces a novel generic technique for the accurate calculation of the reliability of future nano-circuits. Our aim is to provide both educational and research institutions (as well as the semiconductor industry at a later stage) with an accurate and easy-to-use tool for closely comparing the reliability of different design alternatives, and for being able to easily select the design that best fits a set of given (design) constraints. Moreover, the reliability model generated by the tool should empower designers with the unique opportunity of understanding the influence individual gates have on the design’s overall reliability, and identifying those (few) gates which impact the design’s reliability most significantly.

  16. Accurate structure, thermodynamics and spectroscopy of medium-sized radicals by hybrid Coupled Cluster/Density Functional Theory approaches: the case of phenyl radical

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Egidi, Franco; Puzzarini, Cristina

    2015-01-01

    The CCSD(T) model coupled with extrapolation to the complete basis-set limit and additive approaches represents the “gold standard” for the structural and spectroscopic characterization of building blocks of biomolecules and nanosystems. However, when open-shell systems are considered, additional problems related to both specific computational difficulties and the need to obtain spin-dependent properties appear. In this contribution, we present a comprehensive study of the molecular structure and spectroscopic (IR, Raman, EPR) properties of the phenyl radical with the aim of validating an accurate computational protocol able to deal with conjugated open-shell species. We succeeded in obtaining reliable and accurate results, thus confirming and, partly, extending the available experimental data. The main issue to be pointed out is the need to go beyond the CCSD(T) level by including a full treatment of triple excitations in order to fulfil the accuracy requirements. On the other hand, the reliability of density functional theory in properly treating open-shell systems has been further confirmed. PMID:23802956

  17. Photogrammetry: an accurate and reliable tool to detect thoracic musculoskeletal abnormalities in preterm infants.

    PubMed

    Davidson, Josy; dos Santos, Amelia Miyashiro N; Garcia, Kessey Maria B; Yi, Liu C; João, Priscila C; Miyoshi, Milton H; Goulart, Ana Lucia

    2012-09-01

    To analyse the accuracy and reproducibility of photogrammetry in detecting thoracic abnormalities in infants born prematurely. Cross-sectional study. The Premature Clinic at the Federal University of São Paulo. Fifty-eight infants born prematurely in their first year of life. Measurement of the manubrium/acromion/trapezius angle (degrees) and the deepest thoracic retraction (cm). Digitised photographs were analysed by two blinded physiotherapists using a computer program (SAPO; http://SAPO.incubadora.fapesp.br) to detect shoulder elevation and thoracic retraction. Physical examinations performed independently by two physiotherapists were used to assess the accuracy of the new tool. Thoracic alterations were detected in 39 (67%) and in 40 (69%) infants by Physiotherapists 1 and 2, respectively (kappa coefficient=0.80). Using a receiver operating characteristic curve, measurement of the manubrium/acromion/trapezius angle and the deepest thoracic retraction indicated accuracy of 0.79 and 0.91, respectively. For measurement of the manubrium/acromion/trapezius angle, the Bland and Altman limits of agreement were -6.22 to 7.22° [mean difference (d)=0.5] for repeated measures by one physiotherapist, and -5.29 to 5.79° (d=0.75) between two physiotherapists. For thoracic retraction, the intra-rater limits of agreement were -0.14 to 0.18cm (d=0.02) and the inter-rater limits of agreement were -0.20 to -0.17cm (d=0.02). SAPO provided an accurate and reliable tool for the detection of thoracic abnormalities in preterm infants. Copyright © 2011 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  18. Reliability of Leg and Vertical Stiffness During High Speed Treadmill Running.

    PubMed

    Pappas, Panagiotis; Dallas, Giorgos; Paradisis, Giorgos

    2017-04-01

    In research, the accurate and reliable measurement of leg and vertical stiffness could contribute to valid interpretations. The current study aimed at determining the intraparticipant variability (ie, intraday and interday reliabilities) of leg and vertical stiffness, as well as related parameters, during high speed treadmill running, using the "sine-wave" method. Thirty-one males ran on a treadmill at 6.67 m·s⁻¹, and the contact and flight times were measured. To determine the intraday reliability, three 10-s running bouts with 10-min recovery were performed. In addition, to examine the interday reliability, three 10-s running bouts on 3 separate days with 48-h interbout intervals were performed. The reliability statistics included repeated-measure analysis of variance, average intertrial correlations, intraclass correlation coefficients (ICCs), Cronbach's α reliability coefficient, and the coefficient of variation (CV%). Both intraday and interday reliabilities were high for leg and vertical stiffness (ICC > 0.939 and CV < 4.3%), as well as related variables (ICC > 0.934 and CV < 3.9%). It was thus inferred that the measurements of leg and vertical stiffness, as well as the related parameters obtained using the "sine-wave" method during treadmill running at 6.67 m·s⁻¹, were highly reliable, both within and across days.
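
    The "sine-wave" method estimates both stiffnesses from body mass, speed, contact time, flight time and leg length by modelling the stance-phase vertical force as a half sine (Morin et al., 2005). A sketch of those estimates with invented inputs, not the study's data:

```python
import math

def sine_wave_stiffness(mass, speed, t_contact, t_flight, leg_length):
    """Half-sine model of stance-phase force (Morin et al., 2005 style)."""
    g = 9.81
    # Peak vertical force from the flight/contact time ratio.
    f_max = mass * g * (math.pi / 2) * (t_flight / t_contact + 1)
    # Downward centre-of-mass displacement (magnitude) under the half sine.
    dy = f_max * t_contact**2 / (mass * math.pi**2) - g * t_contact**2 / 8
    k_vert = f_max / dy                                      # vertical stiffness, N/m
    # Leg compression adds the geometric shortening of the stance leg.
    dl = leg_length - math.sqrt(leg_length**2 - (speed * t_contact / 2) ** 2) + dy
    k_leg = f_max / dl                                       # leg stiffness, N/m
    return k_vert, k_leg

# Illustrative runner: 75 kg, 6.67 m/s, 160 ms contact, 120 ms flight, 0.95 m leg.
k_vert, k_leg = sine_wave_stiffness(mass=75.0, speed=6.67,
                                    t_contact=0.16, t_flight=0.12, leg_length=0.95)
print(f"k_vert = {k_vert/1000:.1f} kN/m, k_leg = {k_leg/1000:.1f} kN/m")
```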

  19. Reproducibility and interoperator reliability of obtaining images and measurements of the cervix and uterus with brachytherapy treatment applicators in situ using transabdominal ultrasound.

    PubMed

    van Dyk, Sylvia; Garth, Margaret; Oates, Amanda; Kondalsamy-Chennakesavan, Srinivas; Schneider, Michal; Bernshaw, David; Narayan, Kailash

    2016-01-01

    To validate interoperator reliability of brachytherapy radiation therapists (RTs) in obtaining an ultrasound image and measuring the cervix and uterine dimensions using transabdominal ultrasound. Patients who underwent MRI with applicators in situ after the first insertion were included in the study. Imaging was performed by three RTs (RT1, RT2, and RT3) with varying degrees of ultrasound experience. All RTs were required to obtain a longitudinal planning image depicting the applicator in the uterine canal and measure the cervix and uterus. The MRI scan, taken 1 hour after the ultrasound, was used as the reference standard against which all measurements were compared. Measurements were analyzed with intraclass correlation coefficient and Bland-Altman plots. All RTs were able to obtain a suitable longitudinal image for each patient in the study. Mean differences (SD) between MRI and ultrasound measurements obtained by RTs ranged from 3.5 (3.6) to 4.4 (4.23) mm and 0 (3.0) to 0.9 (2.5) mm on the anterior and posterior surface of the cervix, respectively. Intraclass correlation coefficient for absolute agreement between MRI and RTs was >0.9 for all posterior measurement points in the cervix and ranged from 0.41 to 0.92 on the anterior surface. Measurements were not statistically different between RTs at any measurement point. RTs with variable training attained high levels of interoperator reliability when using transabdominal ultrasound to obtain images and measurements of the uterus and cervix with brachytherapy applicators in situ. Access to training and use of a well-defined protocol assist in achieving these high levels of reliability. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  20. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L ∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
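
    The projection step at the heart of these methods can be sketched on a periodic box, where FFTs sidestep the boundary-condition issues that are the paper's actual subject: solve a Poisson equation for the divergence of the intermediate velocity, then subtract the resulting gradient to obtain a divergence-free field.

```python
import numpy as np

n = 64
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)      # wavenumbers on the unit torus
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                    # avoid 0/0; mean of phi is arbitrary

rng = np.random.default_rng(1)
uh = np.fft.fft2(rng.standard_normal((n, n)))     # intermediate velocity u* (spectral)
vh = np.fft.fft2(rng.standard_normal((n, n)))

div_h = 1j * kx * uh + 1j * ky * vh               # div(u*) in spectral space
phi_h = div_h / (-k2)                             # solve  lap(phi) = div(u*)
phi_h[0, 0] = 0.0
uh, vh = uh - 1j * kx * phi_h, vh - 1j * ky * phi_h   # u = u* - grad(phi)

div_after = np.fft.ifft2(1j * kx * uh + 1j * ky * vh).real
print("max |div u| after projection:", np.abs(div_after).max())
```

    On a periodic grid this projection annihilates the divergence to machine precision; the paper's analysis concerns the harder question of keeping second-order accuracy near physical boundaries.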

  1. A Bayesian modification to the Jelinski-Moranda software reliability growth model

    NASA Technical Reports Server (NTRS)

    Littlewood, B.; Sofer, A.

    1983-01-01

    The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood method (ML) of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian-Jelinski-Moranda model (BJM) is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in an assumption underlying both models, intended to represent the debugging process more accurately, is discussed.
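
    For context, the JM model assumes the hazard between failures i-1 and i is φ(N-i+1), proportional to the number of residual faults; the ML step whose instability motivates the Bayesian variant can be sketched by profiling the likelihood over N. The interfailure times below are invented.

```python
import math

def jm_loglik(N, times):
    """Profile log-likelihood of the JM model for initial fault count N,
    with the failure rate phi maximised analytically for that N."""
    n = len(times)
    remaining = [N - i for i in range(n)]                   # faults left: N, N-1, ...
    phi = n / sum(r * t for r, t in zip(remaining, times))  # ML estimate of phi given N
    return sum(math.log(phi * r) - phi * r * t for r, t in zip(remaining, times))

# Invented interfailure times showing reliability growth (later gaps longer).
times = [7, 11, 8, 10, 15, 22, 20, 25, 30, 35]
best_N = max(range(len(times), 200), key=lambda N: jm_loglik(N, times))
print("profile-ML estimate of initial fault count N:", best_N)
```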

  2. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  3. Toward Obtaining Reliable Particulate Air Quality Information from Satellites

    NASA Astrophysics Data System (ADS)

    Strawa, A. W.; Chatfield, R. B.; Legg, M.; Esswein, R.; Justice, E.

    2009-12-01

    Air quality agencies use ground sites to monitor air quality, providing accurate information at particular points. Using measurements from satellite imagery has the potential to provide air quality information in a timely manner with better spatial resolution and at a lower cost, and such measurements can also be useful for model validation. While previous studies show acceptable correlations between Aerosol Optical Depth (AOD) derived from MODIS and surface Particulate Matter (PM) measurements in the eastern US, the data do not correlate well in the western US (Al-Saadi et al., 2005; Engle-Cox et al., 2004). This paper seeks to improve the AOD-PM correlations by using advanced statistical analysis techniques. Our study area is the San Joaquin Valley in California because air quality in this region has failed to meet state and federal attainment standards for PM for the past several years. A previous investigation found good correlation of the AOD values between MODIS, MISR and AERONET, but poor correlations (R2 ~ 0.02) between satellite-based AOD and surface PM2.5 measurements. PM2.5 measurements correlated somewhat better (R2 ~ 0.18) with MODIS-derived AOD using the Deep Blue surface reflectance algorithm (Hsu et al., 2006) rather than the standard MODIS algorithm. This level of correlation is not adequate for reliable air quality measurements. Pelletier et al. (2007) used generalized additive models (GAMs) and meteorological data to improve the correlation between PM and AERONET AOD in western Europe. Additive models are more flexible than linear models, and the functional relationships can be plotted to give a sense of the relationship between the predictor and the response. In this paper we use GAMs to improve surface PM2.5 to MODIS-AOD correlations. For example, we achieve an R2 ~ 0.44 using a GAM that includes the Deep Blue AOD and day of year as parameters. Including NOx observations improves the R2 to ~ 0.64. Surprisingly, the Ångström exponent did not prove to be a significant

  4. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    NASA Astrophysics Data System (ADS)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    A reliability model for a high temperature and high pressure multi-stage decompression control valve (HMDCV) was established based on stress-strength interference theory, and a temperature correction coefficient was introduced to revise the material fatigue limit at high temperature. The reliability of key danger-prone components and the fatigue sensitivity curve of each component were calculated and analyzed by this means, combining the fatigue life analysis of the control valve with reliability theory. The impact proportion of each component on the fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts accords with the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; the stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
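
    The stress-strength interference idea underlying the model reduces, for normally distributed stress and strength, to R = Φ((μ_strength − μ_stress)/√(σ_strength² + σ_stress²)); a sketch with invented component values, not the valve study's data:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent normal stress and strength."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF at z

# e.g. strength ~ N(520, 40^2) MPa vs stress ~ N(400, 30^2) MPa (invented):
print(f"R = {interference_reliability(520.0, 40.0, 400.0, 30.0):.4f}")
```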

  5. Reliability of tristimulus colourimetry in the assessment of cutaneous bruise colour.

    PubMed

    Scafide, Katherine N; Sheridan, Daniel J; Taylor, Laura A; Hayat, Matthew J

    2016-06-01

    Bruising is one of the most common types of injury clinicians observe among victims of violence and other trauma patients. However, research has shown commonly used qualitative description of cutaneous bruise colour via the naked eye is subjective and unreliable. No published work has formally evaluated the reliability of tristimulus colourimetry as an alternative for assessing bruise colour, despite its clinical and research applications in accurately assessing skin colour. The purpose of this study was to systematically evaluate the test-retest and inter-observer reliability of tristimulus colourimetry in the assessment of cutaneous bruise colour. Two researchers obtained repeated tristimulus colourimetry measures of cutaneous bruises with participants of diverse skin colour. Measures were obtained using the Minolta CR-400 Chromameter. Commission Internationale d'Eclairage (CIE) L*a*b* colour space was used. Data were analysed using intraclass correlation coefficients (ICC), Cronbach's alpha, and minimal detectable change (MDC) on all three L*a*b* values. The colorimeter demonstrated excellent test-retest or intra-rater reliability (L* ICC=0.999; a* ICC=0.973; b* ICC=0.892) and inter-rater reliability (L* ICC=0.997; a* ICC=0.976; b* ICC=0.982). With consistent placement, tristimulus colourimetry is reliable for the objective assessment and documentation of cutaneous bruise colour for purposes of clinical practice and research. Recommendations for use in practice/research are provided. Copyright © 2016 Elsevier Ltd. All rights reserved.
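
    Reliability coefficients like these are usually paired with a minimal detectable change; a sketch of the standard SEM-based computation, using the abstract's L* test-retest ICC but an invented standard deviation:

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence from an ICC."""
    sem = sd * math.sqrt(1 - icc)        # standard error of measurement
    return 1.96 * math.sqrt(2) * sem

# e.g. CIE L* readings with an assumed SD of 4.0 and the reported ICC of 0.999:
print(f"MDC95 = {mdc95(4.0, 0.999):.3f} L* units")
```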

  6. Reliability of the nursing care hour measure: a descriptive study.

    PubMed

    Klaus, Susan F; Dunton, Nancy; Gajewski, Byron; Potter, Catima

    2013-07-01

    The nursing care hour (NCH) has become an international standard unit of measure in research where nurse staffing is a key variable. Until now, there have been no studies verifying whether nursing care hours obtained from hospital data sources can be collected reliably. To examine the processes used by hospitals to generate nursing care hour data and to evaluate inter-rater reliability and guideline compliance with standards of the National Database of Nursing Quality Indicators(®) (NDNQI(®)) and the National Quality Forum. Two-phase descriptive study of all NDNQI hospitals that submitted data in the third quarter of 2007. Data for phase I came from an online survey created by the authors to ascertain the processes used by hospitals to collect nursing care hours and their compliance with standardized data collection guidelines. In phase II, inter-rater reliability was measured using intra-class correlations between nursing care hours generated from clock hour files submitted to the study team by participants' payroll/accounting departments and aggregated data submitted previously. Phase I data were obtained from a total of 714 respondents. Nearly half (48%) of all sites use payroll records to obtain nursing care hour data and 70% use one of the standardized methods for converting the bi-weekly hours into months. Unit secretaries were reportedly included in NCH by 17.4% of respondents and only 26.2% of sites could accurately identify the point at which newly hired nurses should be included. The phase II findings (n=11) support the ability of two independent raters to obtain similar results when calculating total nursing care hours according to standard guidelines (ICC=0.76-0.99). Although barriers exist, this study found support for hospitals' abilities to collect reliable nursing care hour data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Accurate derivation of heart rate variability signal for detection of sleep disordered breathing in children.

    PubMed

    Chatlapalli, S; Nazeran, H; Melarkod, V; Krishnam, R; Estrada, E; Pamula, Y; Cabrera, S

    2004-01-01

    The electrocardiogram (ECG) signal is used extensively as a low cost diagnostic tool to provide information concerning the heart's state of health. Accurate determination of the QRS complex, in particular, reliable detection of the R wave peak, is essential in computer based ECG analysis. ECG data from Physionet's Sleep-Apnea database were used to develop, test, and validate a robust heart rate variability (HRV) signal derivation algorithm. The HRV signal was derived from pre-processed ECG signals by developing an enhanced Hilbert transform (EHT) algorithm with built-in missing beat detection capability for reliable QRS detection. The performance of the EHT algorithm was then compared against that of a popular Hilbert transform-based (HT) QRS detection algorithm. Autoregressive (AR) modeling of the HRV power spectrum for both EHT- and HT-derived HRV signals was achieved and different parameters from their power spectra as well as approximate entropy were derived for comparison. Poincaré plots were then used as a visualization tool to highlight the detection of the missing beats in the EHT method. After validation of the EHT algorithm on ECG data from Physionet, the algorithm was further tested and validated on a dataset obtained from children undergoing polysomnography for detection of sleep disordered breathing (SDB). Sensitive measures of accurate HRV signals were then derived to be used in detecting and diagnosing sleep disordered breathing in children. All signal processing algorithms were implemented in MATLAB. We present a description of the EHT algorithm and analyze pilot data for eight children undergoing nocturnal polysomnography. The pilot data demonstrated that the EHT method provides an accurate way of deriving the HRV signal and plays an important role in extraction of reliable measures to distinguish between periods of normal and sleep disordered breathing (SDB) in children.
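
    The abstract does not reproduce the EHT algorithm itself, but the missing-beat idea it mentions can be illustrated with a short, self-contained sketch: once R-peak times are available, an RR interval that far exceeds the local median usually indicates a dropped detection. The peak times, threshold factor, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' EHT algorithm): derive RR intervals
# from R-peak times and flag likely missed beats where an interval is much
# longer than the median. The 1.5x threshold is a common heuristic choice.

from statistics import median

def rr_intervals(peak_times):
    """RR intervals (s) from a sorted list of R-peak times (s)."""
    return [t2 - t1 for t1, t2 in zip(peak_times, peak_times[1:])]

def flag_missed_beats(rr, factor=1.5):
    """Indices of RR intervals that likely span a missed beat.

    A dropped QRS detection roughly doubles the local interval, so any
    interval longer than `factor` times the median RR is flagged.
    """
    med = median(rr)
    return [i for i, interval in enumerate(rr) if interval > factor * med]

# Example: steady 0.8 s rhythm with one detection dropped (a 1.6 s gap).
peaks = [0.0, 0.8, 1.6, 3.2, 4.0, 4.8]
print(flag_missed_beats(rr_intervals(peaks)))  # → [2]
```

    In a full pipeline the flagged interval would be split (or interpolated) before spectral analysis, since a single doubled interval distorts the HRV power spectrum.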

  8. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  9. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  10. Establishing inter-rater reliability scoring in a state trauma system.

    PubMed

    Read-Allsopp, Christine

    2004-01-01

    Trauma systems rely on accurate Injury Severity Scoring (ISS) to describe trauma patient populations. Twenty-seven (27) Trauma Nurse Coordinators and Data Managers across the New South Wales, Australia, state trauma network were instructed in the uses and techniques of the Abbreviated Injury Scale (AIS) from the Association for the Advancement of Automotive Medicine. The aim is to provide accurate, reliable and valid data for the state trauma network. Four (4) months after the course, a coding exercise was conducted to assess inter-rater reliability. The results show that inter-rater reliability is within accepted international standards.

  11. Towards more accurate and reliable predictions for nuclear applications

    NASA Astrophysics Data System (ADS)

    Goriely, Stephane; Hilaire, Stephane; Dubray, Noel; Lemaître, Jean-François

    2017-09-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. Nowadays mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenological inputs in the evaluation of nuclear data. The latest achievements to determine nuclear masses within the non-relativistic HFB approach, including the related uncertainties in the model predictions, are discussed. Similarly, recent efforts to determine fission observables within the mean-field approach are described and compared with more traditional existing models.

  12. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  13. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
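
    As a rough illustration of the accuracy-in-parameter-estimation idea (not the authors' composite-reliability method, which targets reliability coefficients directly), one can search for the smallest sample size whose expected 95% confidence interval for a correlation-type coefficient, formed via the Fisher z approximation, falls below a target width W. All names and numeric inputs below are illustrative assumptions.

```python
# Hedged AIPE-style sketch: plan the smallest n whose expected 95% CI
# width for a correlation-type coefficient (Fisher z approximation)
# is narrower than a target width W.

import math

def ci_width(rho, n, z_crit=1.959964):
    """Expected 95% CI width at sample size n, via the Fisher z transform."""
    z = math.atanh(rho)                  # transform to the z scale
    half = z_crit / math.sqrt(n - 3)     # half-width on the z scale
    return math.tanh(z + half) - math.tanh(z - half)  # width on rho scale

def plan_n(rho, target_width):
    """Smallest n whose expected CI width falls below target_width."""
    n = 5
    while ci_width(rho, n) >= target_width:
        n += 1
    return n

n = plan_n(rho=0.85, target_width=0.10)
print(n, round(ci_width(0.85, n), 4))
```

    The second method in the abstract (width narrow with a stated degree of assurance) would additionally inflate n to cover sampling variability in the width itself, which this sketch omits.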

  14. Is computed tomography an accurate and reliable method for measuring total knee arthroplasty component rotation?

    PubMed

    Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario

    2016-04-01

    Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.
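
    The abstract does not state which ICC form was used; as a hedged illustration, the commonly reported ICC(2,1) (two-way random effects, absolute agreement, single measurement) can be computed in a few lines of pure Python. The rating matrix below is invented for the example.

```python
# Minimal ICC(2,1) sketch: two-way random effects, absolute agreement,
# single measurement. One common ICC variant; the study's exact form
# is not specified in the abstract.

def icc_2_1(ratings):
    """ratings: list of subjects, each a list of k scores (one per rater)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    rater_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    msr = ss_subj / (n - 1)                       # between-subjects MS
    msc = ss_rater / (k - 1)                      # between-raters MS
    mse = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(icc_2_1([[1, 2], [2, 1], [3, 3]]))  # ≈ 0.6 (moderate agreement)
```

    With the conventional rules of thumb, the femoral ICCs of 0.8 and 0.76 reported above indicate good reliability, and perfect agreement drives the coefficient to 1.0.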

  15. Is self-reported height or arm span a more accurate alternative measure of height?

    PubMed

    Brown, Jean K; Feng, Jui-Ying; Knapp, Thomas R

    2002-11-01

    The purpose of this study was to determine whether self-reported height or arm span is the more accurate alternative measure of height. A sample of 409 people between the ages of 19 and 67 (M = 35.0) participated in this anthropometric study. Height, self-reported height, and arm span were measured by 82 nursing research students. Mean differences from criterion measures were 0.17 cm for the measuring rules, 0.47 cm for arm span, and 0.85 cm and 0.87 cm for heights. Test-retest reliability was r = .997 for both height and arm span. The relationships of height to self-reported height and arm span were r = .97 and .90, respectively. Mean absolute differences were 1.80 cm and 4.29 cm, respectively. These findings support the practice of using self-reported height as an alternative measure of measured height in clinical settings, but arm span is an accurate alternative when neither measured height nor self-reported height is obtainable.

  16. As reliable as the sun

    NASA Astrophysics Data System (ADS)

    Leijtens, J. A. P.

    2017-11-01

    Fortunately there is almost nothing as reliable as the sun, which can consequently be utilized as a very reliable source of spacecraft power. In order to harvest this power, the solar panels have to be pointed towards the sun as accurately and reliably as possible. To this end, sun sensors are available on almost every satellite to support vital sun-pointing capability throughout the mission, even in the deployment and safe mode phases of the satellite's life. Given the criticality of the application, one would expect that after more than 50 years of sun sensor utilisation, such sensors would be fully matured and optimised. In actual fact though, the majority of sun sensors employed are still coarse sun sensors which have a proven extreme reliability but present major issues regarding albedo sensitivity and pointing accuracy.

  17. The number of measurements needed to obtain high reliability for traits related to enzymatic activities and photosynthetic compounds in soybean plants infected with Phakopsora pachyrhizi.

    PubMed

    Oliveira, Tássia Boeno de; Azevedo Peixoto, Leonardo de; Teodoro, Paulo Eduardo; Alvarenga, Amauri Alves de; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann

    2018-01-01

    Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3×2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and the number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and a mixed model. It was observed that the principal component analysis based on the covariance matrix outperformed other methods for almost all traits. Significant differences were observed for all traits except internal CO2 concentration for the treatment effects. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment × Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability.
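
    None of the study's estimation methods (ANOVA, principal components, structural analysis, mixed models) is reproduced in the abstract; the standard Spearman-Brown style prophecy below merely illustrates the underlying link between a single-measurement repeatability r and the reliability of the mean of m measurements. The inputs r = 0.3 and target 0.7 are illustrative, not taken from the study.

```python
# Hedged sketch of the repeatability-to-number-of-measurements link
# (Spearman-Brown style prophecy), with illustrative inputs.

import math

def reliability_of_mean(r, m):
    """Coefficient of determination for the mean of m measurements."""
    return m * r / (1 + (m - 1) * r)

def measurements_needed(r, target):
    """Smallest m such that the mean of m measurements reaches target."""
    return math.ceil(target * (1 - r) / (r * (1 - target)))

m = measurements_needed(r=0.3, target=0.7)
print(m, round(reliability_of_mean(0.3, m), 3))  # → 6 0.72
```

    The intuition matches the abstract's conclusion: a modest single-measurement repeatability can still yield a coefficient of determination above 0.7 once enough repeated measurements are averaged.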

  18. The number of measurements needed to obtain high reliability for traits related to enzymatic activities and photosynthetic compounds in soybean plants infected with Phakopsora pachyrhizi

    PubMed Central

    de Oliveira, Tássia Boeno; Teodoro, Paulo Eduardo; de Alvarenga, Amauri Alves; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann

    2018-01-01

    Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3×2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and the number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and a mixed model. It was observed that the principal component analysis based on the covariance matrix outperformed other methods for almost all traits. Significant differences were observed for all traits except internal CO2 concentration for the treatment effects. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment × Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability. PMID:29438380

  19. Measure of Truck Delay and Reliability at the Corridor Level

    DOT National Transportation Integrated Search

    2018-04-01

    Freight transportation provides a significant contribution to our nation's economy. A reliable and accessible freight network enables businesses in the Twin Cities to be more competitive in the Upper Midwest region. Accurate and reliable freight data...

  20. Is One Trial Sufficient to Obtain Excellent Pressure Pain Threshold Reliability in the Low Back of Asymptomatic Individuals? A Test-Retest Study.

    PubMed

    Balaguier, Romain; Madeleine, Pascal; Vuillerme, Nicolas

    2016-01-01

    - and inter-session. Reliable measurements can be equally achieved when using the mean of two or three consecutive PPT measurements, as usually proposed in the literature, or with only the first one. Although reliability was almost perfect regardless of the conducted comparison between PPT assessments, our results suggest using two consecutive measurements to obtain higher short term absolute reliability.

  1. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results evidence that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model. PMID:27077855

  2. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results evidence that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model.

  3. Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch

    NASA Astrophysics Data System (ADS)

    Luo, Wenjin

    In this dissertation, several studies of electric power system reliability and economy assessment methods are presented. To be more precise, several algorithms for evaluating power system reliability and economy are studied. Furthermore, two novel algorithms are applied to this field and their simulation results are compared with conventional results. As the electrical power system develops towards extra-high voltage, long transmission distances, large capacity and regional networking, new kinds of equipment have been put into service, the electricity market system has been gradually established, and the consequences of power outages have become more and more serious. The electrical power system requires the highest possible reliability because of its complexity and its importance to security. In this dissertation the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach has several benefits: it allows complex dynamic models to be defined while maintaining the easy readability of conventional methods. The method has been applied to evaluate the IEEE reliability test system. The simulation results obtained are close to the IEEE reference data, which means the approach could be used for future studies of system reliability. Besides reliability, a modern power system is expected to be more economic. This dissertation presents a novel evolutionary algorithm, the quantum evolutionary membrane algorithm (QEPS), which combines the concepts of quantum-inspired evolutionary algorithms and membrane computing, to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests. A conventional evolutionary algorithm is also used to solve the same problem for comparison. 
The experimental results show that the proposed method is quick and accurate to obtain the optimal solution which is the minimum cost for electricity supplied by wind

  4. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. 
Therefore, the resist CD and process window can

  5. How reliable and accurate is the AO/OTA comprehensive classification for adult long-bone fractures?

    PubMed

    Meling, Terje; Harboe, Knut; Enoksen, Cathrine H; Aarflot, Morten; Arthursson, Astvaldur J; Søreide, Kjetil

    2012-07-01

    Reliable classification of fractures is important for treatment allocation and study comparisons. The overall accuracy of scoring applied to a general population of fractures is little known. This study aimed to investigate the accuracy and reliability of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for adult long-bone fractures and identify factors associated with poor coding agreement. Adults (>16 years) with long-bone fractures coded in a Fracture and Dislocation Registry at the Stavanger University Hospital during the fiscal year 2008 were included. An unblinded reference code dataset was generated for the overall accuracy assessment by two experienced orthopedic trauma surgeons. Blinded analysis of intrarater reliability was performed by rescoring and of interrater reliability by recoding of a randomly selected fracture sample. Proportion of agreement (PA) and kappa (κ) statistics are presented. Uni- and multivariate logistic regression analyses of factors predicting accuracy were performed. During the study period, 949 fractures were included and coded by 26 surgeons. For the intrarater analysis, overall agreements were κ = 0.67 (95% confidence interval [CI]: 0.64-0.70) and PA 69%. For interrater assessment, κ = 0.67 (95% CI: 0.62-0.72) and PA 69%. The accuracy of surgeons' blinded recoding was κ = 0.68 (95% CI: 0.65-0.71) and PA 68%. Fracture type, frequency of the fracture, and segment fractured significantly influenced accuracy whereas the coder's experience did not. Both the reliability and accuracy of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for long-bone fractures ranged from substantial to excellent. Variations in coding accuracy seem to be related more to the fracture itself than the surgeon. Diagnostic study, level I.
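
    For two raters over nominal fracture codes, the kappa and proportion-of-agreement (PA) statistics reported above can be computed as follows. This is the standard two-rater Cohen form; the study's exact multi-coder computation may differ, and the code lists below are invented for the example.

```python
# Standard two-rater Cohen's kappa over nominal codes, alongside the raw
# proportion of agreement (PA). Kappa corrects PA for chance agreement.

from collections import Counter

def proportion_agreement(a, b):
    """Fraction of items on which the two raters assign the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: (po - pe) / (1 - pe)."""
    n = len(a)
    po = proportion_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected chance agreement from each rater's marginal code frequencies.
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

codes_a = ["A1", "A1", "B2", "B2"]
codes_b = ["A1", "B2", "B2", "B2"]
print(cohens_kappa(codes_a, codes_b))  # → 0.5
```

    The contrast between PA (here 0.75) and kappa (0.5) mirrors the abstract's pairing of PA 69% with κ = 0.67: kappa is lower whenever part of the raw agreement is attributable to chance.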

  6. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  7. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots

    PubMed Central

    Gilbert, Hunter B.; Webster, Robert J.

    2016-01-01

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473

  8. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots.

    PubMed

    Gilbert, Hunter B; Webster, Robert J

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C.

  9. Intertester reliability of the acceptable noise level.

    PubMed

    Gordon-Hickey, Susan; Adams, Elizabeth; Moore, Robert; Gaal, Ashley; Berry, Katie; Brock, Sommer

    2012-01-01

    The acceptable noise level (ANL) serves to accurately predict the listener's likelihood of success with amplification. It has been proposed as a pre-hearing aid fitting protocol for hearing aid selection and counseling purposes. The ANL is a subjective measure of the listener's ability to accept background noise. Measurement of ANL relies on the tester and listener to follow the instructions set forth. To date, no research has explored the reliability of ANL as measured across clinicians or testers. To examine the intertester reliability of ANL. A descriptive quasi-experimental reliability study was completed. ANL was measured for one group of listeners by three testers. Three participants served as testers. Each tester was familiar with basic audiometry. Twenty-five young adults with normal hearing served as listeners. Each tester was stationed in a laboratory with the needed equipment. Listeners were instructed to report to these laboratories in a random order provided by the experimenters. The testers assessed most comfortable listening level (MCL) and background noise level (BNL) for all 25 listeners. Intraclass correlation coefficients were significant and revealed that MCL, BNL, and ANLs are reliable across testers. Additionally, one-way ANOVAs for MCL, BNL, and ANL were not significant. These findings indicate that MCL, BNL, and ANL do not differ significantly when measured by different testers. If the ANL instruction set is accurately followed, ANL can be reliably measured across testers, laboratories, and clinics. Intertester reliability of ANL allows for comparison across ANLs measured by different individuals. Findings of the present study indicate that tester reliability can be ruled out as a factor contributing to the disparity of mean ANLs reported in the literature. American Academy of Audiology.
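For readers unfamiliar with the measure: the ANL is conventionally the difference between the most comfortable listening level and the highest accepted background noise level (ANL = MCL − BNL). A minimal sketch of an intertester comparison, with invented dB values:

```python
# ANL = MCL - BNL (standard definition of the acceptable noise level).
# The dB values below are invented to illustrate an intertester comparison.

def anl(mcl_db, bnl_db):
    return mcl_db - bnl_db

# (MCL, BNL) measured for the same listener by three hypothetical testers:
measurements = {"tester_A": (65, 55), "tester_B": (66, 55), "tester_C": (64, 54)}
anls = {t: anl(mcl, bnl) for t, (mcl, bnl) in measurements.items()}
intertester_spread = max(anls.values()) - min(anls.values())
```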

  10. Purification of pharmaceutical preparations using thin-layer chromatography to obtain mass spectra with Direct Analysis in Real Time and accurate mass spectrometry.

    PubMed

    Wood, Jessica L; Steiner, Robert R

    2011-06-01

    Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Purchasing analytical standards can be expensive or unattainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time efficient and cost saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.

  11. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error, [(sonar estimate − total length)/total length] × 100, varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
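The percent-measurement-error metric defined above can be computed directly; the (sonar estimate, true length) pairs below are invented for illustration:

```python
# Percent measurement error as defined in the abstract:
# (sonar estimate - total length) / total length * 100.
# Fish lengths (cm) are invented illustration values, not study data.

def percent_error(sonar_cm, total_cm):
    return (sonar_cm - total_cm) / total_cm * 100.0

pairs = [(55.0, 60.0), (92.0, 100.0), (130.0, 141.0)]  # (sonar, true)
errors = [percent_error(s, t) for s, t in pairs]
mean_error = sum(errors) / len(errors)  # negative mean -> systematic underestimation
```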

  12. Analysis shear wave velocity structure obtained from surface wave methods in Bornova, Izmir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr; Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr

    2016-04-18

    The properties of the soil down to bedrock must be described accurately and reliably to reduce earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance difference between soil and bedrock. First, the shear wave velocity and thickness of the layers above bedrock are needed to detect this change. Shear wave velocity can be obtained by inversion of Rayleigh wave dispersion curves obtained from surface wave methods (MASW, the Multichannel Analysis of Surface Waves; ReMi, Refraction Microtremor; SPAC, Spatial Autocorrelation). While investigation depth is limited in active-source studies, passive-source methods are utilized for the greater depths that active-source methods cannot reach. The ReMi method is used to determine layer thickness and velocity down to 100 m using seismic refraction measurement systems. SPAC, which is easily applied in urban conditions that restrict other seismic surveys, allows investigation to the desired depth depending on the array radius. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh wave dispersion curves from active- and passive-source methods. In this study, surface wave data were collected using MASW, ReMi and SPAC measurements in the Bornova region of İzmir. Dispersion curves obtained from the surface wave methods were combined over a wide frequency band, and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was assessed by comparing the theoretical transfer function computed from the soil parameters with the observed soil transfer function from the Nakamura technique and by examining the fit between these functions. Vs values range between 200 and 830 m/s, and the engineering bedrock (Vs > 760 m/s) depth is approximately 150 m.
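As a rough cross-check on such soil profiles, the fundamental site frequency implied by a Vs profile is often estimated with the quarter-wavelength relation f0 = Vs/(4H). The average Vs below is an assumed value; the 150 m engineering-bedrock depth comes from the abstract.

```python
# Quarter-wavelength estimate of the fundamental site frequency: f0 = Vs / (4 H).
# The observed transfer-function peak (e.g. from the Nakamura technique) can be
# compared against this estimate. Average Vs of 400 m/s is an assumed value.

def fundamental_frequency(vs_avg_m_s, depth_m):
    return vs_avg_m_s / (4.0 * depth_m)

f0 = fundamental_frequency(400.0, 150.0)  # Hz, for a 150 m soil column
```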

  13. Can blind persons accurately assess body size from the voice?

    PubMed

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  14. Can blind persons accurately assess body size from the voice?

    PubMed Central

    Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-01-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20–65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  15. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
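DVC subset matching of the kind described is commonly scored with the zero-normalized cross-correlation (ZNCC) between a reference subvolume and a candidate deformed subvolume. The NumPy sketch below illustrates the similarity metric only, not the paper's layer-wise IC-GN implementation:

```python
import numpy as np

def zncc(ref, cur):
    """Zero-normalized cross-correlation of two equally shaped subvolumes."""
    a = ref - ref.mean()
    b = cur - cur.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(0)
ref = rng.random((9, 9, 9))              # synthetic reference subvolume
shifted = np.roll(ref, shift=1, axis=2)  # rigid 1-voxel shift as "deformation"
score_self = zncc(ref, ref)              # perfect match scores 1.0
score_shift = zncc(ref, shifted)         # decorrelated for random speckle
```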

  16. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not need all the users to report their locations all the time, and can generate a smaller ASR. PMID:24605060
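The grid-ID idea can be sketched as follows: users report only the ID of the grid cell they occupy, and the anonymizer grows a square of cells around the querier's cell until at least K reported users are covered. The cell size, coordinates, and growth rule below are all invented for illustration and are not the paper's algorithms.

```python
# Grid-ID-based cloaking sketch: the anonymizer never sees exact coordinates,
# only grid-cell IDs. All values below are invented.

def cell_of(x, y, cell=100.0):
    """Map a coordinate to its grid-cell ID (the only thing a user reports)."""
    return (int(x // cell), int(y // cell))

def cloak(querier_cell, reported_cells, k):
    """Expand a square of cells around querier_cell until >= k users fall inside."""
    cx, cy = querier_cell
    r = 0
    while True:
        inside = [c for c in reported_cells
                  if abs(c[0] - cx) <= r and abs(c[1] - cy) <= r]
        if len(inside) >= k:
            return r, len(inside)
        r += 1

cells = [cell_of(*p) for p in [(120, 80), (130, 90), (310, 95), (520, 410)]]
radius, covered = cloak(cell_of(120, 80), cells, k=3)
```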

  17. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.
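The first task, rolling tabulated parts failure rates into a system prediction, is classically done with the constant-failure-rate series model: λ_system = Σ λ_i and R(t) = exp(−λ_system · t). The rates below are invented illustration values.

```python
import math

# Three parts in series with tabulated constant failure rates (invented values):
failure_rates_per_hour = [2e-6, 5e-6, 1e-6]
lam_system = sum(failure_rates_per_hour)   # series system: rates add
mtbf_hours = 1.0 / lam_system              # mean time between failures
r_1000h = math.exp(-lam_system * 1000.0)   # probability of surviving 1000 h
```

Functional redundancy, also listed above, improves on this by letting a backup take over; the series model is the baseline it is compared against.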

  18. Accurate structural and spectroscopic characterization of prebiotic molecules: The neutral and cationic acetyl cyanide and their related species.

    PubMed

    Bellili, A; Linguerri, R; Hochlaf, M; Puzzarini, C

    2015-11-14

    In an effort to provide an accurate structural and spectroscopic characterization of acetyl cyanide, its two enolic isomers, and the corresponding cationic species, state-of-the-art computational methods and approaches have been employed. Coupled-cluster theory including single and double excitations together with a perturbative treatment of triples has been used as the starting point in composite schemes accounting for extrapolation to the complete basis-set limit as well as core-valence correlation effects to determine highly accurate molecular structures, fundamental vibrational frequencies, and rotational parameters. The available experimental data for acetyl cyanide allowed us to assess the reliability of our computations: structural, energetic, and spectroscopic properties have been obtained with an overall accuracy of about, or better than, 0.001 Å, 2 kcal/mol, 1-10 MHz, and 11 cm(-1) for bond distances, adiabatic ionization potentials, rotational constants, and fundamental vibrational frequencies, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for guiding future experimental investigations and/or astronomical observations.
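The complete-basis-set extrapolation step in such composite schemes is commonly the two-point inverse-cube formula for the correlation energy, E_CBS = (X³E_X − Y³E_Y)/(X³ − Y³) with cardinal numbers Y = X − 1. The energies below are invented round numbers, not values from this work:

```python
# Two-point inverse-cube CBS extrapolation of the correlation energy.
# Input energies (hartree) are hypothetical triple-/quadruple-zeta values.

def cbs_two_point(e_x, x, e_y, y):
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

e_cbs = cbs_two_point(e_x=-1.2050, x=4, e_y=-1.1900, y=3)
# The extrapolated value lies below both finite-basis energies, as expected
# for a monotonically converging correlation energy.
```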

  19. Ring-like reliable PON planning with physical constraints for a smart grid

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gu, Rentao; Ji, Yuefeng

    2016-01-01

    Due to the high reliability requirements of the communication networks in a smart grid, a ring-like reliable PON is an ideal choice to carry power distribution information. Economical network planning is also very important for the smart grid communication infrastructure. Although the ring-like reliable PON has been widely used in real applications, as far as we know, little research has been done on the network optimization of the ring-like reliable PON. Most PON planning studies consider only a star-like topology or cascaded PON network, which barely guarantees the reliability requirements of the smart grid. In this paper, we investigate the economical network planning problem for the ring-like reliable PON of the smart grid. To address this issue, we built a mathematical model for the planning problem of the ring-like reliable PON, with the objective of minimizing the total deployment costs under physical constraints. The model is simplified such that all of the nodes have the same properties, except the OLT, because each potential splitter site can be located at the same ONU position in power communication networks. The simplified model is used to construct an optimal main tree topology in the complete graph and a backup-protected tree topology in the residual graph. An efficient heuristic algorithm, called the Constraints and Minimal Weight Oriented Fast Searching Algorithm (CMW-FSA), is proposed. In CMW-FSA, a feasible solution can be obtained directly with oriented constraints and a few recursive search processes. From the simulation results, the proposed planning model and CMW-FSA are verified to be accurate (the error rates are less than 0.4%) and effective compared with the accurate solution (CAESA), especially in small and sparse scenarios. The CMW-FSA significantly reduces the computation time compared with the CAESA. The time complexity of the CMW-FSA is acceptable, at T(n) = O(n³).
After evaluating the
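A generic flavor of such ring planning (not CMW-FSA itself) can be illustrated with a nearest-neighbour ring construction through all sites and back to the OLT; the coordinates and cost model below are invented:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_ring(olt, sites):
    """Greedy ring: start at the OLT, repeatedly visit the nearest unvisited site,
    then close the loop back at the OLT. A generic heuristic, not CMW-FSA."""
    tour, remaining = [olt], list(sites)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(tour[-1], s))
        remaining.remove(nxt)
        tour.append(nxt)
    tour.append(olt)  # close the ring
    cost = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    return tour, cost

ring, cost = nearest_neighbour_ring((0, 0), [(0, 3), (4, 3), (4, 0)])
```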

  20. Reliability of Maximal Strength Testing in Novice Weightlifters

    NASA Technical Reports Server (NTRS)

    Loehr, James A.; Lee, Stuart M. C.; Feiveson, Alan H.; Ploutz-Snyder, Lori L.

    2009-01-01

    The one repetition maximum (1RM) is a criterion measure of muscle strength. However, the reliability of 1RM testing in novice subjects has received little attention. Understanding this information is crucial to accurately interpret changes in muscle strength. To evaluate the test-retest reliability of a squat (SQ), heel raise (HR), and deadlift (DL) 1RM in novice subjects. Twenty healthy males (31 ± 5 y, 179.1 ± 6.1 cm, 81.4 ± 10.6 kg) with no weight training experience in the previous six months participated in four 1RM testing sessions, with each session separated by 5-7 days. SQ and HR 1RM were conducted using a Smith machine; DL 1RM was assessed using free weights. Session 1 was considered a familiarization and was not included in the statistical analyses. Repeated measures analysis of variance with Tukey's post-hoc tests were used to detect between-session differences in 1RM (p ≤ 0.05). Test-retest reliability was evaluated by intraclass correlation coefficients (ICC). During Session 2, the SQ and DL 1RM (SQ: 90.2 ± 4.3, DL: 75.9 ± 3.3 kg) were less than Session 3 (SQ: 95.3 ± 4.1, DL: 81.5 ± 3.5 kg) and Session 4 (SQ: 96.6 ± 4.0, DL: 82.4 ± 3.9 kg), but there were no differences between Session 3 and Session 4. HR 1RM measured during Session 2 (150.1 ± 3.7 kg) and Session 3 (152.5 ± 3.9 kg) were not different from one another, but both were less than Session 4 (157.5 ± 3.8 kg). The reliability (ICC) of 1RM measures for Sessions 2-4 were 0.88, 0.83, and 0.87 for SQ, HR, and DL, respectively. When considering only Sessions 3 and 4, the reliability was 0.93, 0.91, and 0.86 for SQ, HR, and DL, respectively. One familiarization session and 2 test sessions (for SQ and DL) were required to obtain excellent reliability (ICC greater than or equal to 0.90) in 1RM values with novice subjects.
We were unable to attain this level of reliability following 3 HR testing sessions; therefore, additional sessions may be required to obtain an
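Test-retest reliability of the kind reported here is commonly quantified with ICC(3,1) from a two-way ANOVA decomposition; the sketch below uses invented 1RM values, not the study's data:

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed-effects, consistency, single measurement.
    data: subjects x sessions array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    msr = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between sessions
    sse = ((data - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical squat 1RM values (kg) for 4 subjects over two sessions:
scores = [[95.0, 96.5], [88.0, 89.0], [102.0, 104.0], [91.0, 92.0]]
icc = icc_3_1(scores)
```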

  1. Measurements using orthodontic analysis software on digital models obtained by 3D scans of plaster casts : Intrarater reliability and validity.

    PubMed

    Czarnota, Judith; Hey, Jeremias; Fuhrmann, Robert

    2016-01-01

    The purpose of this work was to determine the reliability and validity of measurements performed on digital models with a desktop scanner and analysis software in comparison with measurements performed manually on conventional plaster casts. A total of 20 pairs of plaster casts reflecting the intraoral conditions of 20 fully dentate individuals were digitized using a three-dimensional scanner (D700; 3Shape). A series of defined parameters were measured both on the resultant digital models with analysis software (Ortho Analyzer; 3Shape) and on the original plaster casts with a digital caliper (Digimatic CD-15DCX; Mitutoyo). Both measurement series were repeated twice and analyzed for intrarater reliability based on intraclass correlation coefficients (ICCs). The results from the digital models were evaluated for their validity against the casts by calculating mean-value differences and associated 95 % limits of agreement (Bland-Altman method). Statistically significant differences were identified via a paired t test. Significant differences were obtained for 16 of 24 tooth-width measurements, for 2 of 5 sites of contact-point displacement in the mandibular anterior segment, for overbite, for maxillary intermolar distance, for Little's irregularity index, and for the summation indices of maxillary and mandibular incisor width. Overall, however, both the mean differences between the results obtained on the digital models versus on the plaster casts and the dispersion ranges associated with these differences suggest that the deviations incurred by the digital measuring technique are not clinically significant. Digital models are adequately reproducible and valid to be employed for routine measurements in orthodontic practice.
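The Bland-Altman method used above reduces to the mean difference (bias) and its 95 % limits of agreement, mean ± 1.96 SD of the paired differences; the paired measurements below are invented:

```python
import statistics

# Hypothetical paired measurements (mm): digital model vs. caliper on plaster.
digital = [8.42, 7.10, 9.95, 6.80, 8.05]
caliper = [8.50, 7.00, 10.00, 6.90, 8.00]

diffs = [d - c for d, c in zip(digital, caliper)]
bias = statistics.mean(diffs)             # mean difference between methods
sd = statistics.stdev(diffs)              # sample SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

Clinical acceptability is then judged by whether the limits of agreement fall within a clinically tolerable deviation, not by statistical significance alone.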

  2. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  3. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    PubMed

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out by two observers, twice each, on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53 %) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  4. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    DTIC Science & Technology

    2009-03-31

    AFRL-RV-HA-TR-2009-1055: Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter. Scientific, Final; dates covered: 02-08-2006 – 31-12-2008. …m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable…

  5. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that products meet NASA requirements for reliability measurement. For the new software reliability models of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability models to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models, which could then be incorporated in a tool such as SMERFS'3. This tool with better models would add great value in assessing GSFC projects.
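Software reliability growth models of the kind such tools implement can be illustrated with the Goel-Okumoto NHPP mean value function, μ(t) = a(1 − e^(−bt)), the expected cumulative failures detected by test time t. The parameter values below are invented:

```python
import math

# Goel-Okumoto NHPP mean value function: expected cumulative failures by time t.
# a = total expected faults, b = per-fault detection rate (both assumed values).

def goel_okumoto_mu(t, a, b):
    return a * (1.0 - math.exp(-b * t))

a, b = 120.0, 0.05
found_by_40h = goel_okumoto_mu(40.0, a, b)  # expected failures found in 40 h of test
remaining = a - found_by_40h                # expected latent faults still in the code
```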

  6. Predicting Next Year's Resources--Short-Term Enrollment Forecasting for Accurate Budget Planning. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    Salley, Charles D.

    Accurate enrollment forecasts are a prerequisite for reliable budget projections. This is because tuition payments make up a significant portion of a university's revenue, and anticipated revenue is the immediate constraint on current operating expenditures. Accurate forecasts are even more critical to revenue projections when a university's…

  7. Metrological Reliability of Medical Devices

    NASA Astrophysics Data System (ADS)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  8. Validity and Reliability of Baseline Testing in a Standardized Environment.

    PubMed

    Higgins, Kathryn L; Caze, Todd; Maerlender, Arthur

    2017-08-11

    The Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) is a computerized neuropsychological test battery commonly used to determine cognitive recovery from concussion based on comparing post-injury scores to baseline scores. This model is based on the premise that ImPACT baseline test scores are a valid and reliable measure of optimal cognitive function at baseline. Growing evidence suggests that this premise may not be accurate and a large contributor to invalid and unreliable baseline test scores may be the protocol and environment in which baseline tests are administered. This study examined the effects of a standardized environment and administration protocol on the reliability and performance validity of athletes' baseline test scores on ImPACT by comparing scores obtained in two different group-testing settings. Three hundred sixty-one Division 1 cohort-matched collegiate athletes' baseline data were assessed using a variety of indicators of potential performance invalidity; internal reliability was also examined. Thirty-one to thirty-nine percent of the baseline cases had at least one indicator of low performance validity, but there were no significant differences in validity indicators based on environment in which the testing was conducted. Internal consistency reliability scores were in the acceptable to good range, with no significant differences between administration conditions. These results suggest that athletes may be reliably performing at levels lower than their best effort would produce. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Obtaining accurate glucose measurements from wild animals under field conditions: comparing a hand held glucometer with a standard laboratory technique in grey seals

    PubMed Central

    Turner, Lucy M.; Millward, Sebastian; Moss, Simon E. W.; Hall, Ailsa J.

    2017-01-01

    Abstract Glucose is an important metabolic fuel and circulating levels are tightly regulated in most mammals, but can drop when body fuel reserves become critically low. Glucose is mobilized rapidly from liver and muscle during stress in response to increased circulating cortisol. Blood glucose levels can thus be of value in conservation as an indicator of nutritional status and may be a useful, rapid assessment marker for acute or chronic stress. However, seals show unusual glucose regulation: circulating levels are high and insulin sensitivity is limited. Accurate blood glucose measurement is therefore vital to enable meaningful health and physiological assessments in captive, wild or rehabilitated seals and to explore its utility as a marker of conservation relevance in these animals. Point-of-care devices are simple, portable, relatively cheap and use less blood compared with traditional sampling approaches, making them useful in conservation-related monitoring. We investigated the accuracy of a hand-held glucometer for ‘instant’ field measurement of blood glucose, compared with blood drawing followed by laboratory testing, in wild grey seals (Halichoerus grypus), a species used as an indicator for Good Environmental Status in European waters. The glucometer showed high precision, but low accuracy, relative to laboratory measurements, and was least accurate at extreme values. It did not provide a reliable alternative to plasma analysis. Poor correlation between methods may be due to suboptimal field conditions, greater and more variable haematocrit, faster erythrocyte settling rate and/or lipaemia in seals. Glucometers must therefore be rigorously tested before use in new species and demographic groups. Sampling, processing and glucose determination methods have major implications for conclusions regarding glucose regulation, and health assessment in seals generally, which is important in species of conservation concern and in development of circulating

  10. A validation of the construct and reliability of an emotional intelligence scale applied to nursing students

    PubMed Central

    Espinoza-Venegas, Maritza; Sanhueza-Alvarado, Olivia; Ramírez-Elizondo, Noé; Sáez-Carrillo, Katia

    2015-01-01

    OBJECTIVE: The current study aimed to validate the construct and reliability of an emotional intelligence scale. METHOD: The Trait Meta-Mood Scale-24 was applied to 349 nursing students. The process included content validation, which involved expert reviews, pilot testing, measurements of reliability using Cronbach's alpha, and factor analysis to corroborate the validity of the theoretical model's construct. RESULTS: Adequate Cronbach coefficients were obtained for all three dimensions, and factor analysis confirmed the scale's dimensions (perception, comprehension, and regulation). CONCLUSION: The Trait Meta-Mood Scale is a reliable and valid tool to measure the emotional intelligence of nursing students. Its use allows for accurate determinations of individuals' abilities to interpret and manage emotions. At the same time, this new construct is of potential importance for measurements in nursing leadership; educational, organizational, and personal improvements; and the establishment of effective relationships with patients. PMID:25806642
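
    The Cronbach's alpha reliability check used in the study above can be sketched numerically. The following is a minimal, self-contained illustration with made-up Likert-scale responses (not the study's data); the function name and toy scores are assumptions for illustration only.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k items; `items` holds one column per item,
    each column listing that item's scores across all respondents."""
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 5-point Likert responses: 4 respondents x 3 items
items = [[2, 4, 3, 5], [3, 5, 3, 4], [2, 5, 4, 5]]
alpha = cronbach_alpha(items)  # roughly 0.91 for this toy data
```

    Values around 0.7 or above are conventionally read as adequate internal consistency, which is the criterion the abstract refers to for each of the scale's three dimensions.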

  11. Design Strategy for a Formally Verified Reliable Computing Platform

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.

    1991-01-01

    This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis and the "correctness" models.

  12. Evaluation of accuracy, reliability, and repeatability of five dental pulp tests.

    PubMed

    Chen, Eugene; Abbott, Paul V

    2011-12-01

    The aim of this study was to compare the clinical accuracy, reliability, and repeatability of laser Doppler flowmetry (LDF), an electric pulp test (EPT), and various thermal pulp sensibility tests. Pulp tests were done on 121 teeth in 20 subjects by using LDF, EPT, and thermal pulp testing (CO(2), Endo Frost [EF], Ice) during 2 or 3 test sessions with at least 1-week intervals. The order of testing was reversed on the second visit. A laser Doppler flowmeter was used to measure mean pulp blood flow (Flux) calibrated against a brownian motion medium and zeroed against a static reflector. The laser source was 780 nm, with 0.5-mm fiber separation in the probe and 3.1 kHz as the primary bandwidth, with the filter set to a 0.1-second time output constant. Customized polyvinylsiloxane splints were fabricated for each participant, and a minimum recording time of 90 seconds was used for each tooth. Raw data were analyzed by using repeated-measures analysis of variance, pairwise comparisons, and intraclass correlation coefficients (ICC). The accuracy of EPT, CO(2), and LDF tests was 97.7%, 97.0%, and 96.3%, respectively, without significant differences (P > .3). Accuracy of EF and Ice was 90.7% and 84.8%, respectively. EPT (P = .015) and CO(2) (P = .022) were significantly more accurate than EF. LDF was more accurate than EF, but this was not statistically significant (P = .063). Ice was significantly less accurate than EPT (P = .004), CO(2) (P = .005), LDF (P = .006), and EF (P = .019). With the exception of Ice (effect of visit: F(2,38) = 5.67, mean squared error = 0.01, P = .007, η(2)(p) = 0.23), all tests were reliable. Ice (ICC = 0.677) and LDF (ICC = 0.654) were the most repeatable of the tests, whereas EPT (ICC = 0.434) and CO(2) (ICC = 0.432) were less repeatable. CO(2), EPT, and LDF were reliable and the most accurate tests, but CO(2) and EPT were less repeatable yet less time-consuming than LDF. EF was reliable but not as accurate as EPT and CO(2) and less repeatable than Ice and LDF.

  13. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    PubMed Central

    Li, Mengmeng; Feng, Qiang; Yang, Dezhen

    2018-01-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe by mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time point. In addition, degradation experiments on the copper bending pipe were carried out, the thickness at each time point was obtained, and the response of maximum stress was then calculated by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it is more convenient and accurate for predicting the replacement cycle of copper bending pipe under seawater-active corrosion. PMID:29584695
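
    The stress-strength interference idea underlying the abstract can be sketched with a crude Monte Carlo estimate of R(t) = P(strength(t) > stress). The linear degradation model and every number below are illustrative assumptions, not the paper's measured data.

```python
import random

def time_variant_reliability(t_years, n=100_000, seed=0):
    """Monte Carlo estimate of R(t) = P(strength(t) > stress) under a
    hypothetical linear strength-degradation model (illustrative values)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        strength = rng.gauss(300.0 - 2.0 * t_years, 15.0)  # MPa, degrades with age
        stress = rng.gauss(180.0, 20.0)                    # MPa, operating load
        survived += strength > stress
    return survived / n
```

    Sampling both the degrading strength and the random load, rather than interfering two fixed distributions, is what makes the estimate "time-variant": repeating the call over a grid of t values traces the reliability curve used to pick a replacement cycle.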

  14. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to evaluate the reliability of a digital RPS accurately. Methods based on probability estimation carry uncertainties, cannot reflect the reliability status of the RPS dynamically, and offer little support for maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital (safety-critical) RPS, by which the relationship between the reliability and the response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed with the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for the maintenance and troubleshooting of a digital RPS.

  15. Metrics for Assessing the Reliability of a Telemedicine Remote Monitoring System

    PubMed Central

    Fox, Mark; Papadopoulos, Amy; Crump, Cindy

    2013-01-01

    Abstract Objective: The goal of this study was to assess using new metrics the reliability of a real-time health monitoring system in homes of older adults. Materials and Methods: The “MobileCare Monitor” system was installed into the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a “panic” button that was connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Results: Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the 10 h/day requested time. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. Conclusions: The system was quite effective in providing accurate remote health data. The different system reliability measures identify important error sources in remote monitoring systems. PMID:23611640

  16. Monitoring nutritional status accurately and reliably in adolescents with anorexia nervosa.

    PubMed

    Martin, Andrew C; Pascoe, Elaine M; Forbes, David A

    2009-01-01

    Accurate assessment of nutritional status is a vital aspect of caring for individuals with anorexia nervosa (AN) and body mass index (BMI) is considered an appropriate and easy to use tool. Because of the intense fear of weight gain, some individuals may attempt to mislead the physician. Mid-upper arm circumference (MUAC) is a simple, objective method of assessing nutritional status. The setting is an eating disorders clinic in a tertiary paediatric hospital in Western Australia. The aim of this study is to evaluate how well MUAC correlates with BMI in adolescents with AN. Prospective observational study to evaluate nutritional status in adolescents with AN. Fifty-five adolescents aged 12-17 years with AN were assessed between January 1, 2004 and January 1, 2006. MUAC was highly correlated with BMI (r = 0.79, P < 0.001) and individuals with MUAC ≥20 cm rarely required hospitalisation (negative predictive value 93%). MUAC reflects nutritional status as defined by BMI in adolescents with AN. Lack of consistency between longitudinal measurements of BMI and MUAC should be viewed suspiciously and prompt a more detailed nutritional assessment.

  17. Reliable retrieval of atmospheric and aquatic parameters in coastal and inland environments from polar-orbiting and geostationary platforms: challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Stamnes, Knut; Li, Wei; Lin, Zhenyi; Fan, Yongzhen; Chen, Nan; Gatebe, Charles; Ahn, Jae-Hyun; Kim, Wonkook; Stamnes, Jakob J.

    2017-04-01

    Simultaneous retrieval of aerosol and surface properties by means of inverse techniques based on a coupled atmosphere-surface radiative transfer model, neural networks, and optimal estimation can yield considerable improvements in retrieval accuracy in complex aquatic environments compared with traditional methods. Remote sensing of such environments presents specific challenges due to (i) the complexity of the atmosphere and water inherent optical properties, (ii) unique bidirectional dependencies of the water-leaving radiance, and (iii) the desire to do retrievals for large solar zenith and viewing angles. We will discuss (a) how challenges related to atmospheric gaseous absorption, absorbing aerosols, and turbid waters can be addressed by using a coupled atmosphere-surface radiative transfer (forward) model in the retrieval process, (b) how the need to correct for bidirectional effects can be accommodated in a systematic and reliable manner, (c) how polarization information can be utilized, (d) how the curvature of the atmosphere can be taken into account, and (e) how neural networks and optimal estimation can be used to obtain fast yet accurate retrievals. Special emphasis will be placed on how information from existing and future sensors deployed on polar-orbiting and geostationary platforms can be obtained in a reliable and accurate manner. The need to provide uncertainty assessments and error budgets will also be discussed.

  18. The obturator oblique and iliac oblique/outlet views predict most accurately the adequate position of an anterior column acetabular screw.

    PubMed

    Guimarães, João Antonio Matheus; Martin, Murphy P; da Silva, Flávio Ribeiro; Duarte, Maria Eugenia Leite; Cavalcanti, Amanda Dos Santos; Machado, Jamila Alessandra Perini; Mauffrey, Cyril; Rojas, David

    2018-06-08

    Percutaneous fixation of the acetabulum is a treatment option for select acetabular fractures. Intra-operative fluoroscopy is required, and despite various described imaging strategies, it is debatable which combination of fluoroscopic views provides the most accurate and reliable assessment of screw position. Using five synthetic pelvic models, an experimental setup was created in which the anterior acetabular columns were instrumented with screws in five distinct trajectories. Five fluoroscopic images were obtained of each model (Pelvic Inlet, Obturator Oblique, Iliac Oblique, Obturator Oblique/Outlet, and Iliac Oblique/Outlet). The images were presented to 32 pelvic and acetabular orthopaedic surgeons, who were asked to draw two conclusions regarding screw position: (1) whether the screw was intra-articular and (2) whether the screw was intraosseous in its distal course through the bony corridor. In the assessment of screw position relative to the hip joint, accuracy of surgeons' responses ranged from 52% (iliac oblique/outlet) to 88% (obturator oblique), with surgeon confidence in the interpretation ranging from 60% (pelvic inlet) to 93% (obturator oblique) (P < 0.0001). In the assessment of intraosseous position of the screw, accuracy of surgeons' responses ranged from 40% (obturator oblique/outlet) to 79% (iliac oblique/outlet), with surgeon confidence in the interpretation ranging from 66% (iliac oblique) to 88% (pelvic inlet) (P < 0.0001). The obturator oblique and obturator oblique/outlet views afforded the most accurate and reliable assessment of penetration into the hip joint, and intraosseous position of the screw was most accurately assessed with pelvic inlet and iliac oblique/outlet views.

  19. EGNOS-Based Multi-Sensor Accurate and Reliable Navigation in Search-And-Rescue Missions with UAVs

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Stebler, Y.; Skaloud, J.; Kornus, W.; Prades, R.

    2011-09-01

    This paper will introduce and describe the goals, concept and overall approach of the European 7th Framework Programme's project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost SAR operations'. The goal of CLOSE-SEARCH is to integrate, in a helicopter-type unmanned aerial vehicle, a thermal imaging sensor and a multi-sensor navigation system (based on the use of a Barometric Altimeter (BA), a Magnetometer (MAGN), a Redundant Inertial Navigation System (RINS) and an EGNOS-enabled GNSS receiver) with an Autonomous Integrity Monitoring (AIM) capability, to support the search component of Search-And-Rescue operations in remote, difficult-to-access areas and/or in time-critical situations. The proposed integration will result in a hardware and software prototype that will demonstrate end-to-end functionality, that is, to fly in patterns over a region of interest (possibly inaccessible) during day or night, also under adverse weather conditions, and to locate disaster survivors or lost people there through the detection of body heat. This paper will identify the technical challenges of the proposed approach, from navigating with a BA/MAGN/RINS/GNSS-EGNOS-based integrated system to the interpretation of thermal images for person identification. Moreover, the AIM approach will be described together with the proposed integrity requirements. Finally, this paper will show some results obtained in the project during the first test campaign, performed in November 2010. On that day, a prototype was flown in three different missions to assess its high-level performance and to observe some fundamental mission parameters, such as the optimal flying height and flying speed to enable body recognition. The second test campaign is scheduled for the end of 2011.

  20. Approximation of reliabilities for multiple-trait model with maternal effects.

    PubMed

    Strabel, T; Misztal, I; Bertrand, J K

    2001-04-01

    Reliabilities for a multiple-trait maternal model were obtained by combining reliabilities obtained from single-trait models. Single-trait reliabilities were obtained using an approximation that supported models with additive and permanent environmental effects. For the direct effect, the maternal and permanent environmental variances were assigned to the residual. For the maternal effect, variance of the direct effect was assigned to the residual. Data included 10,550 birth weight, 11,819 weaning weight, and 3,617 postweaning gain records of Senepol cattle. Reliabilities were obtained by generalized inversion and by using single-trait and multiple-trait approximation methods. Some reliabilities obtained by inversion were negative because inbreeding was ignored in calculating the inverse of the relationship matrix. The multiple-trait approximation method reduced the bias of approximation when compared with the single-trait method. The correlations between reliabilities obtained by inversion and by multiple-trait procedures for the direct effect were 0.85 for birth weight, 0.94 for weaning weight, and 0.96 for postweaning gain. Correlations for maternal effects for birth weight and weaning weight were 0.96 to 0.98 for both approximations. Further improvements can be achieved by refining the single-trait procedures.
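
    In animal-breeding evaluation, the reliability of an estimated breeding value (EBV) is conventionally derived from its prediction error variance (PEV); approximation methods like those in the abstract estimate PEV without inverting the full mixed-model equations. A minimal sketch of that standard relationship (not code from the paper; the function name is an assumption):

```python
def reliability_from_pev(pev, additive_variance):
    """Standard reliability of an EBV: r^2 = 1 - PEV / sigma_a^2.
    pev: prediction error variance of the EBV;
    additive_variance: additive genetic variance of the trait."""
    return 1.0 - pev / additive_variance

# e.g. additive variance 10 and PEV 2 give a reliability of 0.8
```

    Negative reliabilities, as reported in the abstract when inbreeding is ignored, correspond to an estimated PEV exceeding the additive variance.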

  1. Managing Reliability in the 21st Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellin, T.A.

    1998-11-23

    The rapid pace of change at the end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heart of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the start of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of value added to the organization's business objectives.

  2. The quadrant method measuring four points is as reliable and accurate as the quadrant method in the evaluation after anatomical double-bundle ACL reconstruction.

    PubMed

    Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro

    2017-11-20

    The quadrant method was described by Bernard et al. and it has been widely used for postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research is to further develop the quadrant method by measuring four points, which we named the four-point quadrant method, and to compare it with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's quadrant method were defined as point1-highest, point2-deepest, point3-lowest, and point4-shallowest, in femoral tunnel position. The depth and height values at each point were measured. The antero-medial (AM) tunnel is (depth1, height2) and the postero-lateral (PL) tunnel is (depth3, height4) in this four-point quadrant method. The 3D-CT images were evaluated independently by 2 orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of the intra-class correlation coefficient (ICC). The accuracy of the method was also evaluated against the quadrant method. Intra-observer reliability was almost perfect for both the AM and PL tunnels (ICC > 0.81). Inter-observer reliability of the AM tunnel was substantial (ICC > 0.61) and that of the PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deep and 0.58% high, and the PL tunnel position was 0.01% shallow and 0.13% low, compared to the quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture for comparison, and can provide measurements that can be compared with various reconstruction methods. The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for

  3. Fast, accurate, and reliable molecular docking with QuickVina 2.

    PubMed

    Alhossary, Amr; Handoko, Stephanus Daniel; Mu, Yuguang; Kwoh, Chee-Keong

    2015-07-01

    The need for efficient molecular docking tools for high-throughput screening is growing alongside the rapid growth of drug-fragment databases. AutoDock Vina ('Vina') is a widely used docking tool with parallelization for speed. QuickVina ('QVina 1') then further enhanced the speed via heuristics requiring high exhaustiveness; with low exhaustiveness, its accuracy was compromised. We present in this article the latest version of QuickVina ('QVina 2') that inherits both the speed of QVina 1 and the reliability of the original Vina. We tested the efficacy of QVina 2 on the core set of PDBbind 2014. With the default exhaustiveness level of Vina (i.e. 8), a maximum of 20.49-fold and an average of 2.30-fold acceleration with a correlation coefficient of 0.967 for the first mode and 0.911 for the sum of all modes were attained over the original Vina. A tendency for higher acceleration with increased number of rotatable bonds as the design variables was observed. On the accuracy, Vina wins over QVina 2 on 30% of the data with an average energy difference of only 0.58 kcal/mol. On the same dataset, GOLD produced RMSD smaller than 2 Å on 56.9% of the data while QVina 2 attained 63.1%. The C++ source code of QVina 2 is available at (www.qvina.org). aalhossary@pmail.ntu.edu.sg Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    In a company that produces refined sugar, the production floor has not reached the required level of critical machine availability because the machines often suffer damage (breakdowns). This results in a sudden loss of production time and production opportunities. This problem can be solved by the Reliability Engineering method, in which a statistical approach to historical damage data is performed to identify the pattern of the distribution. The method can provide values for the reliability, rate of damage, and availability level of a machine over the maintenance time interval schedule. Distribution testing of the time-between-failures (MTTF) data showed that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. For the mean time to repair (MTTR), the flexible hose component follows an exponential distribution and the teflon cone lifting component a Weibull distribution. On a replacement schedule of every 720 hours, the flexible hose component achieved a reliability of 0.2451 and an availability of 0.9960, while on a replacement schedule of every 1944 hours, the critical teflon cone lifting component achieved a reliability of 0.4083 and an availability of 0.9927.
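
    The reliability and availability figures quoted above follow from standard formulas once the distributions are fitted; a minimal sketch using illustrative shape/scale parameters (not the study's fitted values):

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability a Weibull-distributed component survives to time t:
    R(t) = exp(-(t/eta)**beta), with shape beta and scale eta."""
    return math.exp(-((t / eta) ** beta))

def steady_state_availability(mttf, mttr):
    """Long-run fraction of time the machine is up."""
    return mttf / (mttf + mttr)

# Illustrative: evaluate a 720-hour replacement interval for a component
# with assumed Weibull parameters beta=1.5, eta=1000 hours
r_720 = weibull_reliability(720, beta=1.5, eta=1000.0)
```

    Sweeping the replacement interval t and reading off R(t) against a target reliability is the usual way such interval schedules are chosen.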

  5. Reliability of internal oblique elbow radiographs for measuring displacement of medial epicondyle humerus fractures: a cadaveric study.

    PubMed

    Gottschalk, Hilton P; Bastrom, Tracey P; Edmonds, Eric W

    2013-01-01

    Standard elbow radiographs (AP and lateral views) are not accurate enough to measure true displacement of medial epicondyle fractures of the humerus. The amount of perceived displacement has been used to determine treatment options. This study assesses the utility of internal oblique radiographs for measurement of true displacement in these fractures. A medial epicondyle fracture was created in a cadaveric specimen. Displacement of the fragment (mm) was set at 5, 10, and 15 in line with the vector of the flexor pronator mass. The fragment was sutured temporarily in place. Radiographs were obtained at 0 (AP), 15, 30, 45, 60, 75, and 90 degrees (lateral) of internal rotation, with the elbow in set positions of flexion. This was done with and without radio-opaque markers placed on the fragment and fracture bed. The 45 and 60 degrees internal oblique radiographs were then presented to 5 separate reviewers (of different levels of training) to evaluate intraobserver and interobserver agreement. Change in elbow position did not affect the perceived displacement (P=0.82) with excellent intraobserver reliability (intraclass correlation coefficient range, 0.979 to 0.988) and interobserver agreement of 0.953. The intraclass correlation coefficient for intraobserver reliability on 45 degrees internal oblique films for all groups ranged from 0.985 to 0.998, with interobserver agreement of 0.953. For predicting displacement, the observers were 60% accurate in predicting the true displacement on the 45 degrees internal oblique films and only 35% accurate using the 60 degrees internal oblique view. Standardizing to a 45 degrees internal oblique radiograph of the elbow (regardless of elbow flexion) can augment the treating surgeon's ability to determine true displacement. At this degree of rotation, the measured number can be multiplied by 1.4 to better estimate displacement. The addition of a 45 degrees internal oblique radiograph in medial humeral epicondyle fractures has good
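
    The 1.4 multiplier quoted above is consistent with simple projection geometry, since 1 / cos 45° ≈ 1.414: displacement lying in the plane of the fragment appears foreshortened on a 45° oblique film. The sketch below is one plausible reading of that factor, not the authors' stated derivation, and the function name is an assumption.

```python
import math

def estimated_true_displacement(measured_mm, obliquity_deg=45.0):
    """Undo projection foreshortening on an oblique radiograph:
    true displacement ~= measured / cos(obliquity)."""
    return measured_mm / math.cos(math.radians(obliquity_deg))

# A displacement measuring 10 mm on a 45-degree oblique view would
# correspond to roughly 14 mm of true displacement under this model.
```
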

  6. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  7. The use of evidence-based guidance to enable reliable and accurate measurements of the home environment

    PubMed Central

    Atwal, Anita; McIntyre, Anne

    2017-01-01

    Introduction High quality guidance in home strategies is needed to enable older people to measure their home environment and become involved in the provision of assistive devices and to promote consistency among professionals. This study aims to investigate the reliability of such guidance and its ability to promote accuracy of results when measurements are taken by both older people and professionals. Method Twenty-five health professionals and 26 older people participated in a within-group design to test the accuracy of measurements taken (that is, person’s popliteal height, baths, toilets, beds, stairs and chairs). Data were analysed with descriptive analysis and the Wilcoxon test. The intra-rater reliability was assessed by correlating measurements taken at two different times with guidance use. Results The intra-rater reliability analysis revealed statistical significance (P < 0.05) for all measurements except for the bath internal width. The guidance enabled participants to take 90% of measurements that they were not able to complete otherwise, 80.55% of which lay within the acceptable suggested margin of variation. Accuracy was supported by the significant reduction in the standard deviation of the actual measurements and accuracy scores. Conclusion This evidence-based guidance can be used in its current format by older people and professionals to facilitate appropriate measurements. Yet, some users might need help from carers or specialists depending on their impairments. PMID:29386701

  8. The use of evidence-based guidance to enable reliable and accurate measurements of the home environment.

    PubMed

    Spiliotopoulou, Georgia; Atwal, Anita; McIntyre, Anne

    2018-01-01

    High quality guidance in home strategies is needed to enable older people to measure their home environment and become involved in the provision of assistive devices and to promote consistency among professionals. This study aims to investigate the reliability of such guidance and its ability to promote accuracy of results when measurements are taken by both older people and professionals. Twenty-five health professionals and 26 older people participated in a within-group design to test the accuracy of measurements taken (that is, person's popliteal height, baths, toilets, beds, stairs and chairs). Data were analysed with descriptive analysis and the Wilcoxon test. The intra-rater reliability was assessed by correlating measurements taken at two different times with guidance use. The intra-rater reliability analysis revealed statistical significance ( P  < 0.05) for all measurements except for the bath internal width. The guidance enabled participants to take 90% of measurements that they were not able to complete otherwise, 80.55% of which lay within the acceptable suggested margin of variation. Accuracy was supported by the significant reduction in the standard deviation of the actual measurements and accuracy scores. This evidence-based guidance can be used in its current format by older people and professionals to facilitate appropriate measurements. Yet, some users might need help from carers or specialists depending on their impairments.

  9. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
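The final step of the method, obtaining the failure probability as a simple integral over the performance function's PDF and cross-checking against Monte Carlo simulation, can be illustrated with a stand-in density. A normal PDF is used here purely for demonstration; in the paper the PDF comes from the maximum entropy fit to fractional moments:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Stand-in performance function G ~ N(2, 1); failure is conventionally G <= 0.
mu, sigma = 2.0, 1.0
pdf = lambda g: norm.pdf(g, mu, sigma)

# Failure probability as a simple integral of the PDF over (-inf, 0].
pf_integral, _ = quad(pdf, -np.inf, 0.0)

# Reference: crude Monte Carlo simulation (MCS) on the same distribution.
rng = np.random.default_rng(42)
pf_mcs = float(np.mean(rng.normal(mu, sigma, 200_000) <= 0.0))
```

The integral route needs only a fitted PDF, whereas MCS needs enough samples to resolve a small tail probability, which is exactly the efficiency argument the abstract makes.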

  10. On the reliability of computed chaotic solutions of non-linear differential equations

    NASA Astrophysics Data System (ADS)

    Liao, Shijun

    2009-08-01

A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtful when t > Tc. This provides a strategy for detecting the reliable part of a given computed result. In this way, computational phenomena such as computational chaos (CC), computational periodicity (CP) and computational prediction uncertainty, which are mainly based on long-term properties of computed time-series, can be completely avoided. Using this concept, the famous conclusion `accurate long-term prediction of chaos is impossible' should be replaced by the more precise conclusion that `accurate prediction of chaos beyond the critical predictable time Tc is impossible'. The concept thus also provides a timescale for determining whether or not a particular time is long enough for a given non-linear dynamic system. In addition, the influence of data inaccuracy and of various numerical schemes on the critical predictable time is investigated in detail, using symbolic computation software as a tool. A reliable chaotic solution of the Lorenz equation in a rather large interval 0 <= t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and of the computed data at each time step, which is mathematically necessary to obtain such a reliable chaotic solution over such a long time, is so high that it is physically unattainable due to the Heisenberg uncertainty principle in quantum physics. This, however, leads to a so-called `precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even macroscopic phenomena might be essentially stochastic and thus more economically described by probability.
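The idea of a critical predictable time can be illustrated numerically: integrate the Lorenz system twice at different tolerances and record the first time the two computed trajectories disagree appreciably. The parameters, tolerances, and divergence threshold below are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 50.0, 5001)
u0 = [1.0, 1.0, 1.0]

# Same initial condition, two very different integration tolerances.
lo = solve_ivp(lorenz, (0.0, 50.0), u0, t_eval=t_eval, rtol=1e-6, atol=1e-9)
hi = solve_ivp(lorenz, (0.0, 50.0), u0, t_eval=t_eval, rtol=1e-12, atol=1e-12)

# An empirical stand-in for Tc: first time the x-coordinates differ by > 1.
err = np.abs(lo.y[0] - hi.y[0])
Tc = t_eval[np.argmax(err > 1.0)]
```

Before Tc the two runs agree and the computed trajectory can be trusted; after Tc the result depends on the numerics rather than on the equations, which is the unreliability the abstract formalizes.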

  11. Validity and reliability of naturalistic driving scene categorization Judgments from crowdsourcing.

    PubMed

    Cabrall, Christopher D D; Lu, Zhenji; Kyriakidis, Miltos; Manca, Laura; Dijksterhuis, Chris; Happee, Riender; de Winter, Joost

    2018-05-01

A common challenge with processing naturalistic driving data is that humans may need to categorize great volumes of recorded visual information. By means of the online platform CrowdFlower, we investigated the potential of crowdsourcing to categorize driving scene features (i.e., presence of other road users, straight road segments, etc.) at greater scale than a single person or a small team of researchers would be capable of. In total, 200 workers from 46 different countries participated in 1.5 days. Validity and reliability were examined, both with and without embedding researcher-generated control questions via the CrowdFlower mechanism known as Gold Test Questions (GTQs). By employing GTQs, we found significantly more valid (accurate) and reliable (consistent) identification of driving scene items from external workers. Specifically, at a small-scale CrowdFlower Job of 48 three-second video segments, an accuracy (i.e., relative to the ratings of a confederate researcher) of 91% on items was found with GTQs compared to 78% without. A difference in bias was found: without GTQs, external workers returned more false positives than with GTQs. At a larger-scale CrowdFlower Job making exclusive use of GTQs, 12,862 three-second video segments were released for annotation. Because it was infeasible (and self-defeating) to check the accuracy of each categorization at this scale, a random subset of 1012 categorizations was validated and returned similar levels of accuracy (95%). In the small-scale Job, where full video segments were repeated in triplicate, the percentage of unanimous agreement on the items was found to be significantly more consistent when using GTQs (90%) than without them (65%). Additionally, in the larger-scale Job (where a single second of a video segment was overlapped by ratings of three sequentially neighboring segments), a mean unanimity of 94% was obtained with validated-as-correct ratings and 91% with non-validated ratings. Because the video segments overlapped in full for
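The two quantities the study reports, accuracy against gold answers and unanimous agreement across triplicate ratings, reduce to simple counting. A toy sketch with invented clip labels (the item names and ratings are hypothetical, not the study's data):

```python
# Gold answers (researcher ratings) and triplicate crowd ratings per video clip.
gold = {"clip1": "car_present", "clip2": "no_car", "clip3": "car_present"}
ratings = {
    "clip1": ["car_present", "car_present", "car_present"],
    "clip2": ["no_car", "car_present", "no_car"],
    "clip3": ["car_present", "car_present", "car_present"],
}

# Majority vote per clip, then accuracy relative to the gold answers.
majority = {k: max(set(v), key=v.count) for k, v in ratings.items()}
accuracy = sum(majority[k] == gold[k] for k in gold) / len(gold)

# Unanimity: fraction of clips on which all three raters agreed.
unanimity = sum(len(set(v)) == 1 for v in ratings.values()) / len(ratings)
```

In the study's terms, accuracy measures validity and unanimity measures reliability; GTQs raised both.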

  12. GalaxyTBM: template-based modeling by building a reliable core and refining unreliable local regions.

    PubMed

    Ko, Junsu; Park, Hahnbeom; Seok, Chaok

    2012-08-10

    Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.
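GDT-TS, the score used above to compare GalaxyTBM and Seok-server, averages the fraction of residues whose model-vs-native CA distance falls within 1, 2, 4, and 8 Å after superposition. A minimal sketch of the scoring formula (the distance list in the test is illustrative; real use computes distances after optimal superposition):

```python
import numpy as np

def gdt_ts(distances):
    """GDT-TS: mean over the 1, 2, 4, 8 Angstrom cutoffs of the fraction of
    residues within each cutoff, expressed as a percentage."""
    d = np.asarray(distances, dtype=float)
    return 100.0 * np.mean([(d <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])
```

A perfect model scores 100; the averages of 68.1 and 68.4 quoted above mean that, typically, roughly two-thirds of residues sit within the cutoff bands.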

  13. Development and positioning reliability of a TMS coil holder for headache research.

    PubMed

    Chronicle, Edward P; Pearson, A Jane; Matthews, Cheryl

    2005-01-01

    Accurate and reproducible coil positioning is important for headache research using transcranial magnetic stimulation protocols. We aimed to design a transcranial magnetic stimulation coil holder and demonstrate reliability of test-retest coil positioning. A coil holder was developed and manufactured according to three principles of stability, durability, and three-dimensional positional accuracy. Reliability of coil positioning was assessed by stimulating over the motor cortex of four neurologically normal subjects and recording finger muscle responses, both at a test phase and a retest phase several hours later. In all four subjects, repositioning of the transcranial magnetic stimulation coil solely on the basis of coil holder coordinates was accurate to within 2 mm. The coil holder demonstrated good test-retest reliability of coil positioning, and is thus a promising tool for transcranial magnetic stimulation-based headache research, particularly studies of prophylactic drug effect where several laboratory visits with identical coil positioning are necessary.

  14. Validation and Improvement of Reliability Methods for Air Force Building Systems

    DTIC Science & Technology

focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force... probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a... Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that

  15. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

A mathematical model and associated computer code have been developed which compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. This code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to, and utilized by, the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
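The structure of such a reliability model, the probability of completing each successive orbit correction driven by timing information from the performance model, can be sketched with assumed failure laws. All rates, probabilities, and times below are hypothetical, not the report's values:

```python
import math

# Assumed component behavior: electronics follow an exponential failure law,
# and each burn's thruster-valve cycle succeeds with probability p_cycle.
lam = 2e-6          # assumed failure rate per operating hour
p_cycle = 0.999     # assumed per-burn valve success probability

# Cumulative operating hours at each orbit correction (from a performance model).
burn_times_hr = [100, 500, 1200, 3000]

def reliability_at(k):
    """Probability that all burns up to and including correction k succeed."""
    return math.exp(-lam * burn_times_hr[k]) * p_cycle ** (k + 1)

mission = [reliability_at(k) for k in range(len(burn_times_hr))]
```

As in the abstract, reliability is reported per objective: each later correction carries the accumulated operating time and burn count of everything before it, so the sequence is monotonically decreasing.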

  16. An infrastructure for accurate characterization of single-event transients in digital circuits.

    PubMed

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-11-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.
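The "standard double-exponential current injection model" mentioned above is the widely used SET pulse I(t) = Q/(τf − τr) · (e^(−t/τf) − e^(−t/τr)), whose time integral equals the collected charge Q. A quick numerical sanity check of that property (the charge and time constants are illustrative, not the calibrated FATAL values):

```python
import numpy as np

def set_current(t, q_coll=100e-15, tau_f=200e-12, tau_r=50e-12):
    """Double-exponential single-event-transient current pulse
    (illustrative parameter values)."""
    return (q_coll / (tau_f - tau_r)) * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

# Integrate the pulse over many fall-time constants; it should recover q_coll.
t = np.linspace(0.0, 20 * 200e-12, 200_001)
i_t = set_current(t)
dt = t[1] - t[0]
injected = float(np.sum(0.5 * (i_t[:-1] + i_t[1:])) * dt)  # trapezoidal integral
```

Aligning this pulse's parameters with 3D device simulation, as the abstract describes, amounts to choosing Q, τf, and τr so the SPICE-level transient matches the simulated charge collection.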

  17. An infrastructure for accurate characterization of single-event transients in digital circuits☆

    PubMed Central

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-01-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694

  18. A Multidisciplinary Assessment of Faculty Accuracy and Reliability with Bloom's Taxonomy

    ERIC Educational Resources Information Center

    Welch, Adam C.; Karpen, Samuel C.; Cross, L. Brian; LeBlanc, Brandie N.

    2017-01-01

    The aims of this study were to determine faculty's ability to accurately and reliably categorize exam questions using Bloom's Taxonomy, and if modified versions would improve the accuracy and reliability. Faculty experience and affiliation with a health sciences discipline were also considered. Faculty at one university were asked to categorize 30…

  19. Reliable enumeration of malaria parasites in thick blood films using digital image analysis.

    PubMed

    Frean, John A

    2009-09-23

    Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. The requirements of a
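The core of the digital counting step, thresholding the image and counting connected particles above a minimum size, can be sketched with SciPy on a synthetic image. The image, threshold, and size cutoff are all invented stand-ins for the paper's semi-automatically adapted parameters:

```python
import numpy as np
from scipy import ndimage

# Toy "thick film" image: three bright parasite-like blobs on a noisy background.
img = np.zeros((60, 60))
img[5:10, 5:10] = 1.0      # parasite 1
img[20:24, 30:35] = 1.0    # parasite 2
img[45:50, 50:55] = 1.0    # parasite 3
img += np.random.default_rng(0).normal(0.0, 0.05, img.shape)  # staining noise

binary = img > 0.5                      # global intensity threshold
labels, n_raw = ndimage.label(binary)   # connected-component labeling
sizes = ndimage.sum(binary, labels, range(1, n_raw + 1))
n_parasites = int(np.sum(sizes >= 4))   # size filter rejects noise specks
```

The paper's refinement is essentially to adapt the threshold and the size-frequency filter to each film's parasite size, shape, and staining, so that no per-image operator decision can bias the count.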

  20. 78 FR 73424 - Retirement of Requirements in Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-06

    ... predicated on the view that many violations of requirements currently included in Reliability Standards pose... for Bulk-Power System reliability or may be redundant. The Commission is interested in obtaining views... redundant. The Commission is interested in obtaining views on whether such requirements could be removed...

  1. Assessing reliability and validity measures in managed care studies.

    PubMed

    Montoya, Isaac D

    2003-01-01

    To review the reliability and validity literature and develop an understanding of these concepts as applied to managed care studies. Reliability is a test of how well an instrument measures the same input at varying times and under varying conditions. Validity is a test of how accurately an instrument measures what one believes is being measured. A review of reliability and validity instructional material was conducted. Studies of managed care practices and programs abound. However, many of these studies utilize measurement instruments that were developed for other purposes or for a population other than the one being sampled. In other cases, instruments have been developed without any testing of the instrument's performance. The lack of reliability and validity information may limit the value of these studies. This is particularly true when data are collected for one purpose and used for another. The usefulness of certain studies without reliability and validity measures is questionable, especially in cases where the literature contradicts itself

  2. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.

  3. Extracting More Information from Passive Optical Tracking Observations for Reliable Orbit Element Generation

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Gehly, S.

    2016-09-01

This paper presents results from a preliminary method for extracting more orbital information from low-rate passive optical tracking data. An improvement in the accuracy of the observation data yields more accurate and reliable orbital elements. Orbit propagations from the orbital element generated using the new data-processing method are compared with those generated from the raw observation data for several objects. Optical tracking data collected by EOS Space Systems, located on Mount Stromlo, Australia, are fitted to provide a new orbital element. The element accuracy is determined from a comparison between the predicted orbit and subsequent tracking data, or a reference orbit if available. The new method is shown to result in a better orbit prediction, which has important implications for conjunction assessments and the Space Environment Research Centre space object catalogue. The focus is on obtaining reliable orbital solutions from sparse data. This work forms part of the collaborative effort of the Space Environment Management Cooperative Research Centre, which is developing new technologies and strategies to preserve the space environment (www.serc.org.au).

  4. Washable and Reliable Textile Electrodes Embedded into Underwear Fabric for Electrocardiography (ECG) Monitoring

    PubMed Central

    Ankhili, Amale; Tao, Xuyuan; Cochrane, Cédric; Coulon, David; Koncar, Vladan

    2018-01-01

    A medical quality electrocardiogram (ECG) signal is necessary for permanent monitoring, and an accurate heart examination can be obtained from instrumented underwear only if it is equipped with high-quality, flexible, textile-based electrodes guaranteeing low contact resistance with the skin. The main objective of this article is to develop reliable and washable ECG monitoring underwear able to record and wirelessly send an ECG signal in real time to a smart phone and further to a cloud. The article focuses on textile electrode design and production guaranteeing optimal contact impedance. Therefore, different types of textile fabrics were coated with modified poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) in order to develop and manufacture reliable and washable textile electrodes assembled to female underwear (bras), by sewing using commercially available conductive yarns. Washability tests of connected underwear containing textile electrodes and conductive threads were carried out up to 50 washing cycles. The influence of standardized washing cycles on the quality of ECG signals and the electrical properties of the textile electrodes were investigated and characterized. PMID:29414849

  5. Washable and Reliable Textile Electrodes Embedded into Underwear Fabric for Electrocardiography (ECG) Monitoring.

    PubMed

    Ankhili, Amale; Tao, Xuyuan; Cochrane, Cédric; Coulon, David; Koncar, Vladan

    2018-02-07

    A medical quality electrocardiogram (ECG) signal is necessary for permanent monitoring, and an accurate heart examination can be obtained from instrumented underwear only if it is equipped with high-quality, flexible, textile-based electrodes guaranteeing low contact resistance with the skin. The main objective of this article is to develop reliable and washable ECG monitoring underwear able to record and wirelessly send an ECG signal in real time to a smart phone and further to a cloud. The article focuses on textile electrode design and production guaranteeing optimal contact impedance. Therefore, different types of textile fabrics were coated with modified poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) in order to develop and manufacture reliable and washable textile electrodes assembled to female underwear (bras), by sewing using commercially available conductive yarns. Washability tests of connected underwear containing textile electrodes and conductive threads were carried out up to 50 washing cycles. The influence of standardized washing cycles on the quality of ECG signals and the electrical properties of the textile electrodes were investigated and characterized.

  6. On the reliability and limitations of the SPAC method with a directional wavefield

    NASA Astrophysics Data System (ADS)

    Luo, Song; Luo, Yinhe; Zhu, Lupei; Xu, Yixian

    2016-03-01

    The spatial autocorrelation (SPAC) method is one of the most efficient ways to extract phase velocities of surface waves from ambient seismic noise. Most studies apply the method based on the assumption that the wavefield of ambient noise is diffuse. However, the actual distribution of sources is neither diffuse nor stationary. In this study, we examined the reliability and limitations of the SPAC method with a directional wavefield. We calculated the SPAC coefficients and phase velocities from a directional wavefield for a four-layer model and characterized the limitations of the SPAC. We then applied the SPAC method to real data in Karamay, China. Our results show that, 1) the SPAC method can accurately measure surface wave phase velocities from a square array with a directional wavefield down to a wavelength of twice the shortest interstation distance; and 2) phase velocities obtained from real data by the SPAC method are stable and reliable, which demonstrates that this method can be applied to measure phase velocities in a square array with a directional wavefield.
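The core SPAC relation, that the azimuthally averaged coherency between stations a distance r apart equals J0(2πf r / c(f)), lets phase velocity be recovered by curve fitting. A synthetic sketch, with invented frequency, distances, and velocity:

```python
import numpy as np
from scipy.special import j0

f = 5.0                                      # frequency, Hz (illustrative)
r = np.array([50.0, 100.0, 150.0, 200.0])    # interstation distances, m
c_true = 500.0                               # synthetic phase velocity, m/s

# Ideal azimuthally averaged SPAC coefficients for a diffuse wavefield.
coh = j0(2 * np.pi * f * r / c_true)

# Recover the phase velocity by grid-search fitting of the Bessel curve.
c_grid = np.arange(100.0, 1001.0, 1.0)
misfit = [np.sum((j0(2 * np.pi * f * r / c) - coh) ** 2) for c in c_grid]
c_est = c_grid[int(np.argmin(misfit))]
```

Using several interstation distances per frequency, as a square array provides, resolves the ambiguity that a single oscillatory J0 value would leave; the paper's contribution is showing this fit stays stable even when the wavefield is directional rather than diffuse.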

  7. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Second, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher
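The motivation for replacing the explicit procedure with an implicit one, stability at time steps far beyond the explicit limit, can be seen on a scalar stiff model problem. This toy is only illustrative; the thesis solver applies the same idea to the full flow equations on unstructured grids:

```python
# Stiff model problem y' = lam * y, stepped far beyond the explicit stability limit.
lam, dt, steps = -50.0, 0.1, 50

y_exp = 1.0  # explicit (forward) Euler
y_imp = 1.0  # implicit (backward) Euler
for _ in range(steps):
    y_exp = y_exp + dt * lam * y_exp   # amplification factor 1 + dt*lam = -4: unstable
    y_imp = y_imp / (1.0 - dt * lam)   # amplification factor 1/(1 - dt*lam) = 1/6: stable
```

The exact solution decays to zero, which the implicit update tracks at this large step while the explicit update blows up; that stability margin is what lets an implicit solver reach steady state in far fewer, larger steps.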

  8. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard, well-evidenced measurement methods that are quick and easy to use. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years), were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated both manually and by a computerized method using Photoshop CS5 software. Test-retest analysis was used to determine reliability; validity was assessed with the intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software have been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.
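Two of the indices measured in the study are simple width ratios taken from the footprint. Sketches of their commonly used definitions follow (the widths in the test are invented, and the exact landmarks should be taken from the cited index literature):

```python
def chippaux_smirak(midfoot_width_mm, metatarsal_width_mm):
    """Chippaux-Smirak index: minimal midfoot width as a percentage of the
    maximal forefoot (metatarsal) width of the footprint."""
    return 100.0 * midfoot_width_mm / metatarsal_width_mm

def staheli(midfoot_width_mm, heel_width_mm):
    """Staheli arch index: minimal midfoot width over maximal heel width."""
    return midfoot_width_mm / heel_width_mm
```

Because both indices reduce to two width measurements, the Photoshop workflow only has to yield repeatable pixel distances, which is exactly what the test-retest ICCs in the study verify.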

  9. Selection and testing of reference genes for accurate RT-qPCR in rice seedlings under iron toxicity.

    PubMed

    Santos, Fabiane Igansi de Castro Dos; Marini, Naciele; Santos, Railson Schreinert Dos; Hoffman, Bianca Silva Fernandes; Alves-Ferreira, Marcio; de Oliveira, Antonio Costa

    2018-01-01

Reverse Transcription quantitative PCR (RT-qPCR) is a technique for gene expression profiling with high sensitivity and reproducibility. However, to obtain accurate results, it depends on data normalization using endogenous reference genes whose expression is constitutive or invariable. Although the technique is widely used in plant stress analyses, the stability of reference genes for iron toxicity in rice (Oryza sativa L.) has not been thoroughly investigated. Here, we tested a set of candidate reference genes for use in rice under this stress condition. The test was performed using four distinct methods: NormFinder, BestKeeper, geNorm and the comparative ΔCt. To achieve reproducible and reliable results, Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed. Valid reference genes were found for shoot (P2, OsGAPDH and OsNABP), root (OsEF-1a, P8 and OsGAPDH) and root+shoot (OsNABP, OsGAPDH and P8), enabling us to perform further reliable studies for iron toxicity in both indica and japonica subspecies. The importance of studying genes other than the traditional endogenous genes for use as normalizers is also shown here.
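Of the four ranking methods named, the comparative ΔCt method is the simplest to sketch: a candidate gene's stability is the mean, over all other candidates, of the standard deviation of their pairwise ΔCt across samples. The Ct values below are invented for illustration:

```python
import numpy as np

# Hypothetical Ct values (one array of samples per candidate reference gene).
ct = {
    "geneA": np.array([20.0, 20.0, 20.0, 20.0]),
    "geneB": np.array([22.0, 23.0, 22.0, 23.0]),
    "geneC": np.array([25.0, 24.0, 26.0, 23.0]),
}

def delta_ct_stability(ct):
    """Comparative delta-Ct stability: lower score = more stable gene."""
    names = list(ct)
    scores = {}
    for g in names:
        pairwise_sds = [float(np.std(ct[g] - ct[h])) for h in names if h != g]
        scores[g] = float(np.mean(pairwise_sds))
    return scores

scores = delta_ct_stability(ct)
most_stable = min(scores, key=scores.get)
```

In practice the four methods are run side by side, as in this study, and genes that rank well under all of them are chosen as normalizers.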

  10. Selection and testing of reference genes for accurate RT-qPCR in rice seedlings under iron toxicity

    PubMed Central

    dos Santos, Fabiane Igansi de Castro; Marini, Naciele; dos Santos, Railson Schreinert; Hoffman, Bianca Silva Fernandes; Alves-Ferreira, Marcio

    2018-01-01

Reverse Transcription quantitative PCR (RT-qPCR) is a technique for gene expression profiling with high sensitivity and reproducibility. However, to obtain accurate results, it depends on data normalization using endogenous reference genes whose expression is constitutive or invariable. Although the technique is widely used in plant stress analyses, the stability of reference genes for iron toxicity in rice (Oryza sativa L.) has not been thoroughly investigated. Here, we tested a set of candidate reference genes for use in rice under this stress condition. The test was performed using four distinct methods: NormFinder, BestKeeper, geNorm and the comparative ΔCt. To achieve reproducible and reliable results, Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed. Valid reference genes were found for shoot (P2, OsGAPDH and OsNABP), root (OsEF-1a, P8 and OsGAPDH) and root+shoot (OsNABP, OsGAPDH and P8), enabling us to perform further reliable studies for iron toxicity in both indica and japonica subspecies. The importance of studying genes other than the traditional endogenous genes for use as normalizers is also shown here. PMID:29494624

  11. Biopsy Specimens Obtained 7 Days After Starting Chemoradiotherapy (CRT) Provide Reliable Predictors of Response to CRT for Rectal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Toshiyuki; Sadahiro, Sotaro, E-mail: sadahiro@is.icc.u-tokai.ac.jp; Tanaka, Akira

    2013-04-01

    Purpose: Preoperative chemoradiation therapy (CRT) significantly decreases local recurrence in locally advanced rectal cancer. Various biomarkers in biopsy specimens obtained before CRT have been proposed as predictors of response. However, reliable biomarkers remain to be established. Methods and Materials: The study group comprised 101 consecutive patients with locally advanced rectal cancer who received preoperative CRT with oral uracil/tegafur (UFT) or S-1. We evaluated histologic findings on hematoxylin and eosin (H and E) staining and immunohistochemical expressions of Ki67, p53, p21, and apoptosis in biopsy specimens obtained before CRT and 7 days after starting CRT. These findings were contrasted with the histologic response and the degree of tumor shrinkage. Results: In biopsy specimens obtained before CRT, histologic marked regression according to the Japanese Classification of Colorectal Carcinoma (JCCC) criteria and the degree of tumor shrinkage on barium enema examination (BE) were significantly greater in patients with p21-positive tumors than in those with p21-negative tumors (P=.04 and P<.01, respectively). In biopsy specimens obtained 7 days after starting CRT, pathologic complete response, histologic marked regression according to both the tumor regression criteria and JCCC criteria, and T downstaging were significantly greater in patients with apoptosis-positive and p21-positive tumors than in those with apoptosis-negative (P<.01, P=.02, P=.01, and P<.01, respectively) or p21-negative tumors (P=.03, P<.01, P<.01, and P=.02, respectively). The degree of tumor shrinkage on both BE as well as MRI was significantly greater in patients with apoptosis-positive and with p21-positive tumors than in those with apoptosis-negative or p21-negative tumors, respectively.
Histologic changes in H and E-stained biopsy specimens 7 days after starting CRT significantly correlated with pathologic complete response and marked regression on both JCCC and tumor

  12. Methods to compute reliabilities for genomic predictions of feed intake

    USDA-ARS's Scientific Manuscript database

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  13. Yes, one can obtain better quality structures from routine X-ray data collection.

    PubMed

    Sanjuan-Szklarz, W Fabiola; Hoser, Anna A; Gutmann, Matthias; Madsen, Anders Østergaard; Woźniak, Krzysztof

    2016-01-01

    Single-crystal X-ray diffraction structural results for benzidine dihydrochloride, hydrated and protonated N,N,N,N-peri(dimethylamino)naphthalene chloride, triptycene, dichlorodimethyltriptycene and decamethylferrocene have been analysed. A critical discussion of the dependence of structural and thermal parameters on resolution for these compounds is presented. Results of refinements against X-ray data, cut off to different resolutions from the high-resolution data files, are compared to structural models derived from neutron diffraction experiments. The Independent Atom Model (IAM) and the Transferable Aspherical Atom Model (TAAM) are tested. The average differences between the X-ray and neutron structural parameters (with the exception of valence angles defined by H atoms) decrease with the increasing 2θmax angle. The scale of differences between X-ray and neutron geometrical parameters can be significantly reduced when data are collected to higher 2θmax diffraction angles than commonly used (for Mo Kα, 2θmax > 65°). The final structural and thermal parameters obtained for the studied compounds using TAAM refinement are in better agreement with the neutron values than the IAM results for all resolutions and all compounds. By using TAAM, it is still possible to obtain accurate results even from low-resolution X-ray data. This is particularly important as TAAM is easy to apply and can routinely be used to improve the quality of structural investigations [Dominiak (2015). LSDB from UBDB. University of Buffalo, USA]. We can recommend that, in order to obtain more adequate (more accurate and precise) structural and displacement parameters during IAM refinement, data should be collected up to larger diffraction angles: for Mo Kα radiation, at least to 2θmax = 65° (sin θmax/λ ≥ 0.75 Å(-1)). The TAAM approach is a very good option to obtain more adequate results even using data collected to lower 2θmax angles. Also
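As a quick arithmetic check of the quoted cutoff (standard Bragg geometry, not taken from the record): for Mo Kα radiation λ ≈ 0.7107 Å, so a 2θmax of 65° corresponds to

```latex
\frac{\sin\theta_{\max}}{\lambda}
  = \frac{\sin 32.5^{\circ}}{0.7107\ \text{\AA}}
  \approx \frac{0.537}{0.7107}\ \text{\AA}^{-1}
  \approx 0.76\ \text{\AA}^{-1}
```

which is consistent with the ≈0.75 Å⁻¹ resolution figure cited alongside it.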

  14. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, which is related to the activities of the CERGOP Study Group "Geodynamics of the Balkan Peninsula", presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components ξ, η of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components ξ, η of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods are not capable of giving accurate and reliable results.
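The quantities in this abstract are tied together by the standard astrogeodetic relations (textbook geodesy, not spelled out in the record): the deflection components come from comparing astronomical (Φ, Λ) with geodetic (φ, λ) coordinates, and the undulation change follows by astronomical leveling along the line between stations A and B:

```latex
\xi = \Phi - \varphi, \qquad
\eta = (\Lambda - \lambda)\cos\varphi, \qquad
\Delta N = -\int_{A}^{B} \left(\xi\cos\alpha + \eta\sin\alpha\right)\, ds
```

where α is the azimuth of the line and s the distance along it; this is why millimeter-level ΔN requires the ~0.01 arcsec accuracy in Φ and Λ that the method targets.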

  15. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.

  16. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  17. Accurate monoenergetic electron parameters of laser wakefield in a bubble model

    NASA Astrophysics Data System (ADS)

    Raheli, A.; Rahmatallahpur, S. H.

    2012-11-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroid cavity model is more consistent than the previous spherical and ellipsoidal models, and it explains the mono-energetic electron trajectory more accurately, especially in the relativistic region. As a result, the quasi-mono-energetic electron output beam produced by the laser-plasma interaction can be more appropriately described with this model.

  18. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  19. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  20. Validity and reliability of intraoral scanners compared to conventional gypsum models measurements: a systematic review.

    PubMed

    Aragón, Mônica L C; Pontes, Luana F; Bichara, Lívia M; Flores-Mir, Carlos; Normando, David

    2016-08-01

    The development of 3D technology and the trend toward increasing use of intraoral scanners in dental office routine lead to the need for comparisons with conventional techniques. To determine whether intra- and inter-arch measurements from digital dental models acquired by an intraoral scanner are as reliable and valid as the similar measurements achieved from dental models obtained through conventional intraoral impressions. An unrestricted electronic search of seven databases was conducted up to February 2015. Studies that focused on the accuracy and reliability of images obtained from intraoral scanners compared to images obtained from conventional impressions were included. After study selection, the QUADAS risk-of-bias assessment tool for diagnostic studies was used to assess the risk of bias (RoB) among the included studies. Four articles were included in the qualitative synthesis. The scanners evaluated were OrthoProof, Lava, iOC intraoral, Lava COS, iTero and D250. These studies evaluated the reliability of tooth widths, Bolton ratio measurements, and image superimposition. Two studies were classified as having low RoB; one had moderate RoB and the remaining one had high RoB. Only one study evaluated the time required to complete clinical procedures and patients' opinions of the procedure. Patients reported feeling more comfortable with the conventional dental impression method. Associated costs were not considered in any of the included studies. Inter- and intra-arch measurements from digital models produced from intraoral scans appeared to be reliable and accurate in comparison to those from conventional impressions. This assessment only applies to the intraoral scanner models considered in the included studies. Digital models produced by intraoral scans eliminate the need for impression materials; however, currently, more time is needed to capture the digital images. PROSPERO (CRD42014009702). None. © The Author 2016. 
Published by Oxford University Press on behalf of the European

  1. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important consideration for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is
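For context on the Weibull slope β mentioned above (standard reliability theory, not specific to this record), the two-parameter Weibull hazard rate is

```latex
h(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}
```

so β = 1 reduces to a constant (random) failure rate, consistent with randomly distributed weak bits, while β > 1 gives the increasing failure rate attributed to the main breakdown population.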

  2. Using stereophotogrammetric technology for obtaining intraoral digital impressions of implants.

    PubMed

    Pradíes, Guillermo; Ferreiroa, Alberto; Özcan, Mutlu; Giménez, Beatriz; Martínez-Rus, Francisco

    2014-04-01

    The procedure for making impressions of multiple implants continues to be a challenge, despite the various techniques proposed to date. The authors' objective in this case report is to describe a novel digital impression method for multiple implants involving the use of stereophotogrammetric technology. The authors present three cases of patients who had multiple implants in which the impressions were obtained with this technology. Initially, a stereo camera with an infrared flash detects the position of special flag abutments screwed into the implants. This process is based on registering the x, y and z coordinates of each implant and the distances between them. This information is converted into a stereolithographic (STL) file. To add the soft-tissue information, the user must obtain another STL file by using an intraoral or extraoral scanner. In the first case presented, this information was acquired from the plaster model with an extraoral scanner; in the second case, from a Digital Imaging and Communication in Medicine (DICOM) file of the plaster model obtained with cone-beam computed tomography; and in the third case, through an intraoral digital impression with a confocal scanner. In the three cases, the frameworks manufactured from this technique showed a correct clinical passive fit. At follow-up appointments held six, 12 and 24 months after insertion of the prosthesis, no complications were reported. Stereophotogrammetric technology is a viable, accurate and easy technique for making multiple implant impressions. Clinicians can use stereophotogrammetric technology to acquire reliable digital master models as a first step in producing frameworks with a correct passive fit.

  3. A Direct Method for Obtaining Approximate Standard Error and Confidence Interval of Maximal Reliability for Composites with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2006-01-01

    Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…

  4. Reliability reporting across studies using the Buss Durkee Hostility Inventory.

    PubMed

    Vassar, Matt; Hale, William

    2009-01-01

    Empirical research on anger and hostility has pervaded the academic literature for more than 50 years. Accurate measurement of anger/hostility and subsequent interpretation of results requires that the instruments yield strong psychometric properties. For consistent measurement, reliability estimates must be calculated with each administration, because changes in sample characteristics may alter the scale's ability to generate reliable scores. Therefore, the present study was designed to address reliability reporting practices for a widely used anger assessment, the Buss Durkee Hostility Inventory (BDHI). Of the 250 published articles reviewed, 11.2% calculated and presented reliability estimates for the data at hand, 6.8% cited estimates from a previous study, and 77.1% made no mention of score reliability. Mean alpha estimates of scores for BDHI subscales generally fell below acceptable standards. Additionally, no detectable pattern was found between reporting practices and publication year or journal prestige. Areas for future research are also discussed.
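The alpha estimates discussed above are coefficient (Cronbach's) alpha; a minimal stdlib sketch of its computation (the item scores are synthetic, not from the study):

```python
import statistics

def cronbach_alpha(items):
    """Coefficient alpha for a multi-item scale.
    `items` is a list of per-item score lists, one score per respondent;
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_variances = sum(statistics.variance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - item_variances / statistics.variance(totals))

# Three hypothetical items answered by four respondents:
alpha = cronbach_alpha([
    [3, 4, 5, 4],
    [3, 5, 5, 4],
    [2, 4, 4, 3],
])
# alpha ≈ 0.964 for this toy data
```

Because alpha depends on the variances in the sample at hand, it has to be recomputed for each administration, which is exactly the reporting practice the study finds lacking.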

  5. Spatial Correlations in Natural Scenes Modulate Response Reliability in Mouse Visual Cortex

    PubMed Central

    Rikhye, Rajeev V.

    2015-01-01

    Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). Certain stimuli can suppress this intertrial variability to increase the reliability of neuronal responses. In particular, responses to natural scenes, which have broadband spatiotemporal statistics, are more reliable than responses to stimuli such as gratings. However, very little is known about which stimulus statistics modulate reliable coding and how this occurs at the neural ensemble level. Here, we sought to elucidate the role that spatial correlations in natural scenes play in reliable coding. We developed a novel noise-masking method to systematically alter spatial correlations in natural movies, without altering their edge structure. Using high-speed two-photon calcium imaging in vivo, we found that responses in mouse V1 were much less reliable at both the single neuron and population level when spatial correlations were removed from the image. This change in reliability was due to a reorganization of between-neuron correlations. Strongly correlated neurons formed ensembles that reliably and accurately encoded visual stimuli, whereas reducing spatial correlations reduced the activation of these ensembles, leading to an unreliable code. Together with an ensemble-specific normalization model, these results suggest that the coordinated activation of specific subsets of neurons underlies the reliable coding of natural scenes. SIGNIFICANCE STATEMENT The natural environment is rich with information. To process this information with high fidelity, V1 neurons have to be robust to noise and, consequentially, must generate responses that are reliable from trial to trial. While several studies have hinted that both stimulus attributes and population coding may reduce noise, the details remain unclear. Specifically, what features of natural scenes are important and how do they modulate reliability? This study is the first to investigate the role of spatial

  6. Reliability Analysis of the Adult Mentoring Assessment for Extension Professionals

    ERIC Educational Resources Information Center

    Denny, Marina D'Abreau

    2017-01-01

    The Adult Mentoring Assessment for Extension Professionals will help mentors develop an accurate profile of their mentoring style with adult learners and identify areas of proficiency and deficiency based on six constructs--relationship, information, facilitation, confrontation, modeling, and vision. This article reports on the reliability of this…

  7. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple views' calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method by corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on human brain. PMID:24803954

  8. Reliability of provocative tests of motion sickness susceptibility

    NASA Technical Reports Server (NTRS)

    Calkins, D. S.; Reschke, M. F.; Kennedy, R. S.; Dunlop, W. P.

    1987-01-01

    Test-retest reliability values were derived from motion sickness susceptibility scores obtained from two successive exposures to each of three tests: (1) Coriolis sickness sensitivity test; (2) staircase velocity movement test; and (3) parabolic flight static chair test. The reliability of the three tests ranged from 0.70 to 0.88. Normalizing values from predictors with skewed distributions improved the reliability.
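Test-retest reliabilities like the 0.70 to 0.88 quoted above are Pearson correlations between paired scores from the two exposures; a minimal sketch (the scores below are invented):

```python
def pearson_r(x, y):
    """Pearson correlation between paired scores from two administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical susceptibility scores from two exposures to the same test:
r = pearson_r([12, 18, 7, 25, 14], [14, 17, 9, 23, 15])
```

Applying a normalizing transform (e.g. a log transform for right-skewed scores) before computing r is what the abstract means by "normalizing values from predictors with skewed distributions".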

  9. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  10. Design and experimentation of an empirical multistructure framework for accurate, sharp and reliable hydrological ensembles

    NASA Astrophysics Data System (ADS)

    Seiller, G.; Anctil, F.; Roy, R.

    2017-09-01

    This paper outlines the design and experimentation of an Empirical Multistructure Framework (EMF) for lumped conceptual hydrological modeling. This concept is inspired from modular frameworks, empirical model development, and multimodel applications, and encompasses the overproduce and select paradigm. The EMF concept aims to reduce subjectivity in conceptual hydrological modeling practice and includes model selection in the optimisation steps, reducing initial assumptions on the prior perception of the dominant rainfall-runoff transformation processes. EMF generates thousands of new modeling options from, for now, twelve parent models that share their functional components and parameters. Optimisation resorts to ensemble calibration, ranking and selection of individual child time series based on optimal bias and reliability trade-offs, as well as accuracy and sharpness improvement of the ensemble. Results on 37 snow-dominated Canadian catchments and 20 climatically-diversified American catchments reveal the excellent potential of the EMF in generating new individual model alternatives, with high respective performance values, that may be pooled efficiently into ensembles of seven to sixty constitutive members, with low bias and high accuracy, sharpness, and reliability. A group of 1446 new models is highlighted to offer good potential on other catchments or applications, based on their individual and collective interests. An analysis of the preferred functional components reveals the importance of the production and total flow elements. Overall, results from this research confirm the added value of ensemble and flexible approaches for hydrological applications, especially in uncertain contexts, and open up new modeling possibilities.

  11. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  12. High sample throughput genotyping for estimating C-lineage introgression in the dark honeybee: an accurate and cost-effective SNP-based tool.

    PubMed

    Henriques, Dora; Browne, Keith A; Barnett, Mark W; Parejo, Melanie; Kryger, Per; Freeman, Tom C; Muñoz, Irene; Garnery, Lionel; Highet, Fiona; Jonhston, J Spencer; McCormack, Grace P; Pinto, M Alice

    2018-06-04

    The natural distribution of the honeybee (Apis mellifera L.) has been changed by humans in recent decades to such an extent that the formerly widest-spread European subspecies, Apis mellifera mellifera, is threatened by extinction through introgression from highly divergent commercial strains in large tracts of its range. Conservation efforts for A. m. mellifera are underway in multiple European countries requiring reliable and cost-efficient molecular tools to identify purebred colonies. Here, we developed four ancestry-informative SNP assays for high sample throughput genotyping using the iPLEX Mass Array system. Our customized assays were tested on DNA from individual and pooled, haploid and diploid honeybee samples extracted from different tissues using a diverse range of protocols. The assays had a high genotyping success rate and yielded accurate genotypes. Performance assessed against whole-genome data showed that individual assays behaved well, although the most accurate introgression estimates were obtained for the four assays combined (117 SNPs). The best compromise between accuracy and genotyping costs was achieved when combining two assays (62 SNPs). We provide a ready-to-use cost-effective tool for accurate molecular identification and estimation of introgression levels to more effectively monitor and manage A. m. mellifera conservatories.

  13. The Reliability of Difference Scores in Populations and Samples

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.

    2009-01-01

    This study was an investigation of the relation between the reliability of difference scores, considered as a parameter characterizing a population of examinees, and the reliability estimates obtained from random samples from the population. The parameters in familiar equations for the reliability of difference scores were redefined in such a way…
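
    The familiar equations referred to above can be illustrated with the classical formula for the reliability of a difference score D = X - Y, which follows from the component reliabilities, standard deviations, and their intercorrelation. A sketch of that textbook formula (not Zimmerman's redefined parameters):

```python
def difference_score_reliability(sx, sy, rxx, ryy, rxy):
    """Classical reliability of a difference score D = X - Y.

    sx, sy  : standard deviations of X and Y
    rxx, ryy: reliabilities of X and Y
    rxy     : correlation between X and Y
    """
    num = sx**2 * rxx + sy**2 * ryy - 2 * sx * sy * rxy
    den = sx**2 + sy**2 - 2 * sx * sy * rxy
    if den == 0:
        raise ValueError("difference score has zero variance")
    return num / den
```

    With equal variances, component reliabilities of 0.8, and a correlation of 0.5 between X and Y, the difference score's reliability drops to 0.6, illustrating why difference scores are typically less reliable than their components.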

  14. Reliability of widefield nailfold capillaroscopy and video capillaroscopy in the assessment of patients with Raynaud’s phenomenon.

    PubMed

    Sekiyama, Juliana Y; Camargo, Cintia Z; Andrade, Luís Eduardo C; Kayser, Cristiane

    2013-11-01

    To analyze the diagnostic performance and reliability of different parameters evaluated by widefield nailfold capillaroscopy (NFC) with those obtained by video capillaroscopy in patients with Raynaud’s phenomenon (RP). Two hundred fifty-two individuals were assessed, including 101 systemic sclerosis (SSc; scleroderma) patients, 61 patients with undifferentiated connective tissue disease, 37 patients with primary RP, and 53 controls. Widefield NFC was performed using a stereomicroscope under 10–25× magnification and direct measurement of all parameters. Video capillaroscopy was performed under 200× magnification, with the acquisition of 32 images per individual (4 fields per finger in 8 fingers). The following parameters were analyzed in 8 fingers of the hands (excluding thumbs) by both methods: number of capillaries/mm, number of enlarged and giant capillaries, microhemorrhages, and avascular score. Intra- and interobserver reliability was evaluated by performing both examinations in 20 individuals on 2 different days and by 2 long-term experienced observers. There was a significant correlation (P < 0.001) between widefield NFC and video capillaroscopy in the comparison of all parameters. Kappa values and intraclass correlation coefficient analysis showed excellent intra- and interobserver reproducibility for all parameters evaluated by widefield NFC and video capillaroscopy. Bland-Altman analysis showed high agreement of all parameters evaluated in both methods. According to receiver operating characteristic curve analysis, both methods showed a similar performance in discriminating SSc patients from controls. Widefield NFC and video capillaroscopy are reliable and accurate methods and can be used equally for assessing peripheral microangiopathy in RP and SSc patients. Nonetheless, the high reliability obtained may not be similar for less experienced examiners.

  15. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for a distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization, and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional power generation is solved. A reliability assessment of the distribution grid can thus be carried out quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices. A simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
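
    The Monte Carlo benchmark mentioned above can be sketched directly from the distributions named in the abstract: Weibull wind speed, Beta irradiance, and an hourly load balance. All capacities, the load level, and the power curve below are illustrative assumptions, not values from the paper:

```python
import random

def simulate_lolp_eens(n_hours=10000, seed=1):
    """Monte Carlo estimate of LOLP and EENS (MWh) for a toy bus with one
    wind farm (Weibull wind speed), one solar park (Beta irradiance), one
    conventional unit, and a fixed load. All parameters are illustrative."""
    rng = random.Random(seed)
    wind_cap, solar_cap, conv_cap, load = 20.0, 10.0, 40.0, 60.0  # MW
    short_hours, unserved = 0, 0.0
    for _ in range(n_hours):
        v = rng.weibullvariate(8.0, 2.0)  # wind speed: scale 8 m/s, shape 2
        # simple piecewise power curve: cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s
        if v < 3.0 or v > 25.0:
            p_wind = 0.0
        elif v < 12.0:
            p_wind = wind_cap * (v - 3.0) / (12.0 - 3.0)
        else:
            p_wind = wind_cap
        p_solar = solar_cap * rng.betavariate(2.0, 2.0)  # normalized irradiance
        supply = p_wind + p_solar + conv_cap
        if supply < load:
            short_hours += 1
            unserved += load - supply
    return short_hours / n_hours, unserved  # LOLP, EENS over the horizon
```

    Tightening these estimates requires sampling many more hours, which is exactly the cost the paper's analytical method avoids.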

  16. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to evaluate the method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  17. Reliable sagittal plane kinematic gait assessments are feasible using low-cost webcam technology.

    PubMed

    Saner, Robert J; Washabaugh, Edward P; Krishnan, Chandramouli

    2017-07-01

    Three-dimensional (3-D) motion capture systems are commonly used for gait analysis because they provide reliable and accurate measurements. However, the downside of this approach is that it is expensive and requires technical expertise, thus making it less feasible in the clinic. To address this limitation, we recently developed and validated (using a high-precision walking robot) a low-cost, two-dimensional (2-D) real-time motion tracking approach using a simple webcam and LabVIEW Vision Assistant. The purpose of this study was to establish the repeatability and minimal detectable change values of hip and knee sagittal plane gait kinematics recorded using this system. Twenty-one healthy subjects underwent two kinematic assessments while walking on a treadmill at a range of gait velocities. Intraclass correlation coefficients (ICC) and minimal detectable change (MDC) values were calculated for commonly used hip and knee kinematic parameters to demonstrate the reliability of the system. Additionally, Bland-Altman plots were generated to examine the agreement between the measurements recorded on two different days. The system demonstrated good to excellent reliability (ICC>0.75) for all gait parameters tested in this study. The MDC values were typically low (<5°) for most of the parameters. The Bland-Altman plots indicated that there was no systematic error or bias in kinematic measurements and showed good agreement between measurements obtained on two different days. These results indicate that kinematic gait assessments using webcam technology can be reliably used for clinical and research purposes. Copyright © 2017 Elsevier B.V. All rights reserved.
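
    The MDC values reported above follow from the ICC through the standard error of measurement (SEM). A minimal sketch of that standard relationship (the study's exact computation is not specified here):

```python
import math

def minimal_detectable_change(sd, icc, confidence_z=1.96):
    """MDC for a test-retest design: MDC = z * SEM * sqrt(2), with
    SEM = SD * sqrt(1 - ICC). sd is the between-subject standard
    deviation of the measure; icc its test-retest reliability."""
    if not 0.0 <= icc <= 1.0:
        raise ValueError("ICC must be in [0, 1]")
    sem = sd * math.sqrt(1.0 - icc)
    return confidence_z * sem * math.sqrt(2.0)
```

    For example, a hypothetical joint-angle SD of 3° with ICC = 0.91 gives an MDC of about 2.5°, in the same low (<5°) range the study reports.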

  18. How Reliable is the Acetabular Cup Position Assessment from Routine Radiographs?

    PubMed Central

    Carvajal Alba, Jaime A.; Vincent, Heather K.; Sodhi, Jagdeep S.; Latta, Loren L.; Parvataneni, Hari K.

    2017-01-01

    Background: Cup position is crucial for optimal outcomes in total hip arthroplasty. Radiographic assessment of component position is routinely performed in the early postoperative period. Aims: The aims of this study were to determine in a controlled environment whether routine radiographic methods accurately and reliably assess the acetabular cup position and to assess whether there is a statistical difference related to the rater’s level of training. Methods: A pelvic model was mounted in a spatial frame. An acetabular cup was fixed in different degrees of version and inclination. Standardized radiographs were obtained. Ten observers, including five fellowship-trained orthopaedic surgeons and five orthopaedic residents, performed a blind assessment of cup position. Inclination was assessed from anteroposterior radiographs of the pelvis and version from cross-table lateral radiographs of the hip. Results: The radiographic methods used proved to be imprecise, especially when the cup was positioned at the extremes of version and inclination. Excellent inter-observer reliability (intraclass correlation coefficient > 0.9) was evidenced. There were no differences related to the level of training of the raters. Conclusions: These widely used radiographic methods should be interpreted cautiously, and computed tomography should be utilized in cases when further intervention is contemplated. PMID:28852355

  19. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS-based accurate time system was developed, which includes a GPS receiver, a 1 MC frequency source, and a self-made clock system. The one-pulse-per-second signal from GPS synchronizes the clock system, and the information can be collected by a computer automatically. This system makes it possible to dispense with a dedicated time keeper.

  20. The Americleft Speech Project: A Training and Reliability Study.

    PubMed

    Chapman, Kathy L; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2016-01-01

    To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. The participants were speech-language pathologists from the Americleft Speech Project. In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function.
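
    The kappa artifact noted above, low kappa despite high agreement when the reliability sample lacks variability, is easy to reproduce with unweighted Cohen's kappa. A self-contained sketch (hypothetical ratings, not the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters rating the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    cats = sorted(set(ratings_a) | set(ratings_b))
    # observed agreement
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    if p_exp == 1.0:
        return float("nan")  # no variability at all: kappa is undefined
    return (p_obs - p_exp) / (1.0 - p_exp)
```

    Twenty ratings with 90% raw agreement but only two non-"normal" codes yield a negative kappa, mirroring the artifact the authors describe; percent agreement tells the other half of the story.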

  1. Reasons to Doubt the Reliability of Eyewitness Memory: Commentary on Wixted, Mickes, and Fisher (2018).

    PubMed

    Wade, Kimberley A; Nash, Robert A; Lindsay, D Stephen

    2018-05-01

    Wixted, Mickes, and Fisher (this issue) take issue with the common trope that eyewitness memory is inherently unreliable. They draw on a large body of mock-crime research and a small number of field studies, which indicate that high-confidence eyewitness reports are usually accurate, at least when memory is uncontaminated and suitable interviewing procedures are used. We agree with the thrust of Wixted et al.'s argument and welcome their invitation to confront the mass underselling of eyewitnesses' potential reliability. Nevertheless, we argue that there is a comparable risk of overselling eyewitnesses' reliability. Wixted et al.'s reasoning implies that near-pristine conditions or uncontaminated memories are normative, but there are at least two good reasons to doubt this. First, psychological science does not yet offer a good understanding of how often and when eyewitness interviews might deviate from best practice in ways that compromise the accuracy of witnesses' reports. Second, witnesses may frequently be exposed to preinterview influences that could corrupt reports obtained in best-practice interviews.

  2. Reliability of reference distances used in photogrammetry.

    PubMed

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

    To determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 +/- 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three reference distances for the frontal view were established. Standardized photographs were taken, and all the parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated-measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values measured were statistically different from the direct values except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects. The indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained in the indirect values of the N-Sn parameter for female subjects and the Sn-Me parameter for male subjects according to the Sa-Sba reference distance. For frontal measurements, the indirect values were statistically different from the direct values in both sexes except for one in male subjects. The indirect values measured were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.
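
    The Bland-Altman analysis used above compares direct and indirect measurements through the bias and 95% limits of agreement of their paired differences. A minimal sketch of that standard computation:

```python
def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired
    measurements a (e.g. direct) and b (e.g. indirect).
    Returns (bias, lower_limit, upper_limit)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    if n < 2:
        raise ValueError("need at least two pairs")
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```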

  3. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize the profiles of edges in an LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. For precise estimation of the PSF, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and then the Hough transform is utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating the PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
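
    One standard way to obtain a 1-D PSF kernel from a straight edge, in the spirit of the profile-based estimation discussed above, is to differentiate the edge spread function sampled across the line: the result is the line spread function, a 1-D projection of the PSF. The sketch below assumes a clean monotonic edge profile and omits the paper's Hough-based line selection and 2-D reconstruction:

```python
def one_d_psf_from_edge(profile):
    """Estimate a normalized 1-D PSF kernel from intensities sampled
    across a straight edge: the derivative of the edge spread function
    (ESF) is the line spread function (LSF), a 1-D projection of the
    PSF. Assumes a clean, monotonic edge profile; real LR images need
    the robust line selection described in the paper."""
    lsf = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    total = sum(lsf)
    if total == 0:
        raise ValueError("flat profile: no edge present")
    return [v / total for v in lsf]
```

    For an ideal step edge the recovered kernel is a unit impulse; blur spreads it over neighboring samples.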

  4. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.

  5. Accuracy and reliability of 3D stereophotogrammetry: A comparison to direct anthropometry and 2D photogrammetry.

    PubMed

    Dindaroğlu, Furkan; Kutlu, Pınar; Duran, Gökhan Serhat; Görgülü, Serkan; Aslan, Erhan

    2016-05-01

    To evaluate the accuracy of three-dimensional (3D) stereophotogrammetry by comparing it with the direct anthropometry and digital photogrammetry methods. The reliability of 3D stereophotogrammetry was also examined. Six profile and four frontal parameters were directly measured on the faces of 80 participants. The same measurements were repeated using two-dimensional (2D) photogrammetry and 3D stereophotogrammetry (3dMDflex System, 3dMD, Atlanta, Ga) to obtain images of the subjects. Another observer made the same measurements for images obtained with 3D stereophotogrammetry, and interobserver reproducibility was evaluated for 3D images. Both observers remeasured the 3D images 1 month later, and intraobserver reproducibility was evaluated. Statistical analysis was conducted using the paired samples t-test, intraclass correlation coefficient, and Bland-Altman limits of agreement. The highest mean difference was 0.30 mm between direct measurement and photogrammetry, 0.21 mm between direct measurement and 3D stereophotogrammetry, and 0.5 mm between photogrammetry and 3D stereophotogrammetry. The lowest agreement value was 0.965 in the Sn-Pro parameter between the photogrammetry and 3D stereophotogrammetry methods. Agreement between the two observers varied from 0.90 (Ch-Ch) to 0.99 (Sn-Me) in linear measurements. For intraobserver agreement, the highest difference between means was 0.33 for observer 1 and 1.42 mm for observer 2. Measurements obtained using 3D stereophotogrammetry indicate that it may be an accurate and reliable imaging method for use in orthodontics.

  6. Advancing methods for reliably assessing motivational interviewing fidelity using the Motivational Interviewing Skills Code

    PubMed Central

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.

    2014-01-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192

  7. Reliability based fatigue design and maintenance procedures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1977-01-01

    A stochastic model has been developed to describe the probability structure of the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the probability of a crack of a certain length at a given location after a certain number of cycles or amount of time. Quantitative estimation with the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
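
    A varying hazard rate of the kind assumed above can be illustrated with a Weibull (power-law) hazard, for which the probability of a crack by a given number of cycles has a closed form. This is a generic stand-in for exposition, not necessarily the paper's exact model:

```python
import math

def crack_probability(cycles, eta, beta):
    """P(crack initiated by `cycles`) under a Weibull (power-law) hazard
    h(t) = (beta/eta) * (t/eta)**(beta - 1); beta > 1 gives the
    increasing hazard typical of fatigue. The cumulative probability is
    F(t) = 1 - exp(-(t/eta)**beta). Illustrative stand-in only."""
    if cycles < 0 or eta <= 0 or beta <= 0:
        raise ValueError("invalid parameters")
    return 1.0 - math.exp(-((cycles / eta) ** beta))
```

    Such a closed form is what makes reliability-based inspection scheduling tractable: the crack probability between inspections can be evaluated directly.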

  8. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.

  9. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    NASA Astrophysics Data System (ADS)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Pluymers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method after ensuring numerical models represent realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method of deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, where geometry shape variables are incorporated, by carrying out morphing of the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Hence, simulations performed using this updated model with an accurate geometry, will therefore yield more reliable results.
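
    The MAC matrix maximized in the optimization above compares two sets of mode shapes column by column. A minimal sketch of the standard real-valued MAC computation (complex mode shapes would additionally need conjugation):

```python
import numpy as np

def mac_matrix(phi_test, phi_fe):
    """Modal Assurance Criterion between measured mode shapes (columns
    of phi_test) and FE mode shapes (columns of phi_fe):
    MAC(i, j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i) (phi_j^T phi_j)).
    Real-valued mode shapes are assumed."""
    num = np.abs(phi_test.T @ phi_fe) ** 2
    den = np.outer(np.sum(phi_test * phi_test, axis=0),
                   np.sum(phi_fe * phi_fe, axis=0))
    return num / den
```

    MAC values near 1 on the diagonal and near 0 off it indicate a good test/FE match; the criterion is insensitive to mode-shape scaling, which is why it pairs well with geometry-morphing updates.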

  10. Interrater reliability and accuracy of clinicians and trained research assistants performing prospective data collection in emergency department patients with potential acute coronary syndrome.

    PubMed

    Cruz, Carlos O; Meshberg, Emily B; Shofer, Frances S; McCusker, Christine M; Chang, Anna Marie; Hollander, Judd E

    2009-07-01

    Clinical research requires high-quality data collection. Data collected at the emergency department evaluation is generally considered more precise than data collected through chart abstraction but is cumbersome and time consuming. We test whether trained research assistants without a medical background can obtain clinical research data as accurately as physicians. We hypothesize that they would be at least as accurate because they would not be distracted by clinical requirements. We conducted a prospective comparative study of 33 trained research assistants and 39 physicians (35 residents) to assess interrater reliability with respect to guideline-recommended clinical research data. Immediately after the research assistant and clinician evaluation, the data were compared by a tiebreaker third person who asked the patient to choose one of the 2 answers as the correct one when responses were discordant. Crude percentage agreement and interrater reliability were assessed (kappa statistic). One hundred forty-three patients were recruited (mean age 50.7 years; 47% female patients). Overall, the median agreement was 81% (interquartile range [IQR] 73% to 92%) and interrater reliability was fair (kappa value 0.36 [IQR 0.26 to 0.52]) but varied across categories of data: cardiac risk factors (median 86% [IQR 81% to 93%]; median 0.69 [IQR 0.62 to 0.83]), other cardiac history (median 93% [IQR 79% to 95%]; median 0.56 [IQR 0.29 to 0.77]), pain location (median 92% [IQR 86% to 94%]; median 0.37 [IQR 0.25 to 0.29]), radiation (median 86% [IQR 85% to 87%]; median 0.37 [IQR 0.26 to 0.42]), quality (median 85% [IQR 75% to 94%]; median 0.29 [IQR 0.23 to 0.40]), and associated symptoms (median 74% [IQR 65% to 78%]; median 0.28 [IQR 0.20 to 0.40]). When discordant information was obtained, the research assistant was more often correct (median 64% [IQR 53% to 72%]). The relatively fair interrater reliability observed in our study is consistent with previous studies evaluating

  11. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually very simple, yet practically complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it because of its morphology, its non-uniform reddening, and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a

  12. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
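
    The two-step architecture described above, a cheap model to prune candidates followed by an expensive model scoring only the survivors, can be sketched generically. The candidate set, both model callables, and the shortlist size below are caller-supplied assumptions, not the paper's API:

```python
def hierarchical_search(candidates, cheap_model, fine_model, keep=10):
    """Two-step design search in the spirit of hierarchical model
    switching: a low-cost model ranks all candidates and prunes the
    space, then an expensive nonlinear model scores only the shortlist.
    cheap_model / fine_model map a candidate to a score (higher is
    better); both are supplied by the caller."""
    shortlist = sorted(candidates, key=cheap_model, reverse=True)[:keep]
    return max(shortlist, key=fine_model)
```

    When the cheap model ranks candidates roughly like the fine model, the fine model is evaluated only `keep` times instead of once per candidate, which is the source of the reported speed-up.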

  13. Reliability of intracerebral hemorrhage classification systems: A systematic review.

    PubMed

    Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm

    2016-08-01

    Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.

  14. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from a commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate the reliability of a component in the presence of inherent surface- and/or volume-type flaws. Computes a measure of reliability by use of a finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
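The weakest-link idea behind codes like CARES can be sketched in a few lines. This is a simplified two-parameter Weibull volume-flaw illustration only, not the CARES implementation (which adds Batdorf shear-sensitive theory, surface flaws, and fitted material statistics); the function name and element data below are hypothetical:

```python
import math

def fast_fracture_failure_probability(elements, sigma_0, m):
    """Two-parameter Weibull weakest-link model: each finite element
    contributes a risk-of-rupture term V * (sigma / sigma_0)^m, and the
    component survives only if every element survives."""
    risk = sum(v * (max(s, 0.0) / sigma_0) ** m for v, s in elements)
    return 1.0 - math.exp(-risk)

# elements: (volume, max principal tensile stress) pairs, e.g. taken
# from a structural-analysis output file (values here are invented)
elements = [(0.2, 180.0), (0.5, 150.0), (0.3, 120.0)]
pf = fast_fracture_failure_probability(elements, sigma_0=300.0, m=10.0)
```

Because the survival probabilities multiply across elements, the exponent sums the per-element risk integrals; raising any element stress strictly increases the component failure probability.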

  15. Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs

    NASA Astrophysics Data System (ADS)

    Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian

    2011-02-01

    As technology scales, more and more memory cells can be placed in a die, so the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells increases. Generally, multibit error correction codes (MECCs) are effective approaches to mitigate MBUs in memories. In order to evaluate the robustness of protected memories, reliability models have been widely studied. Instead of irradiation experiments, such models can be used to quickly evaluate the reliability of memories early in the design. To build an accurate model, several situations should be considered. Firstly, when MBUs are present in memories, the errors induced by several events may overlap each other, which is more frequent than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs strongly depends on the angle of the radiation event. However, reliability models that consider both the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the present literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and the angle of the event are considered in the model, which produces a more precise analysis in the calculation of mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
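The role scrubbing plays in such MTTF models can be illustrated with a far simpler analysis than the paper's: assuming independent upsets (i.e., ignoring the MBU overlap and angle effects the paper models), a word protected by a single-error-correcting code is lost only when two or more upsets accumulate in it between scrubs. A minimal sketch with hypothetical parameters:

```python
import math

def mttf_sec_scrubbed(n_bits, n_words, lam, scrub_interval):
    """Approximate MTTF of a memory whose words carry a single-error-
    correcting code and are scrubbed every `scrub_interval` seconds.
    A word is lost when two or more independent upsets (per-bit rate
    `lam`) land in it within one scrub interval."""
    p = 1.0 - math.exp(-lam * scrub_interval)            # per-bit upset prob.
    p_word_ok = (1 - p) ** n_bits + n_bits * p * (1 - p) ** (n_bits - 1)
    p_mem_fail = 1.0 - p_word_ok ** n_words              # any word fails
    return scrub_interval / p_mem_fail                   # geometric waiting time

# e.g. 1 Mi words of 39 bits (32 data + 7 check), hourly scrubbing
mttf = mttf_sec_scrubbed(n_bits=39, n_words=2**20, lam=1e-9, scrub_interval=3600.0)
```

Halving the scrub interval roughly quadruples the per-interval survival margin (the double-upset probability scales with the interval squared), so MTTF grows as scrubbing becomes more frequent, which is the qualitative behavior such models quantify.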

  16. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
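The abstract's central claim, that evaluating the high-fidelity model only for the unreliable samples reproduces exactly the estimate obtained by evaluating it everywhere, can be demonstrated with a toy sketch. The models, error bound, and threshold below are invented for illustration and are not the paper's adjoint-based estimates:

```python
import math
import random

def event_probability(samples, surrogate, high_fidelity, err_bound, threshold):
    """Estimate P[f(x) > threshold], using the surrogate wherever its
    error bound cannot flip the event indicator (a "reliable" sample)
    and falling back to the high-fidelity model otherwise."""
    hits, hf_calls = 0, 0
    for x in samples:
        s = surrogate(x)
        if abs(s - threshold) > err_bound:       # reliable sample
            hits += s > threshold
        else:                                    # unreliable: re-evaluate
            hits += high_fidelity(x) > threshold
            hf_calls += 1
    return hits / len(samples), hf_calls

f  = lambda x: x * x                             # stand-in "expensive" model
fs = lambda x: x * x + 0.05 * math.sin(10 * x)   # surrogate, |fs - f| <= 0.05

random.seed(0)
xs = [random.random() for _ in range(10000)]
p_mixed, n_hf = event_probability(xs, fs, f, err_bound=0.05, threshold=0.5)
p_exact = sum(f(x) > 0.5 for x in xs) / len(xs)  # evaluate f at every sample
```

Whenever the surrogate value sits farther from the threshold than the error bound, the true model is guaranteed to fall on the same side, so the mixed estimate matches the all-high-fidelity estimate while making far fewer expensive calls.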

  17. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  18. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.

  19. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating the parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. The approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive the feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model combined with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than L2-norm minimization.

  20. Remote Sensing Applications with High Reliability in Changjiang Water Resource Management

    NASA Astrophysics Data System (ADS)

    Ma, L.; Gao, S.; Yang, A.

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with both high reliability and high accuracy at large scale, especially applications that use automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85 % correctness in Water Resource Management from the perspectives of photogrammetry and expert knowledge. The system comprises spatial positioning techniques from the view of photogrammetry, feature interpretation techniques from the view of expert knowledge, and rationality analysis techniques from the view of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.

  1. The Americleft Speech Project: A Training and Reliability Study

    PubMed Central

    Chapman, Kathy L.; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2017-01-01

    Objective To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. Design The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech–Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. Participants The participants were speech-language pathologists from the Americleft Speech Project. Results In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. Conclusion The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function. PMID:25531738
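Weighted kappa, the statistic reported in both studies above, credits partial agreement between ordered rating categories rather than counting only exact matches. A minimal sketch with linear weights (the function and ratings below are illustrative, not the CAPS-A scoring itself):

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Cohen's weighted kappa with linear weights w_ij = 1 - |i - j|/(k - 1),
    so near-miss ratings on an ordinal scale earn partial credit."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    obs = [[0.0] * k for _ in range(k)]        # observed joint proportions
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    pa = [sum(row) for row in obs]             # rater A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: 1.0 - abs(i - j) / (k - 1)
    p_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    p_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return (p_obs - p_exp) / (1.0 - p_exp)

ratings_a = [0, 1, 2, 0, 1, 2, 1, 0]
ratings_b = [0, 1, 2, 0, 2, 2, 1, 0]           # one near-miss disagreement
kappa = weighted_kappa(ratings_a, ratings_b, [0, 1, 2])
```

Note the chance-agreement term in the denominator: when raters use only a narrow band of the scale, expected agreement is high and kappa drops even at high percent agreement, which is exactly the "insufficient variability" artifact the abstract describes for Study 2.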

  2. Neural Networks Based Approach to Enhance Space Hardware Reliability

    NASA Technical Reports Server (NTRS)

    Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.

    2011-01-01

    This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.

  3. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  4. Theory of reliable systems. [systems analysis and design

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1973-01-01

    The analysis and design of reliable systems are discussed. The attributes of system reliability studied are fault tolerance, diagnosability, and reconfigurability. Objectives of the study include: to determine properties of system structure that are conducive to a particular attribute; to determine methods for obtaining reliable realizations of a given system; and to determine how properties of system behavior relate to the complexity of fault tolerant realizations. A list of 34 references is included.

  5. Accurate quasiparticle calculation of x-ray photoelectron spectra of solids

    NASA Astrophysics Data System (ADS)

    Aoki, Tsubasa; Ohno, Kaoru

    2018-05-01

    It has been highly desired to provide an accurate and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final state screening effect on a core hole in high resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot be simply used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. Calculated results show that not only the valence electrons but also the core electrons have an important contribution to the RE, and the GWA has a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The resulting C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within a 1 eV absolute error range. The present self-screening corrected GW approach has the capability to achieve highly accurate prediction of CEBEs without any empirical parameter for band-gapped crystals, and provides a more reliable theoretical approach than the conventional ΔSCF-DFT method.

  6. Accurate quasiparticle calculation of x-ray photoelectron spectra of solids.

    PubMed

    Aoki, Tsubasa; Ohno, Kaoru

    2018-05-31

    It has been highly desired to provide an accurate and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final state screening effect on a core hole in high resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot be simply used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. Calculated results show that not only the valence electrons but also the core electrons have an important contribution to the RE, and the GWA has a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The resulting C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within a 1 eV absolute error range. The present self-screening corrected GW approach has the capability to achieve highly accurate prediction of CEBEs without any empirical parameter for band-gapped crystals, and provides a more reliable theoretical approach than the conventional ΔSCF-DFT method.

  7. Geometric optimisation of an accurate cosine correcting optic fibre coupler for solar spectral measurement.

    PubMed

    Cahuantzi, Roberto; Buckley, Alastair

    2017-09-01

    Making accurate and reliable measurements of solar irradiance is important for understanding performance in the photovoltaic energy sector. In this paper, we present design details and performance of a number of fibre optic couplers for use in irradiance measurement systems employing remote light sensors, applicable to either spectrally resolved or broadband measurement. The angular and spectral characteristics of different coupler designs are characterised and compared with existing state-of-the-art commercial technology. The new coupler designs are fabricated from polytetrafluorethylene (PTFE) rods and operate through forward scattering of incident sunlight on the front surfaces of the structure into an optic fibre located in a cavity to the rear of the structure. The PTFE couplers exhibit up to 4.8% variation in scattered transmission intensity between 425 nm and 700 nm and show minimal specular reflection, making the designs accurate and reliable over the visible region. Through careful geometric optimisation, near-perfect cosine dependence of the coupler's angular response can be achieved. The PTFE designs represent a significant improvement over the state of the art, with less than 0.01% error compared with the ideal cosine response for angles of incidence up to 50°.

  8. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    PubMed

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Reliability of Pain Measurements Using Computerized Cuff Algometry: A DoloCuff Reliability and Agreement Study.

    PubMed

    Kvistgaard Olsen, Jack; Fener, Dilay Kesgin; Waehrens, Eva Elisabet; Wulf Christensen, Anton; Jespersen, Anders; Danneskiold-Samsøe, Bente; Bartels, Else Marie

    2017-07-01

    Computerized pneumatic cuff pressure algometry (CPA) using the DoloCuff is a new method for pain assessment. Intra- and inter-rater reliabilities have not yet been established. Our aim was to examine the inter- and intrarater reliabilities of DoloCuff measures in healthy subjects. Twenty healthy subjects (ages 20 to 29 years) were assessed three times at 24-hour intervals by two trained raters. Inter-rater reliability was established based on the first and second assessments, whereas intrarater reliability was based on the second and third assessments. Subjects were randomized 1:1 to first assessment at either rater 1 or rater 2. The variables of interest were pressure pain threshold (PT), pressure pain tolerance (PTol), and temporal summation index (TSI). Reliability was estimated by a two-way mixed intraclass correlation coefficient (ICC) absolute agreement analysis. Reliability was considered excellent if ICC > 0.75, fair to good if 0.4 < ICC < 0.75, and poor if ICC < 0.4. Bias and random errors between raters and assessments were evaluated using 95% confidence interval (CI) and Bland-Altman plots. Inter-rater reliability for PT, PTol, and TSI was 0.88 (95% CI: 0.69 to 0.95), 0.86 (95% CI: 0.65 to 0.95), and 0.81 (95% CI: 0.42 to 0.94), respectively. The intrarater reliability for PT, PTol, and TSI was 0.81 (95% CI: 0.53 to 0.92), 0.89 (95% CI: 0.74 to 0.96), and 0.75 (95% CI: 0.28 to 0.91), respectively. Inter-rater reliability was excellent for PT, PTol, and TSI. Similarly, the intrarater reliability for PT and PTol was excellent, while borderline excellent/good for TSI. Therefore, the DoloCuff can be used to obtain reliable measures of pressure pain parameters in healthy subjects. © 2016 World Institute of Pain.
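The two-way mixed, absolute-agreement, single-measure ICC used in this study can be computed directly from the ANOVA mean squares of a subjects-by-raters matrix (the formula below is Shrout and Fleiss's ICC(2,1); the data are hypothetical):

```python
def icc_absolute_agreement(ratings):
    """Single-measure, two-way, absolute-agreement intraclass correlation
    (Shrout & Fleiss ICC(2,1)) from an n-subjects x k-raters matrix."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)    # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)    # between raters
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# A constant offset between two raters preserves ranking but lowers
# absolute agreement, which this form of the ICC penalizes:
icc = icc_absolute_agreement([[1, 2], [2, 3], [3, 4], [4, 5]])
```

Because the between-raters mean square appears in the denominator, a systematic bias between raters reduces this ICC even when their scores are perfectly correlated, which is why absolute-agreement forms are preferred for inter-rater studies like this one.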

  10. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  11. Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  12. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean  ±  confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and  -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences  ±  confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enables the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.

  13. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    PubMed

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, aiming to assess the reliability of soft tissue model based implant surgical guides, reported that the accuracy was evaluated using software.1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, likelihood ratio positive (sensitivity/(1 - specificity)) and likelihood ratio negative ((1 - sensitivity)/specificity), as well as the odds ratio (the ratio of true to false results, preferably more than 50), are among the estimates used to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear to which of the above-mentioned validity estimates the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is assessed by different statistical tests; Pearson r, least squares, and the paired t-test are all among the common mistakes in reliability analysis.5 Briefly, for quantitative variables the intraclass correlation coefficient (ICC) and for qualitative variables weighted kappa should be used, with caution because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analysis, appropriate tests should be
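The validity estimates the letter enumerates have standard definitions against a 2x2 gold-standard table. A minimal sketch with illustrative counts (the function name and numbers are invented):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard validity estimates for one test against a gold standard,
    computed from the four cells of a 2x2 contingency table."""
    sens = tp / (tp + fn)                   # true positive rate
    spec = tn / (tn + fp)                   # true negative rate
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "lr_positive": sens / (1 - spec),   # LR+ = sensitivity / (1 - specificity)
        "lr_negative": (1 - sens) / spec,   # LR- = (1 - sensitivity) / specificity
        "odds_ratio": (tp * tn) / (fp * fn),
    }

m = diagnostic_metrics(tp=90, fp=20, fn=10, tn=80)
```

These quantify validity only; agreement between repeated ratings of the same cases is a separate question, answered by the ICC or weighted kappa as the letter notes.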

  14. Isokinetic Strength and Endurance Tests used Pre- and Post-Spaceflight: Test-Retest Reliability

    NASA Technical Reports Server (NTRS)

    Laughlin, Mitzi S.; Lee, Stuart M. C.; Loehr, James A.; Amonette, William E.

    2009-01-01

    To assess changes in muscular strength and endurance after microgravity exposure, NASA measures isokinetic strength and endurance across multiple sessions before and after long-duration space flight. Accurate interpretation of pre- and post-flight measures depends upon the reliability of each measure. The purpose of this study was to evaluate the test-retest reliability of the NASA International Space Station (ISS) isokinetic protocol. Twenty-four healthy subjects (12 M/12 F, 32.0 +/- 5.6 years) volunteered to participate. Isokinetic knee, ankle, and trunk flexion and extension strength as well as endurance of the knee flexors and extensors were measured using a Cybex NORM isokinetic dynamometer. The first weekly session was considered a familiarization session. Data were collected and analyzed for weeks 2-4. Repeated measures analysis of variance (alpha=0.05) was used to identify weekly differences in isokinetic measures. Test-retest reliability was evaluated by intraclass correlation coefficients (ICC) (3,1). No significant differences were found between weeks in any of the strength measures, and the reliability of the strength measures was considered excellent (ICC greater than 0.9), except for concentric ankle dorsi-flexion (ICC=0.67). Although a significant difference was noted in the weekly endurance measures of knee extension (p less than 0.01), the reliability of the weekly endurance measures was considered excellent for knee flexion (ICC=0.97) and knee extension (ICC=0.96). Except for concentric ankle dorsi-flexion, the isokinetic strength and endurance measures are highly reliable when following the NASA ISS protocol. This protocol should allow accurate interpretation of isokinetic data even with a small number of crew members.

  15. Using an electronic prescribing system to ensure accurate medication lists in a large multidisciplinary medical group.

    PubMed

    Stock, Ron; Scott, Jim; Gurtel, Sharon

    2009-05-01

    Although medication safety has largely focused on reducing medication errors in hospitals, the scope of adverse drug events in the outpatient setting is immense. A fundamental problem occurs when a clinician lacks immediate access to an accurate list of the medications that a patient is taking. Since 2001, PeaceHealth Medical Group (PHMG), a multispecialty physician group, has been using an electronic prescribing system that includes medication-interaction warnings and allergy checks. Yet, most practitioners recognized the remaining potential for error, especially because there was no assurance regarding the accuracy of information on the electronic medical record (EMR)-generated medication list. PeaceHealth developed and implemented a standardized approach to (1) review and reconcile the medication list for every patient at each office visit and (2) report on the results obtained within the PHMG clinics. In 2005, PeaceHealth established the ambulatory medication reconciliation project to develop a reliable, efficient process for maintaining accurate patient medication lists. Each of PeaceHealth's five regions created a medication reconciliation task force to redesign its clinical practice, incorporating the systemwide aims and agreed-on key process components for every ambulatory visit. Implementation of the medication reconciliation process at the PHMG clinics resulted in a substantial increase in the number of accurate medication lists, with fewer discrepancies between what the patient is actually taking and what is recorded in the EMR. The PeaceHealth focus on patient safety, and particularly the reduction of medication errors, has involved a standardized approach for reviewing and reconciling medication lists for every patient visiting a physician office. The standardized processes can be replicated at other ambulatory clinics, whether or not electronic tools are available.

  16. Accuracy and reliability of self-reported weight and height in the Sister Study

    PubMed Central

    Lin, Cynthia J; DeRoo, Lisa A; Jacobs, Sara R; Sandler, Dale P

    2012-01-01

    Objective To assess accuracy and reliability of self-reported weight and height and identify factors associated with reporting accuracy. Design Analysis of self-reported and measured weight and height from participants in the Sister Study (2003–2009), a nationwide cohort of 50,884 women aged 35–74 in the United States with a sister with breast cancer. Setting Weight and height were reported via computer-assisted telephone interview (CATI) and self-administered questionnaires, and measured by examiners. Subjects Early enrollees in the Sister Study. There were 18,639 women available for the accuracy analyses and 13,316 for the reliability analyses. Results Using weighted kappa statistics, comparisons were made between CATI responses and examiner measures to assess accuracy and CATI and questionnaire responses to assess reliability. Polytomous logistic regression evaluated factors associated with over- or under-reporting. Compared to measured values, agreement was 96% for reported height (±1 inch; weighted kappa 0.84) and 67% for weight (±3 pounds; weighted kappa 0.92). Obese women [body mass index (BMI) ≥30 kg/m2] were more likely than normal weight women to under-report weight by ≥5% and underweight women (BMI <18.5 kg/m2) were more likely to over-report. Among normal and overweight women (18.5 kg/m2 ≤ BMI < 30 kg/m2), weight cycling and lifetime weight difference ≥50 pounds were associated with over-reporting. Conclusions U.S. women in the Sister Study were reasonably reliable and accurate in reporting weight and height. Women with normal-range BMI reported most accurately. Overweight and obese women and those with weight fluctuations were less accurate, but even among obese women, few under-reported their weight by >10%. PMID:22152926
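
    The weighted kappa statistic reported above can be computed directly from a cross-tabulation of self-reported versus measured categories; a minimal sketch (the confusion matrices below are illustrative, not the Sister Study data):

```python
import numpy as np

def weighted_kappa(confusion, weights="linear"):
    """Weighted kappa from a square confusion matrix of ordinal categories."""
    conf = np.asarray(confusion, dtype=float)
    k = conf.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    observed = conf / conf.sum()                # observed proportions
    expected = np.outer(observed.sum(axis=1),   # chance agreement from
                        observed.sum(axis=0))   # the marginal totals
    return 1.0 - (w * observed).sum() / (w * expected).sum()
```

    Perfect agreement gives 1.0, chance-level agreement gives 0, and systematic disagreement is negative.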

  17. The Reliability and Stability of an Inferred Phylogenetic Tree from Empirical Data.

    PubMed

    Katsura, Yukako; Stanley, Craig E; Kumar, Sudhir; Nei, Masatoshi

    2017-03-01

    The reliability of a phylogenetic tree obtained from empirical data is usually measured by the bootstrap probability (Pb) of the interior branches of the tree. If the bootstrap probability is high for most branches, the tree is considered to be reliable. If some interior branches show relatively low bootstrap probabilities, we cannot be sure that the inferred tree is really reliable. Here, we propose another quantity measuring the reliability of the tree, called the stability of a subtree. This quantity, Ps, is the probability of obtaining a given subtree of the inferred tree. We then show that for the tree to be reliable, both Pb and Ps must be high. We also show that Ps is given by the bootstrap probability of the subtree with the closest outgroup sequence, and the computer program RESTA for computing the Pb and Ps values is presented. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
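
    The bootstrap probability of a branch is the fraction of replicate trees that contain the corresponding clade. A sketch under the simplifying assumption that each replicate tree is represented as a set of clades (frozensets of taxon names); this is not the RESTA implementation:

```python
def clade_support(replicate_trees, clade):
    """Fraction of bootstrap replicate trees that contain the given clade."""
    clade = frozenset(clade)
    hits = sum(clade in tree for tree in replicate_trees)
    return hits / len(replicate_trees)
```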

  18. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is incorporated into it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a central graphical user interface.

  19. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops.

    PubMed

    Bengochea-Guevara, José M; Andújar, Dionisio; Sanchez-Sardana, Francisco L; Cantuña, Karla; Ribeiro, Angela

    2017-12-24

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, "on ground crop inspection" potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. "On ground monitoring" is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows.
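
    One common way to remove accumulated drift when the geographical positions of the row's end points are known is to distribute the end-point error linearly along the ordered trajectory; the following sketch illustrates that idea and is not necessarily the paper's exact method:

```python
import numpy as np

def remove_linear_drift(points, estimated_end, gps_end):
    """Spread the end-point reconstruction error linearly over the row."""
    points = np.asarray(points, dtype=float)   # Nx3, ordered by acquisition
    drift = np.asarray(gps_end, float) - np.asarray(estimated_end, float)
    fraction = np.linspace(0.0, 1.0, len(points))[:, None]
    return points + fraction * drift
```

    The first point is left untouched and the last point lands exactly on its GPS position, with intermediate points corrected proportionally.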

  20. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops

    PubMed Central

    Andújar, Dionisio; Sanchez-Sardana, Francisco L.; Cantuña, Karla

    2017-01-01

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, “on ground crop inspection” potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. “On ground monitoring” is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows. PMID:29295536

  1. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  2. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
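
    The key point is that the harmonic restraint acts on the ensemble-average coordinates, so the force felt by any single copy scales as 1/M and perturbs its individual dynamics only weakly. A schematic version (the force constant and array shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def ensemble_restraint(coords, target, k):
    """Harmonic restraint on the ensemble-average structure.

    coords: (M copies, N atoms, 3); target: (N atoms, 3).
    """
    mean = coords.mean(axis=0)               # ensemble-average coordinates
    diff = mean - target
    energy = k * (diff ** 2).sum()
    # d(mean)/d(copy) = 1/M, so each copy feels only 1/M of the gradient
    per_copy_force = -2.0 * k * diff / coords.shape[0]
    return energy, np.broadcast_to(per_copy_force, coords.shape)
```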

  3. EpHLA software: a timesaving and accurate tool for improving identification of acceptable mismatches for clinical purposes.

    PubMed

    Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad

    2012-06-01

    The HLAMatchmaker algorithm, which allows the identification of “safe” acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used, in part due to the difficulty of using it in its current Excel format. The automation of this algorithm may universalize its use to benefit the allocation of allografts. Recently, we have developed a new software called EpHLA, which is the first computer program automating the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing its time efficiency and quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients awaiting kidney transplants, were analyzed either by the conventional HLAMatchmaker method or by the automated EpHLA method. Users testing these two methods were asked to record: (i) time required for completion of the analysis (in minutes); (ii) number of eplets obtained for class I and class II HLA molecules; (iii) categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) determination of AMMs based on eplet reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster than the conventional HLAMatchmaker method. In particular, the EpHLA software was faster and more reliable than, but just as accurate as, the conventional method in defining AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus it may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients as well as in many other applications.
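
    Steps (iii) and (iv) of the recorded analysis, categorizing eplets by an MFI cutoff and deriving acceptable mismatches, can be sketched as follows; the data structures, eplet names, and threshold are hypothetical, not the EpHLA internals:

```python
def categorize_eplets(mfi_by_eplet, cutoff):
    """Label each eplet reactive (True) or non-reactive by the MFI cutoff."""
    return {ep: mfi >= cutoff for ep, mfi in mfi_by_eplet.items()}

def acceptable_mismatches(eplets_by_antigen, reactive):
    """An antigen is an acceptable mismatch if none of its eplets react."""
    return [antigen for antigen, eplets in eplets_by_antigen.items()
            if not any(reactive.get(ep, False) for ep in eplets)]
```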

  4. Use of Internal Consistency Coefficients for Estimating Reliability of Experimental Tasks Scores

    PubMed Central

    Green, Samuel B.; Yang, Yanyun; Alt, Mary; Brinkley, Shara; Gray, Shelley; Hogan, Tiffany; Cowan, Nelson

    2017-01-01

    Reliabilities of scores for experimental tasks are likely to differ from one study to another to the extent that the task stimuli change, the number of trials varies, the type of individuals taking the task changes, the administration conditions are altered, or the focal task variable differs. Given reliabilities vary as a function of the design of these tasks and the characteristics of the individuals taking them, making inferences about the reliability of scores in an ongoing study based on reliability estimates from prior studies is precarious. Thus, it would be advantageous to estimate reliability based on data from the ongoing study. We argue that internal consistency estimates of reliability are underutilized for experimental task data and in many applications could provide this information using a single administration of a task. We discuss different methods for computing internal consistency estimates with a generalized coefficient alpha and the conditions under which these estimates are accurate. We illustrate use of these coefficients using data for three different tasks. PMID:26546100
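
    A single-administration internal consistency estimate treats each trial (or item parcel) as a column and each participant as a row; a minimal sketch of the standard coefficient alpha (the generalized coefficients discussed in the paper relax its assumptions):

```python
import numpy as np

def coefficient_alpha(scores):
    """Coefficient alpha; rows are participants, columns are trials/items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```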

  5. Can reliable sage-grouse lek counts be obtained using aerial infrared technology

    USGS Publications Warehouse

    Gillette, Gifford L.; Coates, Peter S.; Petersen, Steven; Romero, John P.

    2013-01-01

    More effective methods for counting greater sage-grouse (Centrocercus urophasianus) are needed to better assess population trends through enumeration or location of new leks. We describe an aerial infrared technique for conducting sage-grouse lek counts and compare this method with conventional ground-based lek count methods. During the breeding period in 2010 and 2011, we surveyed leks from fixed-winged aircraft using cryogenically cooled mid-wave infrared cameras and surveyed the same leks on the same day from the ground following a standard lek count protocol. We did not detect significant differences in lek counts between surveying techniques. These findings suggest that using a cryogenically cooled mid-wave infrared camera from an aerial platform to conduct lek surveys is an effective alternative technique to conventional ground-based methods, but further research is needed. We discuss multiple advantages to aerial infrared surveys, including counting in remote areas, representing greater spatial variation, and increasing the number of counted leks per season. Aerial infrared lek counts may be a valuable wildlife management tool that releases time and resources for other conservation efforts. Opportunities exist for wildlife professionals to refine and apply aerial infrared techniques to wildlife monitoring programs because of the increasing reliability and affordability of this technology.

  6. Composite Stress Rupture: A New Reliability Model Based on Strength Decay

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2012-01-01

    A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
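
    A Monte Carlo sketch of the strength-decay idea: initial strengths are random, strength decays mostly late in life (modeled here with a large decay exponent), and a vessel survives while its decayed strength exceeds the applied stress. All parameter values are illustrative, not the paper's calibration:

```python
import random

def stress_rupture_reliability(stress, life_fraction, n=100_000,
                               mean_s0=1.0, cv=0.05, decay_exp=8.0, seed=1):
    """Fraction of simulated vessels whose decayed strength exceeds stress."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n):
        s0 = rng.gauss(mean_s0, cv * mean_s0)
        # large exponent => most of the decay occurs late in life
        strength = s0 * (1.0 - life_fraction ** decay_exp)
        if strength > stress:
            survivors += 1
    return survivors / n
```

    Raising either the applied stress or the elapsed life fraction lowers the estimated reliability, matching the qualitative behavior the model describes.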

  7. Benchmarking singlet and triplet excitation energies of molecular semiconductors for singlet fission: Tuning the amount of HF exchange and adjusting local correlation to obtain accurate functionals for singlet-triplet gaps

    NASA Astrophysics Data System (ADS)

    Brückner, Charlotte; Engels, Bernd

    2017-01-01

    Vertical and adiabatic singlet and triplet excitation energies of molecular p-type semiconductors calculated with various DFT functionals and wave-function based approaches are benchmarked against MS-CASPT2/cc-pVTZ reference values. A special focus lies on the singlet-triplet gaps that are very important in the process of singlet fission. Singlet fission has the potential to boost device efficiencies of organic solar cells, but the scope of existing singlet-fission compounds is still limited. A computational prescreening of candidate molecules could enlarge it; yet it requires efficient methods that accurately predict singlet and triplet excitation energies. Different DFT formulations (Tamm-Dancoff approximation, linear response time-dependent DFT, Δ-SCF) and spin scaling schemes along with several ab initio methods (CC2, ADC(2)/MP2, CIS(D), CIS) are evaluated. While wave-function based methods yield rather reliable singlet-triplet gaps, many DFT functionals are shown to systematically underestimate triplet excitation energies. To gain insight, the impact of exact exchange and correlation is addressed in detail.

  8. Effect of individual shades on reliability and validity of observers in colour matching.

    PubMed

    Lagouvardos, P E; Diamanti, H; Polyzois, G

    2004-06-01

    The effect of individual shades in shade guides on the reliability and validity of measurements in a colour-matching process is very important. Observers' agreement on shades and the sensitivity/specificity of shades can give us an estimate of each shade's effect on observer reliability and validity. In the present study, a group of 16 students matched 15 shades of a Kulzer shade guide and 10 human incisors to Kulzer and/or Vita shade tabs in 4 different tests. The results showed that shades I, B10, C40, A35 and A10 were those with the highest reliability and validity values. In conclusion: (a) the matching process with shades of different materials was not accurate enough; (b) some shades produce a more reliable and valid match than others; and (c) teeth are matched with relative difficulty.
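
    The per-shade sensitivity and specificity mentioned above reduce to simple ratios of a shade's correct selections and correct rejections; a minimal sketch with hypothetical counts:

```python
def shade_sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: shade chosen when it is the true match.
    Specificity: shade rejected when it is not the true match."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```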

  9. Reliability and validity: Part II.

    PubMed

    Davis, Debora Winders

    2004-01-01

    Determining measurement reliability and validity involves complex processes. There is usually room for argument about most instruments. It is important that the researcher clearly describes the processes upon which she made the decision to use a particular instrument, and presents the evidence available showing that the instrument is reliable and valid for the current purposes. In some cases, the researcher may need to conduct pilot studies to obtain evidence upon which to decide whether the instrument is valid for a new population or a different setting. In all cases, the researcher must present a clear and complete explanation for the choices she has made regarding reliability and validity. The consumer must then judge the degree to which the researcher has provided adequate and theoretically sound rationale. Although I have tried to touch on most of the important concepts related to measurement reliability and validity, it is beyond the scope of this column to be exhaustive. There are textbooks devoted entirely to specific measurement issues if readers require more in-depth knowledge.

  10. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
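
    A mixed Weibull reliability function is a weighted sum of component Weibull survival functions, one per failure mode; a minimal sketch (the parameter values in the usage below are illustrative):

```python
import math

def mixed_weibull_reliability(t, components):
    """components: iterable of (weight, shape beta, scale eta); weights sum to 1."""
    return sum(w * math.exp(-((t / eta) ** beta))
               for w, beta, eta in components)
```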

  11. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and the contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping processes. We have investigated the influence of the parameter variation coefficient on the probability distribution of the contact stress using an equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps each component at its highest reliability level is obtained from the stress-strength interference model. To obtain the optimal contact stress between the contacting components, the optimal thickness of the component and the stack clamping force are designed accordingly. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structures for high fuel cell stack reliability.
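
    Under the stress-strength interference model with independent, normally distributed stress and strength (the usual first-order second-moment setting), reliability is the standard normal CDF of the standardized strength margin; a compact sketch:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent normal strength and stress."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```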

  12. Can Ultrasound Accurately Assess Ischiofemoral Space Dimensions? A Validation Study.

    PubMed

    Finnoff, Jonathan T; Johnson, Adam C; Hollman, John H

    2017-04-01

    Ischiofemoral impingement is a potential cause of hip and buttock pain. It is evaluated commonly with magnetic resonance imaging (MRI). To our knowledge, no study previously has evaluated the ability of ultrasound to measure the ischiofemoral space (IFS) dimensions reliably. To determine whether ultrasound could accurately measure the IFS dimensions when compared with the gold standard imaging modality of MRI. A methods comparison study. Sports medicine center within a tertiary-care institution. A total of 5 male and 5 female asymptomatic adult subjects (age mean = 29.2 years, range = 23-35 years; body mass index mean = 23.5, range = 19.5-26.6) were recruited to participate in the study. Subjects were secured in a prone position on a MRI table with their hips in a neutral position. Their IFS dimensions were then acquired in a randomized order using diagnostic ultrasound and MRI. The main outcome measurements were the IFS dimensions acquired with ultrasound and MRI. The mean IFS dimension measured with ultrasound was 29.5 mm (standard deviation [SD] 4.99 mm, standard error mean 1.12 mm), whereas that obtained with MRI was 28.25 mm (SD 5.91 mm, standard error mean 1.32 mm). The mean difference between the ultrasound and MRI measurements was 1.25 mm, which was not statistically significant (SD 3.71 mm, standard error mean 3.71 mm, 95% confidence interval -0.49 mm to 2.98 mm, t(19) = 1.506, P = .15). The Bland-Altman analysis indicated that the 95% limits of agreement between the 2 measurements were -6.0 to 8.5 mm, indicating that there was no systematic bias between the ultrasound and MRI measurements. Our findings suggest that the IFS measurements obtained with ultrasound are very similar to those obtained with MRI. Therefore, when evaluating individuals with suspected ischiofemoral impingement, one could consider using ultrasound to measure their IFS dimensions. III. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier
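
    The Bland-Altman limits of agreement quoted above are the mean of the paired differences ± 1.96 standard deviations of those differences; a minimal sketch (the sample arrays below are illustrative):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```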

  13. A study on the real-time reliability of on-board equipment of train control system

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shiwei

    2018-05-01

    Real-time reliability evaluation is conducive to establishing a condition-based maintenance system for the purpose of guaranteeing continuous train operation. According to the inherent characteristics of the on-board equipment, this paper defines the scope of reliability evaluation for on-board equipment and provides an evaluation index for real-time reliability. From the perspectives of methodology and practical application, the real-time reliability of the on-board equipment is discussed in detail, and a method for evaluating the real-time reliability of on-board equipment at the component level based on a Hidden Markov Model (HMM) is proposed. In this method, performance degradation data are used directly to achieve accurate perception of the hidden state-transition process of the on-board equipment, which yields a better description of the equipment's real-time reliability.
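
    With an HMM over hidden degradation states, the filtered state distribution after each observation comes from the standard forward recursion, and real-time reliability can be read off as the probability mass on the healthy states. A generic sketch of the recursion, not the paper's specific model:

```python
import numpy as np

def forward_filter(A, B, pi, observations):
    """Normalized HMM forward recursion.

    A: state transition matrix, B[s, o]: emission probabilities,
    pi: initial state distribution. Returns filtered P(state | obs so far).
    """
    alpha = pi * B[:, observations[0]]
    alpha = alpha / alpha.sum()
    filtered = [alpha]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha = alpha / alpha.sum()
        filtered.append(alpha)
    return np.array(filtered)
```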

  14. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations

    PubMed Central

    Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.

    2013-01-01

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537

  15. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations.

    PubMed

    Henriksen, Niel M; Roe, Daniel R; Cheatham, Thomas E

    2013-04-18

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example, by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 μs of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations.

  16. A Reliable Method to Measure Lip Height Using Photogrammetry in Unilateral Cleft Lip Patients.

    PubMed

    van der Zeeuw, Frederique; Murabit, Amera; Volcano, Johnny; Torensma, Bart; Patel, Brijesh; Hay, Norman; Thorburn, Guy; Morris, Paul; Sommerlad, Brian; Gnarra, Maria; van der Horst, Chantal; Kangesu, Loshan

    2015-09-01

    There is still no reliable tool to determine the outcome of the repaired unilateral cleft lip (UCL). The aim of this study was therefore to develop an accurate, reliable tool to measure vertical lip height from photographs. The authors measured the vertical height of the cutaneous and vermilion parts of the lip in 72 anterior-posterior view photographs of 17 patients with repairs to a UCL. Points on the lip's white roll and vermillion were marked on both the cleft and the noncleft sides on each image. Two new concepts were tested. First, photographs were standardized using the horizontal (medial to lateral) eye fissure width (EFW) for calibration. Second, the authors tested the interpupillary line (IPL) and the alar base line (ABL) for their reliability as horizontal lines of reference. Measurements were taken by 2 independent researchers, at 2 different time points each. Overall 2304 data points were obtained and analyzed. Results showed that the method was very effective in comparing the height of the lip on the cleft side with that on the noncleft side. When using the IPL, inter- and intra-rater reliability was 0.99 to 1.0; with the ABL it varied from 0.91 to 0.99, with one exception at 0.84. The IPL was easier to define because in some subjects the overhanging nasal tip obscured the alar base, and it gave more consistent measurements, possibly because the reconstructed alar base was sometimes indistinct. However, measurements from the IPL can only give the percentage difference between the left and right sides of the lip, whereas those from the ABL can also give exact measurements. Patient examples were given that show how the measurements correlate with clinical assessment. The authors propose this method of photogrammetry with the innovative use of the IPL as a reliable horizontal plane and use of the EFW for calibration as a useful and reliable tool to assess the outcome of UCL repair.
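
    The EFW calibration converts pixel distances to millimetres via a population-average eye fissure width, and the IPL-referenced measure is a cleft/noncleft percentage difference. A sketch in which the 30 mm average EFW is an assumed illustrative value, not a figure from the paper:

```python
def mm_per_pixel(efw_pixels, efw_mm=30.0):
    """Scale factor from an assumed average eye fissure width (hypothetical)."""
    return efw_mm / efw_pixels

def vertical_height_difference_pct(cleft_px, noncleft_px):
    """Percentage difference between cleft- and noncleft-side lip heights
    (the kind of relative measure the IPL reference supports)."""
    return 100.0 * (cleft_px - noncleft_px) / noncleft_px
```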

  17. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element based engineering practice.
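The abstract benchmarks RSBDO against Monte Carlo simulation. A minimal sketch of such a Monte Carlo failure-probability check for a generic limit state g(X) <= 0, with an assumed normal strength/load pair rather than the paper's CMC model:

```python
import random

def mc_failure_probability(g, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] for a limit state g."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(sample(rng)) <= 0.0)
    return failures / n

# Hypothetical limit state: strength R ~ N(10, 1) minus load S ~ N(6, 1).
def sample(rng):
    return rng.gauss(10.0, 1.0), rng.gauss(6.0, 1.0)

def g(x):
    r, s = x
    return r - s

pf = mc_failure_probability(g, sample, n=50_000)
# Analytically, P[R - S <= 0] = Phi(-4 / sqrt(2)) ~ 2.3e-3.
```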

  18. Newly developed double neural network concept for reliable fast plasma position control

    NASA Astrophysics Data System (ADS)

    Jeon, Young-Mu; Na, Yong-Su; Kim, Myung-Rak; Hwang, Y. S.

    2001-01-01

Neural networks are considered as a parameter estimation tool in plasma control for next-generation tokamaks such as ITER. Neural networks have been reported to be so accurate and fast for plasma equilibrium identification that they may be applied to the control of complex tokamak plasmas. For this application, the reliability of the conventional neural network needs to be improved. In this study, a new double neural network concept is developed to achieve this. The new concept has been applied to simple plasma position identification in the KSTAR tokamak as a feasibility test. The concept shows higher reliability and fault tolerance even under severe faulty conditions, which may make neural networks applicable to plasma control reliably and widely in future tokamaks.

  19. Flow dichroism as a reliable method to measure the hydrodynamic aspect ratio of gold nanoparticles.

    PubMed

    Reddy, Naveen Krishna; Pérez-Juste, Jorge; Pastoriza-Santos, Isabel; Lang, Peter R; Dhont, Jan K G; Liz-Marzán, Luis M; Vermant, Jan

    2011-06-28

Particle shape plays an important role in controlling the optical, magnetic, and mechanical properties of nanoparticle suspensions as well as nanocomposites. However, characterizing the size, shape, and the associated polydispersity of nanoparticles is not straightforward. Electron microscopy provides an accurate measurement of the geometric properties, but sample preparation can be laborious, and to obtain statistically relevant data many particles need to be analyzed separately. Moreover, when the particles are suspended in a fluid, it is important to measure their hydrodynamic properties, as they determine aspects such as diffusion and the rheological behavior of suspensions. Methods that evaluate the dynamics of nanoparticles, such as light scattering and rheo-optical methods, accurately provide these hydrodynamic properties, but do necessitate a sufficient optical response. In the present work, three different methods for characterizing nonspherical gold nanoparticles are critically compared, especially taking into account the complex optical response of these particles. The different methods are evaluated in terms of their versatility to assess size, shape, and polydispersity. Among these, the rheo-optical technique is shown to be the most reliable method to obtain the hydrodynamic aspect ratio and polydispersity of nonspherical gold nanoparticles, for two reasons. First, the use of the evolution of the orientation angle makes effects of polydispersity less important. Second, the use of an external flow field gives a mathematically more robust relation between particle motion and aspect ratio, especially for particles with relatively small aspect ratios.

  20. Validity and reliability of the Diagnostic Adaptive Behaviour Scale.

    PubMed

    Tassé, M J; Schalock, R L; Balboni, G; Spreat, S; Navas, P

    2016-01-01

The Diagnostic Adaptive Behaviour Scale (DABS) is a new standardised adaptive behaviour measure that provides information for evaluating limitations in adaptive behaviour for the purpose of determining a diagnosis of intellectual disability. This article presents validity evidence and reliability data for the DABS. Validity evidence was based on comparing DABS scores with scores obtained on the Vineland Adaptive Behaviour Scale, second edition. The stability of the test scores was measured using a test-retest design, and inter-rater reliability was assessed by computing the inter-respondent concordance. The DABS convergent validity coefficients ranged from 0.70 to 0.84, the test-retest reliability coefficients ranged from 0.78 to 0.95, and the inter-rater concordance, as measured by intraclass correlation coefficients, ranged from 0.61 to 0.87. All obtained validity and reliability indicators were strong and comparable with the validity and reliability coefficients of the most commonly used adaptive behaviour instruments. These results and the advantages of the DABS for clinician and researcher use are discussed. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
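The intraclass correlation coefficients reported above can be computed from a one-way random-effects ANOVA via ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW). A minimal sketch with made-up ratings (k = 2 respondents per subject):

```python
def icc_oneway(groups):
    """ICC(1): one-way random-effects intraclass correlation.
    groups: one list of k ratings per subject."""
    k = len(groups[0])
    n = len(groups)
    grand = sum(x for g in groups for x in g) / (n * k)
    means = [sum(g) / k for g in groups]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical pairs of ratings for 4 subjects:
icc = icc_oneway([[9, 8], [5, 6], [8, 8], [2, 3]])  # ~ 0.95
```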

  1. Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.

    2005-01-01

An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
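As a small illustration of the Weibull strength model mentioned above: for a uniformly stressed volume, the standard two-parameter Weibull fast-fracture survival probability is Ps = exp(-V*(sigma/sigma0)^m). The sketch below uses illustrative parameters and is not CARES/Life itself, which integrates this form over finite element stress fields:

```python
import math

def weibull_survival(stress, sigma0, m, volume=1.0):
    """Fast-fracture survival probability of a uniformly stressed volume
    under the two-parameter Weibull model. sigma0 is the scale parameter
    (per unit volume) and m the Weibull modulus."""
    return math.exp(-volume * (stress / sigma0) ** m)

# Hypothetical ceramic: sigma0 = 400 MPa, m = 10, applied stress 200 MPa.
p = weibull_survival(200.0, 400.0, 10)  # ~ 0.9990
```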

  2. How reliable are Functional Movement Screening scores? A systematic review of rater reliability.

    PubMed

    Moran, Robert W; Schneiders, Anthony G; Major, Katherine M; Sullivan, S John

    2016-05-01

    Several physical assessment protocols to identify intrinsic risk factors for injury aetiology related to movement quality have been described. The Functional Movement Screen (FMS) is a standardised, field-expedient test battery intended to assess movement quality and has been used clinically in preparticipation screening and in sports injury research. To critically appraise and summarise research investigating the reliability of scores obtained using the FMS battery. Systematic literature review. Systematic search of Google Scholar, Scopus (including ScienceDirect and PubMed), EBSCO (including Academic Search Complete, AMED, CINAHL, Health Source: Nursing/Academic Edition), MEDLINE and SPORTDiscus. Studies meeting eligibility criteria were assessed by 2 reviewers for risk of bias using the Quality Appraisal of Reliability Studies checklist. Overall quality of evidence was determined using van Tulder's levels of evidence approach. 12 studies were appraised. Overall, there was a 'moderate' level of evidence in favour of 'acceptable' (intraclass correlation coefficient ≥0.6) inter-rater and intra-rater reliability for composite scores derived from live scoring. For inter-rater reliability of composite scores derived from video recordings there was 'conflicting' evidence, and 'limited' evidence for intra-rater reliability. For inter-rater reliability based on live scoring of individual subtests there was 'moderate' evidence of 'acceptable' reliability (κ≥0.4) for 4 subtests (Deep Squat, Shoulder Mobility, Active Straight-leg Raise, Trunk Stability Push-up) and 'conflicting' evidence for the remaining 3 (Hurdle Step, In-line Lunge, Rotary Stability). This review found 'moderate' evidence that raters can achieve acceptable levels of inter-rater and intra-rater reliability of composite FMS scores when using live ratings. Overall, there were few high-quality studies, and the quality of several studies was impacted by poor study reporting particularly in relation to

  3. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  4. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

Two numerical procedures, one based on an artificial compressibility method and the other on a pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  5. Reliability of abstracting performance measures: results of the cardiac rehabilitation referral and reliability (CR3) project.

    PubMed

    Thomas, Randal J; Chiu, Jensen S; Goff, David C; King, Marjorie; Lahr, Brian; Lichtman, Steven W; Lui, Karen; Pack, Quinn R; Shahriary, Melanie

    2014-01-01

    Assessment of the reliability of performance measure (PM) abstraction is an important step in PM validation. Reliability has not been previously assessed for abstracting PMs for the referral of patients to cardiac rehabilitation (CR) and secondary prevention (SP) programs. To help validate these PMs, we carried out a multicenter assessment of their reliability. Hospitals and clinical practices from around the United States were invited to participate in the Cardiac Rehabilitation Referral Reliability (CR3) Project. Twenty-nine hospitals and 23 outpatient centers expressed interest in participating. Seven hospitals and 6 outpatient centers met participation criteria and submitted completed data. Site coordinators identified 35 patients whose charts were reviewed by 2 site abstractors twice, 1 week apart. Percent agreement and the Cohen κ statistic were used to describe intra- and interabstractor reliability for patient eligibility for CR/SP, patient exceptions for CR/SP referral, and documented referral to CR/SP. Results were obtained from within-site data, as well as from pooled data of all inpatient and all outpatient sites. We found that intra-abstractor reliability reflected excellent repeatability (≥ 90% agreement; κ ≥ 0.75) for ratings of CR/SP eligibility, exceptions, and referral, both from pooled and site-specific analyses of inpatient and outpatient data. Similarly, the interabstractor agreement from pooled analysis ranged from good to excellent for the 3 items, although with slightly lower measures of reliability. Abstraction of PMs for CR/SP referral has high reliability, supporting the use of these PMs in quality improvement initiatives aimed at increasing CR/SP delivery to patients with cardiovascular disease.
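Intra- and interabstractor agreement above is summarized with percent agreement and the Cohen κ statistic, which corrects raw agreement for chance. A minimal sketch with hypothetical yes/no referral abstractions from two reviewers:

```python
from collections import Counter

def percent_agreement(a, b):
    """Raw proportion of items on which two abstractors agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance, (po - pe) / (1 - pe)."""
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    # Expected chance agreement from each rater's marginal frequencies.
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical referral-documented ratings for 10 charts:
a = ["Y", "Y", "N", "Y", "N", "Y", "Y", "N", "Y", "Y"]
b = ["Y", "Y", "N", "Y", "Y", "Y", "Y", "N", "Y", "N"]
po = percent_agreement(a, b)  # 0.8
kappa = cohens_kappa(a, b)    # ~ 0.52
```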

  6. Constructing the "Best" Reliability Data for the Job

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.; Kleinhammer, R. K.

    2014-01-01

Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.
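One simple way to sketch the composite-analog idea is a similarity-weighted average of generic failure rates. The weighting scheme below is an illustrative assumption, not a method prescribed by the authors:

```python
def composite_failure_rate(analogs):
    """Similarity-weighted composite of analog failure rates (per hour).
    analogs: list of (failure_rate, similarity_weight) pairs, where the
    weight (0..1) reflects how closely the analog equipment and its use
    environment match the target component (an assumed scoring scheme)."""
    total_w = sum(w for _, w in analogs)
    return sum(rate * w for rate, w in analogs) / total_w

# Three hypothetical analog sources with judged similarity weights:
rate = composite_failure_rate([(1e-6, 0.9), (5e-6, 0.4), (2e-6, 0.7)])
```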

  7. Constructing the Best Reliability Data for the Job

    NASA Technical Reports Server (NTRS)

    Kleinhammer, R. K.; Kahn, J. C.

    2014-01-01

Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.

  8. Intra-instrument reliability of 4 goniometers.

    PubMed

    Pringle, R Kevin

    2003-01-01

Cervical spine ROM measurements taken accurately with reliable measuring devices are important as outcome measures as well as in measuring disability. The aim was to compare active cervical spine ROM in a healthy young adult population using 4 different goniometers. Subjects were tested during active cervical spine ROM. The devices were a single hinge inclinometer, a single bubble carpenter's inclinometer, a dual bubble goniometer and the Cybex EDI 320 electrical inclinometer. All subjects were tested for rotational limits along each of the orthogonal axes of movement. There were 3 trials for each movement direction, except that rotation was not measured with the Cybex, as per the manual's suggestions. The subjects were randomly assigned to the sequence of devices. Twenty-seven student volunteers (19 men and 8 women) were tested. Ages ranged from 21 to 41 years, with a mean of 27.6 years. The active cervical spine ROM trials for each measurement were used to calculate means and standard deviations. An overall analysis of variance (ANOVA) and Bonferroni-adjusted t tests were used to assess reliability and significance. The cost of the instruments was not used in determining reliability or significance. The single hinge inclinometer was found to be a reliable measure but not likely a valid one. The Cybex EDI 320 was found to be the best measuring device; however, the 2 instruments whose costs were between those of the single hinge inclinometer and the electrical goniometer were just as reliable as the more expensive device. The AMA Guides of Impairment were used as the normative data against which to compare these devices. Since the devices could measure reliably, whether expensive or more cost-effective for students, they would likely make adequate devices for training students on methods for measuring ROM. Previous data suggest that older populations show gender and age differences in ROM. This study could not measure that, and it would make a useful follow-up study.

  9. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
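The reliability-versus-ESM trade above rests on the standard parallel-redundancy relation: adding independent identical units raises system reliability as R = 1 - (1 - r)^n, while every added unit adds mass. A minimal sketch (unit reliability is a hypothetical value):

```python
def redundancy_reliability(r_unit, n_units):
    """System reliability for n independent identical units in parallel,
    where at least one must work: R = 1 - (1 - r)^n."""
    return 1.0 - (1.0 - r_unit) ** n_units

# Hypothetical one-year unit reliability of 0.9:
r1 = redundancy_reliability(0.9, 1)  # 0.9
r3 = redundancy_reliability(0.9, 3)  # 0.999, at roughly triple the mass
```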

  10. Reliability and Agreement in Student Ratings of the Class Environment

    ERIC Educational Resources Information Center

    Nelson, Peter M.; Christ, Theodore J.

    2016-01-01

    The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were…

  11. Accurate determinations of alpha(s) from realistic lattice QCD.

    PubMed

    Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J

    2005-07-29

We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an O(a^2)-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain alpha_MS-bar^(5)(M_Z) = 0.1170(12).

  12. Accurate registration of temporal CT images for pulmonary nodules detection

    NASA Astrophysics Data System (ADS)

    Yan, Jichao; Jiang, Luan; Li, Qiang

    2017-02-01

Interpretation of temporal CT images could help the radiologists to detect some subtle interval changes in the sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, affine transformation was applied in the segmented lung region to obtain globally coarse registration images. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration images. Thirdly, the Demons algorithm was performed to align the feature points extracted from the registered images in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% of cases obtained accurate registration based on subjective observation. The subtraction images of the reference images and the rigid and non-rigid registered images could effectively remove the normal structures (e.g. blood vessels) and retain the abnormalities (e.g. pulmonary nodules). This would be useful for the screening of lung cancer in our future study.
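As a small sketch of the first (global affine) registration stage described above, with hypothetical coefficients; the FFD and Demons stages would then further deform the mapped coordinates:

```python
def apply_affine(point, A, t):
    """Map a 3-D voxel coordinate through the global affine stage
    x' = A x + t (the coarse alignment step of the pipeline)."""
    return [sum(A[i][j] * point[j] for j in range(3)) + t[i]
            for i in range(3)]

# Hypothetical 10% isotropic scaling plus a translation:
A = [[1.1, 0.0, 0.0],
     [0.0, 1.1, 0.0],
     [0.0, 0.0, 1.1]]
t = [5.0, -2.0, 0.0]
out = apply_affine([10.0, 20.0, 30.0], A, t)  # ~ [16.0, 20.0, 33.0]
```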

  13. Estimation of reliability and dynamic property for polymeric material at high strain rate using SHPB technique and probability theory

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyeok; Lee, Ouk Sub; Kim, Hong Min; Choi, Hye Bin

    2008-11-01

A modified Split Hopkinson Pressure Bar (SHPB) technique with aluminum pressure bars and a pulse shaper technique was used to achieve a closer impedance match between the pressure bars and the specimen materials, such as hot-temperature-degraded POM (Poly Oxy Methylene) and PP (Poly Propylene). More distinguishable experimental signals were obtained to evaluate more accurately the dynamic deformation behavior of materials under a high strain rate loading condition. A pulse shaping technique is introduced to reduce non-equilibrium in the dynamic material response by modulating the incident wave during the short period of the test; this increases the rise time of the incident pulse in the SHPB experiment. For the dynamic stress-strain curve obtained from the SHPB experiment, the Johnson-Cook model is applied as a constitutive equation. The applicability of this constitutive equation is verified by using a probabilistic reliability estimation method. Two reliability methodologies, the FORM and the SORM, have been proposed. The limit state function (LSF) includes the Johnson-Cook model and the applied stresses. The LSF in this study allows more statistical flexibility on the yield stress than a previously published formulation. It is found that the failure probability estimated by using the SORM is more reliable than that of the FORM. It is also noted that the failure probability increases with increasing applied stress. Moreover, the parameters of the Johnson-Cook model, such as A and n, and the applied stress are found to affect the failure probability more severely than the other random variables according to the sensitivity analysis.
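The Johnson-Cook constitutive model used in the limit state function above has the standard form sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m), with homologous temperature T* = (T - T_room)/(T_melt - T_room). A sketch with illustrative parameters, not the paper's fitted values for POM or PP:

```python
import math

def johnson_cook(eps_p, eps_rate, T, A, B, n, C, m,
                 eps_rate0=1.0, T_room=293.0, T_melt=1700.0):
    """Johnson-Cook flow stress. eps_p: equivalent plastic strain,
    eps_rate: strain rate (1/s), T: temperature (K). All material
    parameters here are generic placeholders."""
    t_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps_p ** n) \
        * (1.0 + C * math.log(eps_rate / eps_rate0)) \
        * (1.0 - t_star ** m)

# Hypothetical material at room temperature, 10% strain, 1000/s:
sigma = johnson_cook(0.1, 1.0e3, 293.0,
                     A=100.0, B=250.0, n=0.3, C=0.02, m=1.0)  # ~ 256 MPa
```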

  14. Reliability approach to rotating-component design. [fatigue life and stress concentration

    NASA Technical Reports Server (NTRS)

    Kececioglu, D. B.; Lalli, V. R.

    1975-01-01

A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample mechanical rotating component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for a specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.
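The stress-strength coupling at the heart of design-by-reliability has a closed form when strength and stress are modeled as independent normal variables: R = Phi((mu_S - mu_s)/sqrt(sd_S^2 + sd_s^2)). A sketch with hypothetical values:

```python
import math

def stress_strength_reliability(mu_strength, sd_strength,
                                mu_stress, sd_stress):
    """Reliability = P(strength > stress) for independent normal strength
    and stress distributions. Phi is evaluated via the error function."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical shaft: strength N(60, 5), applied stress N(40, 5) (ksi):
r = stress_strength_reliability(60.0, 5.0, 40.0, 5.0)  # ~ 0.9977
```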

  15. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.

  16. Reliability-based optimization of an active vibration controller using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Saraygord Afshari, Sajad; Pourtakdoust, Seid H.

    2017-04-01

Many modern industrialized systems, such as aircraft, rotating turbines, satellite booms, etc., cannot perform their desired tasks accurately if their uninhibited structural vibrations are not controlled properly. Structural health monitoring and online reliability calculation are emerging means to handle system-imposed uncertainties. As stochastic forcing is unavoidable in most engineering systems, it often needs to be taken into account in the control design process. In this research, smart material technology is utilized for structural health monitoring and control in order to keep the system in a reliable performance range. In this regard, a reliability-based cost function is assigned both for controller gain optimization and for sensor placement. The proposed scheme is implemented and verified for a wing section. A comparison of results for the frequency responses is presented to show the potential applicability of the technique.

  17. Reliability of reflectance measures in passive filters

    NASA Astrophysics Data System (ADS)

    Saldiva de André, Carmen Diva; Afonso de André, Paulo; Rocha, Francisco Marcelo; Saldiva, Paulo Hilário Nascimento; Carvalho de Oliveira, Regiani; Singer, Julio M.

    2014-08-01

    Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is km = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding km = 0.56 for these occasions and km = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion.
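The reliability of the mean of m filters follows the Spearman-Brown relation k_m = m*rho / (1 + (m-1)*rho), which can be inverted to recover the single-filter reliability and then solved for the m that reaches a target. The sketch below reproduces the abstract's worst-case numbers (k_3 = 0.56, target 0.80, giving m = 10):

```python
import math

def single_unit_reliability(k_m, m):
    """Invert Spearman-Brown: reliability of one filter, given the
    reliability k_m of the mean of m filters."""
    return k_m / (m - (m - 1) * k_m)

def filters_needed(rho, target):
    """Smallest m whose mean-of-m reliability reaches the target:
    m >= target*(1 - rho) / (rho*(1 - target))."""
    return math.ceil(target * (1 - rho) / (rho * (1 - target)))

rho = single_unit_reliability(0.56, 3)  # worst-case occasions above
m = filters_needed(rho, 0.80)           # 10 filters per occasion
```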

  18. Highly Accurate Quartic Force Fields, Vibrational Frequencies, and Spectroscopic Constants for Cyclic and Linear C3H3(+)

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.

    2011-01-01

High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3(+) molecular cation, referred to as c-C3H3(+) and l-C3H3(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants, using both second-order vibrational perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable value available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3(+) and l-C3H3(+) are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.

  19. Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.

    2003-01-01

    The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.

  20. Accurate electromagnetic modeling of terahertz detectors

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; McGrath, William R.

    2004-01-01

    Twin slot antennas coupled to superconducting devices have been developed over the years as single pixel detectors in the terahertz (THz) frequency range for space-based and astronomy applications. Used either for mixing or direct detection, they have been the object of several investigations, and are currently being developed for several missions funded or co-funded by NASA. Although they have shown promising performance in terms of noise and sensitivity, so far they have usually also shown a considerable disagreement between calculated and measured performance, especially when considering center frequency and bandwidth. In this paper we present a thorough and accurate electromagnetic model of the complete detector and we compare the results of calculations with measurements. Starting from a model of the embedding circuit, the effect of all the other elements in the detector on the coupled power has been analyzed. An extensive variety of measured and calculated data, as presented in this paper, demonstrates the effectiveness and reliability of the electromagnetic model at frequencies between 600 GHz and 2.5 THz.

  1. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both wrapped-phase error and Gray-code decoding error can result in period jump errors, which lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain a low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis and further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
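The core of Gray-code-assisted unwrapping is that the projected Gray-code word identifies which 2*pi period a pixel's wrapped phase belongs to. A minimal sketch of that step (standard Gray-code decoding plus period offset; the paper's unequal-period, dual-frequency error correction is not reproduced here):

```python
import math

def gray_to_binary(g: int) -> int:
    """Decode a Gray-coded integer into its sequence index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def absolute_phase(wrapped_phase: float, gray_word: int) -> float:
    """Absolute ('analog') code: the wrapped phase plus 2*pi times the
    period index decoded from the Gray-code patterns."""
    return wrapped_phase + 2 * math.pi * gray_to_binary(gray_word)

# Gray word 0b110 decodes to period index 4, so the absolute phase is
# the wrapped phase shifted by 8*pi.
print(absolute_phase(1.0, 0b110))
```

A decoding error of one Gray-code bit at a period boundary shifts the result by a full 2*pi, which is exactly the period jump error the paper's dual-code difference is designed to catch.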

  3. Bootstrap study of genome-enabled prediction reliabilities using haplotype blocks across Nordic Red cattle breeds.

    PubMed

    Cuyabano, B C D; Su, G; Rosa, G J M; Lund, M S; Gianola, D

    2015-10-01

    This study compared the accuracy of genome-enabled prediction models using individual single nucleotide polymorphisms (SNP) or haplotype blocks as covariates when using either a single breed or a combined population of Nordic Red cattle. The main objective was to compare predictions of breeding values of complex traits using a combined training population with haplotype blocks, with predictions using a single breed as training population and individual SNP as predictors. To compare the prediction reliabilities, bootstrap samples were taken from the test data set. With the bootstrapped samples of prediction reliabilities, we built and graphed confidence ellipses to allow comparisons. Finally, measures of statistical distances were used to calculate the gain in predictive ability. Our analyses are innovative in the context of assessment of predictive models, allowing a better understanding of prediction reliabilities and providing a statistical basis to assess whether one prediction scenario is indeed more accurate than another. An ANOVA indicated that use of haplotype blocks produced significant gains mainly when Bayesian mixture models were used but not when Bayesian BLUP was fitted to the data. Furthermore, when haplotype blocks were used to train prediction models in a combined Nordic Red cattle population, we obtained up to a statistically significant 5.5% average gain in prediction accuracy, over predictions using individual SNP and training the model with a single breed. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
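The bootstrap comparison described here, resampling the test set and summarizing the reliability gain of one prediction scenario over another, can be illustrated with a toy sketch. The data and the simple paired-difference summary are hypothetical; the paper additionally builds confidence ellipses and statistical-distance measures:

```python
import random
import statistics

def bootstrap_gain(rel_a, rel_b, n_boot=1000, seed=1):
    """Bootstrap the mean paired difference in prediction reliability
    between two scenarios evaluated on the same test animals.
    rel_a, rel_b: per-animal reliabilities under scenarios A and B."""
    rng = random.Random(seed)
    n = len(rel_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        diffs.append(statistics.mean(rel_b[i] - rel_a[i] for i in idx))
    return statistics.mean(diffs)

# Hypothetical per-animal reliabilities: scenario B is uniformly 0.05 better.
rel_a = [0.50, 0.55, 0.60, 0.52, 0.58]
rel_b = [x + 0.05 for x in rel_a]
print(round(bootstrap_gain(rel_a, rel_b, n_boot=200), 3))
```

The spread of the bootstrapped differences (not just their mean) is what supports statements like the paper's "statistically significant 5.5% average gain".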

  4. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag, an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag-of-fragments", a vector that counts the number of occurrences of each fragment, and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine of the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions only of parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than three other filter methods: SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.

  5. Frame-of-Reference Training: Establishing Reliable Assessment of Teaching Effectiveness.

    PubMed

    Newman, Lori R; Brodsky, Dara; Jones, Richard N; Schwartzstein, Richard M; Atkins, Katharyn Meredith; Roberts, David H

    2016-01-01

    Frame-of-reference (FOR) training has been used successfully to teach faculty how to produce accurate and reliable workplace-based ratings when assessing a performance. We engaged 21 Harvard Medical School faculty members in our pilot and implementation studies to determine the effectiveness of using FOR training to assess health professionals' teaching performances. All faculty were novices at rating their peers' teaching effectiveness. Before FOR training, we asked participants to evaluate a recorded lecture using a criterion-based peer assessment of medical lecturing instrument. At the start of training, we discussed the instrument and emphasized its precise behavioral standards. During training, participants practiced rating lectures and received immediate feedback on how well they categorized and scored performances as compared with expert-derived scores of the same lectures. At the conclusion of the training, we asked participants to rate a post-training recorded lecture to determine agreement with the experts' scores. Participants and experts had greater rating agreement for the post-training lecture compared with the pretraining lecture. Through this investigation, we determined that FOR training is a feasible method to teach faculty how to accurately and reliably assess medical lectures. Medical school instructors and continuing education presenters should have the opportunity to be observed and receive feedback from trained peer observers. Our results show that it is possible to use FOR rater training to teach peer observers how to accurately rate medical lectures. The process is time efficient and offers the prospect for assessment and feedback beyond traditional learner evaluation of instruction.

  6. An effective method to accurately calculate the phase space factors for β - β - decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neacsu, Andrei; Horoi, Mihai

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. Here, we present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  7. Toward accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  8. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide measurements of flight time and vertical jump height as valid and reliable as those from more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athletes' vertical jumps.
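The flight-time-to-height conversion the study relies on follows from projectile motion: the jumper is airborne for time t, spends t/2 rising, and so reaches h = g*(t/2)^2/2 = g*t^2/8. A minimal sketch (the 240 fps frame rate is from the abstract; the specific frame counts below are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def flight_time_from_frames(n_frames, fps=240):
    """Flight time in seconds from the number of airborne video frames
    at a given frame rate (240 fps in the study)."""
    return n_frames / fps

def jump_height(flight_time_s):
    """Vertical jump height from flight time, assuming takeoff and
    landing occur at the same body configuration: h = g * t^2 / 8."""
    return G * flight_time_s ** 2 / 8

t = flight_time_from_frames(120)        # 120 airborne frames -> 0.5 s
print(round(jump_height(t) * 100, 1))   # height in cm; 0.5 s gives ~30.7 cm
```

At 240 fps a single miscounted frame changes the flight time by only ~4 ms, which is why a high-speed camera can approach the precision of an IR platform.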

  9. Reliability of Space-Shuttle Pressure Vessels with Random Batch Effects

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Kulkarni, Pandurang M.

    2000-01-01

    In this article we revisit the problem of estimating the joint reliability against failure by stress rupture of a group of fiber-wrapped pressure vessels used on Space-Shuttle missions. The available test data were obtained from an experiment conducted at the U.S. Department of Energy Lawrence Livermore Laboratory (LLL) in which scaled-down vessels were subjected to life testing at four accelerated levels of pressure. We estimate the reliability assuming that both the Shuttle and LLL vessels were chosen at random in a two-stage process from an infinite population with spools of fiber as the primary sampling unit. Two main objectives of this work are: (1) to obtain practical estimates of reliability taking into account random spool effects and (2) to obtain a realistic assessment of estimation accuracy under the random model. Here, reliability is calculated in terms of a 'system' of 22 fiber-wrapped pressure vessels, taking into account typical pressures and exposure times experienced by Shuttle vessels. Comparisons are made with previous studies. The main conclusion of this study is that, although point estimates of reliability are still in the 'comfort zone,' it is advisable to plan for replacement of the pressure vessels well before the expected lifetime of 100 missions per Shuttle Orbiter. Under a random-spool model, there is simply not enough information in the LLL data to provide reasonable assurance that such replacement would not be necessary.

  10. Confirmatory Factor Analysis and Test-Retest Reliability of the Alcohol and Drug Confrontation Scale (ADCS)

    PubMed Central

    Polcin, Douglas L.; Galloway, Gantt P.; Bond, Jason; Korcha, Rachael; Greenfield, Thomas K.

    2008-01-01

    The addiction field lacks an accepted definition and reliable measure of confrontation. The Alcohol and Drug Confrontation Scale (ADCS) defines confrontation as warnings about the potential consequences of substance use. To assess psychometric properties, 323 individuals entering recovery houses in U.S. urban and suburban areas were interviewed between 2003 and 2005 (20% women, 68% white). Analyses included test-retest reliability, confirmatory factor analysis, and measures of internal consistency. Findings support the ADCS as a reliable way of assessing two factors: Internal Support and External Intensity. Confrontation was experienced as supportive, accurate, and helpful. Additional studies should assess confrontation in different contexts. PMID:20686635

  11. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for different common scenarios.

  12. Choosing a reliability inspection plan for interval censored data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lu; Anderson-Cook, Christine Michaela

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for different common scenarios.

  13. Reliability and validity of quantifying absolute muscle hardness using ultrasound elastography.

    PubMed

    Chino, Kentaro; Akagi, Ryota; Dohi, Michiko; Fukashiro, Senshi; Takahashi, Hideyuki

    2012-01-01

    Muscle hardness is a mechanical property that represents transverse muscle stiffness. A quantitative method that uses ultrasound elastography for quantifying absolute human muscle hardness has been previously devised; however, its reliability and validity have not been completely verified. This study aimed to verify the reliability and validity of this quantitative method. The Young's moduli of seven tissue-mimicking materials (in vitro; Young's modulus range, 20-80 kPa; increments of 10 kPa) and the human medial gastrocnemius muscle (in vivo) were quantified using ultrasound elastography. On the basis of the strain/Young's modulus ratio of two reference materials, one hard and one soft (Young's moduli of 7 and 30 kPa, respectively), the Young's moduli of the tissue-mimicking materials and medial gastrocnemius muscle were calculated. The intra- and inter-investigator reliability of the method was confirmed on the basis of acceptably low coefficients of variation (≤6.9%) and substantially high intraclass correlation coefficients (≥0.77) obtained from all measurements. The correlation coefficient between the Young's moduli of the tissue-mimicking materials obtained using a mechanical method and ultrasound elastography was 0.996, which was equivalent to values previously obtained using magnetic resonance elastography. The Young's moduli of the medial gastrocnemius muscle obtained using ultrasound elastography were within the range of values previously obtained using magnetic resonance elastography. The reliability and validity of the quantitative method for measuring absolute muscle hardness using ultrasound elastography were thus verified.

  14. Quantitative Phase Microscopy for Accurate Characterization of Microlens Arrays

    NASA Astrophysics Data System (ADS)

    Grilli, Simonetta; Miccio, Lisa; Merola, Francesco; Finizio, Andrea; Paturzo, Melania; Coppola, Sara; Vespini, Veronica; Ferraro, Pietro

    Microlens arrays are of fundamental importance in a wide variety of applications in optics and photonics. This chapter deals with an accurate digital holography-based characterization of both liquid and polymeric microlenses fabricated by an innovative pyro-electrowetting process. The actuation of liquid and polymeric films is obtained through the use of pyroelectric charges generated into polar dielectric lithium niobate crystals.

  15. Reliability and validity of the symptoms of major depressive illness.

    PubMed

    Mazure, C; Nelson, J C; Price, L H

    1986-05-01

    In two consecutive studies, we examined the interrater reliability and then the concurrent validity of interview ratings for individual symptoms of major depressive illness. The concurrent validity of symptoms was determined by assessing the degree to which symptoms observed or reported during an interview were observed in daily behavior. Results indicated that most signs and symptoms of major depression and melancholia can be reliably rated by clinicians during a semistructured interview. Ratings of observable symptoms (signs) assessed during the interview were valid indicators of dysfunction observed in daily behavior. Several but not all ratings based on patient report of symptoms were at variance with observation. These discordant patient-reported symptoms may have value as subjective reports but were not accurate descriptions of observed dysfunction.

  16. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  17. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  18. Dog cloning with in vivo matured oocytes obtained using electric chemiluminescence immunoassay-predicted ovulation method.

    PubMed

    Lee, Seunghoon; Zhao, Minghui; No, Jingu; Nam, Yoonseok; Im, Gi-Sun; Hur, Tai-Young

    2017-01-01

    Radioactive immunoassay (RIA) is a traditional serum hormone assay method, but the application of the method in reproductive studies is limited by the associated radioactivity. The aim of the present study was to evaluate the reliability of RIA and to compare its canine serum progesterone concentration determination accuracy to that of the electric chemiluminescence immunoassay (ECLI). In vivo matured oocytes were utilized for canine somatic cell nuclear transfer (SCNT), and serum progesterone levels were assessed to accurately determine ovulation and oocyte maturation. Canine serum progesterone concentrations during both proestrus and estrus were analyzed by RIA and ECLI to determine the ovulation day. Although both methods detected similar progesterone levels before ovulation, the mean progesterone concentration determined using ECLI was significantly higher than that of RIA three days before ovulation. Following ovulation, oocytes were collected by surgery, and a lower percentage of mature oocytes was observed using ECLI (39%) as compared to RIA (67%) when 4-8 ng/mL of progesterone was used for determination of ovulation. A high percentage of mature oocytes was observed using ECLI when 6-15 ng/mL of progesterone was used for ovulation determination. To determine whether ECLI could be used for canine cloning, six canines were selected as oocyte donors, and two puppies were obtained after SCNT and embryo transfer. In conclusion, compared to the traditional RIA method, the ECLI method is a safe and reliable method for canine cloning.

  19. Dog cloning with in vivo matured oocytes obtained using electric chemiluminescence immunoassay-predicted ovulation method

    PubMed Central

    No, Jingu; Nam, Yoonseok; Im, Gi-Sun; Hur, Tai-Young

    2017-01-01

    Radioactive immunoassay (RIA) is a traditional serum hormone assay method, but the application of the method in reproductive studies is limited by the associated radioactivity. The aim of the present study was to evaluate the reliability of RIA and to compare its canine serum progesterone concentration determination accuracy to that of the electric chemiluminescence immunoassay (ECLI). In vivo matured oocytes were utilized for canine somatic cell nuclear transfer (SCNT), and serum progesterone levels were assessed to accurately determine ovulation and oocyte maturation. Canine serum progesterone concentrations during both proestrus and estrus were analyzed by RIA and ECLI to determine the ovulation day. Although both methods detected similar progesterone levels before ovulation, the mean progesterone concentration determined using ECLI was significantly higher than that of RIA three days before ovulation. Following ovulation, oocytes were collected by surgery, and a lower percentage of mature oocytes was observed using ECLI (39%) as compared to RIA (67%) when 4-8 ng/mL of progesterone was used for determination of ovulation. A high percentage of mature oocytes was observed using ECLI when 6-15 ng/mL of progesterone was used for ovulation determination. To determine whether ECLI could be used for canine cloning, six canines were selected as oocyte donors, and two puppies were obtained after SCNT and embryo transfer. In conclusion, compared to the traditional RIA method, the ECLI method is a safe and reliable method for canine cloning. PMID:28288197

  20. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish the relationship between safety factors and reliability. The results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. Establishing the interrelation between the two concepts opens an avenue to specify safety factors based on reliability. In cases where there are several modes of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that probabilistic methods can eliminate existing over-design or under-design. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
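    The Part 1 case (random actual stress, deterministic yield stress) lends itself to a compact illustration. The sketch below is not taken from the report; it assumes a normally distributed actual stress with a given coefficient of variation, under which the central safety factor and the reliability level determine each other in closed form:

```python
from statistics import NormalDist

def safety_factor_for_reliability(target_r: float, stress_cv: float) -> float:
    """Central safety factor n = Y / mean_stress required so that
    P(stress <= Y) = target_r, for normally distributed actual stress with
    coefficient of variation stress_cv and deterministic yield stress Y."""
    z = NormalDist().inv_cdf(target_r)   # standard-normal quantile
    return 1.0 + z * stress_cv

def reliability_for_safety_factor(n: float, stress_cv: float) -> float:
    """Inverse mapping: reliability implied by a given central safety factor."""
    return NormalDist().cdf((n - 1.0) / stress_cv)

if __name__ == "__main__":
    cv = 0.10  # assumed 10% scatter in the actual stress
    print(f"safety factor for R = 0.999: {safety_factor_for_reliability(0.999, cv):.3f}")
    print(f"reliability at n = 1.5:      {reliability_for_safety_factor(1.5, cv):.6f}")
```

    With 10% stress scatter, for example, a target reliability of 0.999 corresponds to a central safety factor of about 1.31, and demanding still higher reliability drives the required factor (and hence the design margin) up without bound.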

  1. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
    A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  2. Accurate determination of the binding energy of the formic acid dimer: The importance of geometry relaxation

    NASA Astrophysics Data System (ADS)

    Kalescky, Robert; Kraka, Elfi; Cremer, Dieter

    2014-02-01

    The formic acid dimer in its C2h-symmetrical cyclic form is stabilized by two equivalent H-bonds. The currently accepted interaction energy is 18.75 kcal/mol whereas the experimental binding energy D0 value is only 14.22 ±0.12 kcal/mol [F. Kollipost, R. W. Larsen, A. V. Domanskaya, M. Nörenberg, and M. A. Suhm, J. Chem. Phys. 136, 151101 (2012)]. Calculation of the binding energies De and D0 at the CCSD(T) (Coupled Cluster with Single and Double excitations and perturbative Triple excitations)/CBS (Complete Basis Set) level of theory, utilizing CCSD(T)/CBS geometries and the frequencies of the dimer and monomer, reveals that there is a 3.2 kcal/mol difference between interaction energy and binding energy De, which results from (i) not relaxing the geometry of the monomers upon dissociation of the dimer and (ii) approximating CCSD(T) correlation effects with MP2. The most accurate CCSD(T)/CBS values obtained in this work are De = 15.55 and D0 = 14.32 kcal/mol where the latter binding energy differs from the experimental value by 0.1 kcal/mol. The necessity of employing augmented VQZ and VPZ calculations and relaxing monomer geometries of H-bonded complexes upon dissociation to obtain reliable binding energies is emphasized.
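    The relations among the three energies quoted above can be checked with simple bookkeeping. The numbers below come directly from the abstract; the split of the 3.2 kcal/mol correction between monomer relaxation and the MP2-to-CCSD(T) correlation treatment is not given there and is therefore not reproduced:

```python
# Energies in kcal/mol, taken from the abstract.
interaction_energy = 18.75   # frozen-monomer H-bond interaction energy
De = 15.55                   # binding energy with relaxed monomers, CCSD(T)/CBS
D0 = 14.32                   # De corrected for zero-point vibrational energy
D0_experiment = 14.22        # Kollipost et al. (2012)

relax_plus_mp2_correction = interaction_energy - De   # the 3.2 kcal/mol discussed
delta_zpe = De - D0                                   # zero-point energy change on binding

print(f"relaxation + correlation correction: {relax_plus_mp2_correction:.2f} kcal/mol")
print(f"zero-point energy change on binding: {delta_zpe:.2f} kcal/mol")
print(f"deviation from experiment:           {D0 - D0_experiment:.2f} kcal/mol")
```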

  3. Accurate measurement of dispersion data through short and narrow tubes used in very high-pressure liquid chromatography.

    PubMed

    Gritti, Fabrice; McDonald, Thomas; Gilar, Martin

    2015-09-04

    An original method is proposed for the accurate and reproducible measurement of the time-based dispersion properties of short (L < 50 cm) and narrow (rc < 50 μm) tubes at the mobile phase flow rates typically used in very high-pressure liquid chromatography (vHPLC). Such tubes are used to minimize sample dispersion in vHPLC; however, their dispersion characteristics cannot be accurately measured at such flow rates because of the dispersion contributions of the vHPLC injector and detector. It is shown that using longer and wider tubes (volumes > 10 μL) enables a reliable measurement of the dispersion data. We confirmed that the dimensionless plot of the reduced dispersion coefficient versus the reduced linear velocity (Peclet number) depends on the aspect ratio, L/rc, of the tube, and unexpectedly also on the diffusion coefficient of the analyte. This dimensionless plot can easily be obtained for a large-volume tube that has the same aspect ratio as the short and narrow tube, for the same diffusion coefficient; the dispersion data for the small-volume tube are then directly extrapolated from this plot. For instance, the maximum volume variances of 75 μm × 30.5 cm and 100 μm × 30.5 cm prototype finger-tightened connecting tubes are found to be 0.10 and 0.30 μL², respectively, with an accuracy of a few percent and a precision smaller than seven percent. Copyright © 2015 Elsevier B.V. All rights reserved.
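    For context, the classical open-tube benchmark against which such measurements are judged is the Taylor-Aris result. The sketch below is illustrative only: at vHPLC velocities the residence time can be shorter than the radial diffusion time rc²/Dm, so the Taylor regime is not reached and the formula overestimates the dispersion, which is one reason direct measurement (as proposed in this work) is needed. The flow rate and diffusivity values are assumptions, not taken from the paper:

```python
import math

def taylor_aris_volume_variance(rc, L, F, Dm):
    """Taylor-Aris peak volume variance (m^6) of an open cylindrical tube of
    radius rc (m) and length L (m), at volumetric flow rate F (m^3/s), for an
    analyte of diffusivity Dm (m^2/s):
        K = Dm + rc^2 u^2 / (48 Dm),  sigma_t^2 = 2 K L / u^3,
        sigma_V^2 = sigma_t^2 * F^2."""
    u = F / (math.pi * rc**2)             # mean linear velocity
    K = Dm + rc**2 * u**2 / (48.0 * Dm)   # effective axial dispersion coefficient
    sigma_t2 = 2.0 * K * L / u**3         # time-based peak variance
    return sigma_t2 * F**2

if __name__ == "__main__":
    UL2_PER_M6 = 1e18  # 1 m^6 = 1e18 uL^2
    s2 = taylor_aris_volume_variance(rc=37.5e-6, L=0.305,
                                     F=0.5e-6 / 60.0,  # 0.5 mL/min in m^3/s
                                     Dm=1.0e-9)
    print(f"Taylor-Aris volume variance: {s2 * UL2_PER_M6:.2f} uL^2")
```

    For a 75 μm × 30.5 cm tube at these assumed conditions the formula predicts a variance several times larger than the 0.10 μL² measured here, consistent with the short-residence-time deviation noted above.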

  4. Reliability and validity of soft copy images based on flat-panel detector in pneumoconiosis classification: comparison with the analog radiographs.

    PubMed

    Lee, Won-Jeong; Choi, Byung-Soon

    2013-06-01

    The aim of this study was to evaluate the reliability and validity of soft copy images based on the flat-panel detector of digital radiography (DR-FPD soft copy images), compared to analog radiographs (ARs), in pneumoconiosis classification and diagnosis. DR-FPD soft copy images and ARs from 349 subjects were independently read by four experienced readers according to the International Labor Organization 2000 guidelines. DR-FPD soft copy images were used to obtain a consensus reading (CR) by all readers as the gold standard. Reliability and validity were evaluated by κ statistics and receiver operating characteristic (ROC) analysis, respectively. In small opacity, overall inter-reader agreement on DR-FPD soft copy images was significantly higher than that on ARs, but it was not significantly different for large opacity and costophrenic angle obliteration. In small opacity, agreement of DR-FPD soft copy images with the CR was significantly higher than that of ARs with the CR; it was also higher for pleural plaque and thickening. ROC areas did not differ significantly between DR-FPD soft copy images and ARs. DR-FPD soft copy images showed accurate and reliable results in pneumoconiosis classification and diagnosis compared to ARs. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
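    The κ statistic used for the reliability analysis can be sketched in a few lines. This is the standard Cohen's kappa for two readers (the study used four readers and the ILO classification categories; the ratings below are hypothetical):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two readers over the same cases:
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n  # observed agreement
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # expected chance agreement from each reader's marginal frequencies
    p_e = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n**2
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)

if __name__ == "__main__":
    # Hypothetical presence/absence calls by two readers on ten films.
    reader1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    reader2 = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
    print(f"kappa = {cohens_kappa(reader1, reader2):.3f}")
```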

  5. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. The solution to the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, corresponding to a reliability p of unity (p = 1). Weight can be reduced to a small value for the most failure-prone design, with a reliability that approaches zero (p = 0). Reliability can be chosen differently for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas a somewhat lower reliability can be accepted for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code as the deterministic analysis tool, (2) the fast probabilistic integrator (the FPI module of the NESSUS software) as the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards as the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
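    The inverted-S relationship between optimum weight and reliability p can be illustrated with a deliberately simplified one-member model, not the MSC/Nastran-NESSUS-CometBoards pipeline of the report: size a member to carry the p-quantile of a normally distributed load and let weight scale with the required capacity. Weight then grows without bound as p approaches 1 and shrinks as p approaches 0, with the nominal design at p = 0.5:

```python
from statistics import NormalDist

def optimum_weight(p, w_nominal=100.0, load_mean=1.0, load_sd=0.15):
    """Hypothetical one-member sketch of weight versus reliability p:
    the member is sized to carry the p-quantile of a normally distributed
    load, and weight scales linearly with the required capacity."""
    capacity = load_mean + NormalDist().inv_cdf(p) * load_sd
    return w_nominal * capacity / load_mean

if __name__ == "__main__":
    for p in (0.01, 0.25, 0.5, 0.75, 0.99, 0.999):
        print(f"p = {p:5.3f}  ->  weight = {optimum_weight(p):8.2f}")
```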

  6. Simplest chronoscope. III. Further comparisons between reaction times obtained by meterstick versus machine.

    PubMed

    Montare, Alberto

    2013-06-01

    The three classical Donders' reaction time (RT) tasks (simple, choice, and discriminative RT) were employed to compare reaction time scores from college students obtained by use of Montare's simplest chronoscope (meterstick) methodology with scores obtained by use of a digital-readout multi-choice reaction timer (machine). Five hypotheses were tested. Simple RT, choice RT, and discriminative RT were all faster when obtained by meterstick than by machine. The meterstick method showed higher reliability than the machine method and was less variable. The meterstick method of the simplest chronoscope may help to alleviate the longstanding problems of low reliability and high variability of reaction time performances, while at the same time producing faster performance on Donders' simple, choice, and discriminative RT tasks than the machine method.
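    The meterstick ("simplest chronoscope") converts a catch distance into a reaction time through the free-fall relation t = sqrt(2d/g), which is the physical basis of the method:

```python
import math

G = 9.80665  # standard gravity, m/s^2

def reaction_time_from_drop(distance_m: float) -> float:
    """Meterstick chronoscope conversion: a freely falling stick caught after
    distance d has fallen for t = sqrt(2 d / g) seconds."""
    return math.sqrt(2.0 * distance_m / G)

if __name__ == "__main__":
    for cm in (10, 15, 20, 30):
        rt_ms = reaction_time_from_drop(cm / 100.0) * 1000.0
        print(f"caught at {cm:2d} cm  ->  RT = {rt_ms:.0f} ms")
```

    A catch at 20 cm, for instance, corresponds to roughly 200 ms, in the range typical of simple RT.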

  7. Inter-Rater Reliability of Total Body Score-A Scale for Quantification of Corpse Decomposition.

    PubMed

    Nawrocka, Marta; Frątczak, Katarzyna; Matuszewski, Szymon

    2016-05-01

    The degree of body decomposition can be quantified using Total Body Score (TBS), a scale frequently used in taphonomic and entomological studies of decomposition. Here, the inter-rater reliability of the scale is analyzed. The study involved 120 laypersons, who were trained in the use of the scale. Participants scored the decomposition of pig carcasses from photographs. It was found that the scale, when used by different people, gives homogeneous results irrespective of user qualifications (Krippendorff's alpha for all participants was 0.818). The study also indicated that carcasses in advanced decomposition receive significantly less accurate scores. Moreover, scores for cadavers in mosaic decomposition (i.e., showing signs of at least two stages of decomposition) were found to be less accurate. These results demonstrate that the scale may be regarded as inter-rater reliable. Some propositions for refinement of the scale are also discussed. © 2016 American Academy of Forensic Sciences.

  8. Dynamic sensing model for accurate detectability of environmental phenomena using event wireless sensor network

    NASA Astrophysics Data System (ADS)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

    Wireless Sensor Networks (WSNs) have been widely used for monitoring, where sensors are deployed to operate independently to sense abnormal phenomena. Most of the proposed environmental monitoring systems are designed based on a predetermined sensing range, which does not reflect sensor reliability, event characteristics, or environmental conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are presented theoretically to examine their adaptability and applicability to real environmental applications. The numerical results of the experimental evaluation showed that the probabilistic sensing model provides accurate observation and detectability of an event, and it can be utilized for different environmental scenarios.

  9. Inter- and Intrarater Reliability Using Different Software Versions of E4D Compare in Dental Education.

    PubMed

    Callan, Richard S; Cooper, Jeril R; Young, Nancy B; Mollica, Anthony G; Furness, Alan R; Looney, Stephen W

    2015-06-01

    The problems associated with intra- and interexaminer reliability when assessing preclinical performance continue to hinder dental educators' ability to provide accurate and meaningful feedback to students. Many studies have been conducted to evaluate the validity of utilizing various technologies to assist educators in achieving that goal. The purpose of this study was to compare two different versions of E4D Compare software to determine if either could be expected to deliver consistent and reliable comparative results, independent of the individual utilizing the technology. Five faculty members obtained E4D digital images of students' attempts (sample model) at ideal gold crown preparations for tooth #30 performed on typodont teeth. These images were compared to an ideal (master model) preparation utilizing two versions of E4D Compare software. The percent correlations between and within these faculty members were recorded and averaged. The intraclass correlation coefficient was used to measure both inter- and intrarater agreement among the examiners. The study found that using the older version of E4D Compare did not result in acceptable intra- or interrater agreement among the examiners. However, the newer version of E4D Compare, when combined with the Nevo scanner, resulted in a remarkable degree of agreement both between and within the examiners. These results suggest that consistent and reliable results can be expected when utilizing this technology under the protocol described in this study.

  10. Nanoscale deformation measurements for reliability assessment of material interfaces

    NASA Astrophysics Data System (ADS)

    Keller, Jürgen; Gollhardt, Astrid; Vogel, Dietmar; Michel, Bernd

    2006-03-01

    With the development and application of micro/nano electronic mechanical systems (MEMS, NEMS) for a variety of market segments, new reliability issues will arise. The understanding of material interfaces is the key to a successful design for reliability of MEMS/NEMS and sensor systems. Furthermore, in the field of BioMEMS, newly developed advanced materials and well-known engineering materials are combined in the absence of fully developed reliability concepts for such devices and components. In addition, the increasing interface-to-volume ratio in highly integrated systems and in nanoparticle-filled materials poses challenges for experimental reliability evaluation. New strategies for reliability assessment on the submicron scale are essential to fulfil the needs of future devices. In this paper, an experimental method with nanoscale resolution for the measurement of thermo-mechanical deformation at material interfaces is introduced. The determination of displacement fields is based on scanning probe microscopy (SPM) data. In-situ SPM scans of the analyzed object (i.e., the material interface) are carried out at different thermo-mechanical load states. The obtained images are compared by grayscale cross-correlation algorithms. This allows the tracking of local image patterns of the analyzed surface structure. The measurement results are full-field displacement fields with nanometer resolution. With the obtained data, the mixed-mode type of loading at material interfaces can be analyzed with the highest resolution for future needs in microsystems and nanotechnology.
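    The grayscale cross-correlation step at the heart of the method can be illustrated in one dimension. The sketch below recovers an integer shift of an intensity pattern by maximizing zero-normalized cross-correlation; the real SPM workflow operates on 2-D scans and refines to sub-pixel accuracy, neither of which is attempted here:

```python
import math

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-length intensity profiles."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def track_shift(ref, cur, start, size, max_shift):
    """Locate the pattern ref[start:start+size] in the deformed profile `cur`
    by maximizing NCC over integer candidate shifts; returns the best shift."""
    patch = ref[start:start + size]
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: ncc(patch, cur[start + s:start + s + size]))

if __name__ == "__main__":
    ref = [math.sin(0.3 * i) for i in range(40)]   # synthetic surface profile
    cur = [0.0, 0.0, 0.0] + ref[:-3]               # same profile displaced by +3 samples
    print("recovered shift:", track_shift(ref, cur, start=10, size=8, max_shift=5))
```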

  11. The Rorschach Perceptual-Thinking Index (PTI): An Examination of Reliability, Validity, and Diagnostic Efficiency

    ERIC Educational Resources Information Center

    Hilsenroth, Mark J.; Eudell-Simmons, Erin M.; DeFife, Jared A.; Charnas, Jocelyn W.

    2007-01-01

    This study investigates the reliability, validity, and diagnostic efficiency of the Rorschach Perceptual-Thinking Index (PTI) in relation to the accurate identification of psychotic disorder (PTD) patients. The PTI is a revision of the Rorschach Schizophrenia Index (SCZI), designed to achieve several criteria, including an increase in the…

  12. Can autism be diagnosed accurately in children under 3 years?

    PubMed

    Stone, W L; Lee, E B; Ashford, L; Brissie, J; Hepburn, S L; Coonrod, E E; Weiss, B H

    1999-02-01

    This study investigated the reliability and stability of an autism diagnosis in children under 3 years of age who received independent diagnostic evaluations from two clinicians during two consecutive yearly evaluations. Strong evidence for the reliability and stability of the diagnosis was obtained. Diagnostic agreement between clinicians was higher for the broader discrimination of autism spectrum vs. no autism spectrum than for the more specific discrimination of autism vs. PDD-NOS. The diagnosis of autism at age 2 was more stable than the diagnosis of PDD-NOS at the same age. Social deficits and delays in spoken language were the most prominent DSM-IV characteristics evidenced by very young children with autism.

  13. Accurate beacon positioning method for satellite-to-ground optical communication.

    PubMed

    Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing

    2017-12-11

    In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main factors influencing the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon, which takes the correlation coefficients obtained by curve fitting of the image data as weights. By performing a long-distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the beacon centroid. Furthermore, for light spots distorted by atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
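    The conventional gray centroid algorithm that the new method is compared against is simply an intensity-weighted centroid of the spot image. A minimal sketch follows; the paper's proposed algorithm additionally weights image data by curve-fit correlation coefficients, which is not shown here:

```python
def gray_centroid(image):
    """Conventional gray (intensity-weighted) centroid of a 2-D spot image,
    given as a list of rows of non-negative intensities; returns (x, y)
    in pixel coordinates (column, row)."""
    total = sum(sum(row) for row in image)
    x = sum(v * j for row in image for j, v in enumerate(row)) / total
    y = sum(v * i for i, row in enumerate(image) for v in row) / total
    return x, y

if __name__ == "__main__":
    # Tiny synthetic beacon spot centered at pixel (1, 1).
    spot = [[0, 1, 0],
            [1, 4, 1],
            [0, 1, 0]]
    print("centroid:", gray_centroid(spot))
```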

  14. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
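    The Barabási-Albert model referenced above grows a scale-free network by preferential attachment: each new node connects to m existing nodes chosen with probability proportional to their current degree. A pure-Python sketch (the parameters are illustrative, not the grid sizes analyzed in the report):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a Barabasi-Albert scale-free graph on n nodes: start from a
    clique on m+1 nodes, then attach each new node to m distinct existing
    nodes picked with probability proportional to current degree.
    Returns the edge list."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m) for j in range(i + 1, m + 1)]  # seed clique
    targets = [v for e in edges for v in e]  # each node appears degree-many times
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                 # degree-weighted sampling
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

if __name__ == "__main__":
    edges = barabasi_albert(200, 2, seed=1)
    print(f"{len(edges)} edges on {len({v for e in edges for v in e})} nodes")
```

    The degree-weighted pool `targets` is what produces the power-law degree distribution characteristic of scale-free networks.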

  15. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  16. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, denoted Tc) of a double-precision computation of a variable parameters logistic map (VPLM). First, using the proposed method, we obtain reliable solutions for the logistic map. Second, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments with the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
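    The notion of a reliable computation time for a chaotic map can be illustrated with a fixed-parameter logistic map: follow two double-precision trajectories started a rounding error apart and count the steps until they separate by more than a tolerance. This is a simplified proxy for the paper's RCT, which is defined against reliable reference solutions of the variable-parameters map:

```python
def reliable_steps(x0, r=4.0, perturbation=1e-15, tol=1e-3, max_iter=10_000):
    """Proxy for a reliable computation time: iterate x -> r*x*(1-x) from x0
    and from x0 + perturbation, and return the number of steps until the two
    double-precision trajectories diverge by more than tol."""
    a, b = x0, x0 + perturbation
    for step in range(max_iter):
        if abs(a - b) > tol:
            return step
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
    return max_iter

if __name__ == "__main__":
    # For r = 4 (fully chaotic) errors roughly double per step on average,
    # so a 1e-15 perturbation reaches 1e-3 after a few dozen iterations.
    print("steps until divergence:", reliable_steps(0.3))
```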

  17. Reliability and Validity of Quantifying Absolute Muscle Hardness Using Ultrasound Elastography

    PubMed Central

    Chino, Kentaro; Akagi, Ryota; Dohi, Michiko; Fukashiro, Senshi; Takahashi, Hideyuki

    2012-01-01

    Muscle hardness is a mechanical property that represents transverse muscle stiffness. A quantitative method that uses ultrasound elastography for quantifying absolute human muscle hardness has been previously devised; however, its reliability and validity have not been completely verified. This study aimed to verify the reliability and validity of this quantitative method. The Young’s moduli of seven tissue-mimicking materials (in vitro; Young’s modulus range, 20–80 kPa; increments of 10 kPa) and the human medial gastrocnemius muscle (in vivo) were quantified using ultrasound elastography. On the basis of the strain/Young’s modulus ratio of two reference materials, one soft and one hard (Young’s moduli of 7 and 30 kPa, respectively), the Young’s moduli of the tissue-mimicking materials and medial gastrocnemius muscle were calculated. The intra- and inter-investigator reliability of the method was confirmed on the basis of acceptably low coefficients of variation (≤6.9%) and substantially high intraclass correlation coefficients (≥0.77) obtained from all measurements. The correlation coefficient between the Young’s moduli of the tissue-mimicking materials obtained using a mechanical method and ultrasound elastography was 0.996, which was equivalent to values previously obtained using magnetic resonance elastography. The Young’s moduli of the medial gastrocnemius muscle obtained using ultrasound elastography were within the range of values previously obtained using magnetic resonance elastography. The reliability and validity of the quantitative method for measuring absolute muscle hardness using ultrasound elastography were thus verified. PMID:23029231

  18. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  19. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2017-11-05

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central problem is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  20. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central problem is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  1. Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.

    PubMed

    Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti

    2017-06-28

    The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
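    The Bland-Altman analysis used above summarizes method agreement by the mean difference (bias) and its 95% limits of agreement. A minimal sketch with hypothetical paired measurements:

```python
import math

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired measurements:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias +/- 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

if __name__ == "__main__":
    # Hypothetical inter-arch distances (mm): manual caliper vs digital model.
    manual = [38.2, 41.0, 36.5, 39.8, 40.1]
    digital = [38.0, 41.3, 36.6, 39.5, 40.2]
    bias, lo, hi = bland_altman(manual, digital)
    print(f"bias = {bias:+.2f} mm, limits of agreement = [{lo:+.2f}, {hi:+.2f}] mm")
```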

  2. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  3. Reliability, Validity and Usefulness of 30–15 Intermittent Fitness Test in Female Soccer Players

    PubMed Central

    Čović, Nedim; Jelešković, Eldin; Alić, Haris; Rađo, Izet; Kafedžić, Erduan; Sporiš, Goran; McMaster, Daniel T.; Milanović, Zoran

    2016-01-01

    PURPOSE: The aim of this study was to examine the reliability, validity and usefulness of the 30–15IFT in competitive female soccer players. METHODS: Seventeen elite female soccer players participated in the study. A within subject test-retest study design was utilized to assess the reliability of the 30–15 intermittent fitness test (IFT). Seven days prior to 30–15IFT, subjects performed a continuous aerobic running test (CT) under laboratory conditions to assess the criterion validity of the 30–15IFT. End running velocity (VCT and VIFT), peak heart rate (HRpeak) and maximal oxygen consumption (VO2max) were collected and/or estimated for both tests. RESULTS: VIFT (ICC = 0.91; CV = 1.8%), HRpeak (ICC = 0.94; CV = 1.2%), and VO2max (ICC = 0.94; CV = 1.6%) obtained from the 30–15IFT were all deemed highly reliable (p > 0.05). Pearson product moment correlations between the CT and 30–15IFT for VO2max, HRpeak and end running velocity were large (r = 0.67, p = 0.013), very large (r = 0.77, p = 0.02) and large (r = 0.57, p = 0.042), respectively. CONCLUSION: Current findings suggest that the 30–15IFT is a valid and reliable intermittent aerobic fitness test of elite female soccer players. The findings have also provided practitioners with evidence to support the accurate detection of meaningful individual changes in VIFT of 0.5 km/h (1 stage) and HRpeak of 2 bpm. This information may assist coaches in monitoring “real” aerobic fitness changes to better inform training of female intermittent team sport athletes. Lastly, coaches could use the 30–15IFT as a practical alternative to laboratory based assessments to assess and monitor intermittent aerobic fitness changes in their athletes. PMID:27909408

  4. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
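
    As a rough one-dimensional analogue of this idea (a sketch under simplifying assumptions, not the authors' smoothing-element formulation), a discrete least-squares functional can be combined with a smoothness penalty on second differences:

```python
import numpy as np

def penalized_smooth(y, lam=10.0):
    """Minimise ||u - y||^2 + lam * ||D2 u||^2, where D2 is the second-
    difference operator: a 1-D analogue of combining a discrete
    least-squares functional with a smoothness penalty."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2   # normal equations of the combined functional
    return np.linalg.solve(A, np.asarray(y, float))
```

    A constant (or linear) input passes through unchanged, because the second-difference penalty vanishes on it; noisy data are pulled toward a smooth trend from which gradients can be differenced more reliably.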

  5. Problems in obtaining perfect images by single-particle electron cryomicroscopy of biological structures in amorphous ice.

    PubMed

    Henderson, Richard; McMullan, Greg

    2013-02-01

    Theoretical considerations together with simulations of single-particle electron cryomicroscopy images of biological assemblies in ice demonstrate that atomic structures should be obtainable from images of a few thousand asymmetric units, provided the molecular weight of the whole assembly being studied is greater than the minimum needed for accurate position and orientation determination. However, with present methods of specimen preparation and current microscope and detector technologies, many more particles are needed, and the alignment of smaller assemblies is difficult or impossible. Only larger structures, with enough signal to allow good orientation determination and with enough images to allow averaging of many hundreds of thousands or even millions of asymmetric units, have successfully produced high-resolution maps. In this review, we compare the contrast of experimental electron cryomicroscopy images of two smaller molecular assemblies, namely apoferritin and beta-galactosidase, with that expected from perfect simulated images calculated from their known X-ray structures. We show that the contrast and signal-to-noise ratio of experimental images still require significant improvement before it will be possible to realize the full potential of single-particle electron cryomicroscopy. In particular, although reasonably good orientations can be obtained for beta-galactosidase, we have been unable to obtain reliable orientation determination from experimental images of apoferritin. Simulations suggest that at least 2-fold improvement of the contrast in experimental images at ~10 Å resolution is needed and should be possible.

  6. GHM method for obtaining rational solutions of nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.

  7. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  8. Can emergency physicians accurately and reliably assess acute vertigo in the emergency department?

    PubMed

    Vanni, Simone; Nazerian, Peiman; Casati, Carlotta; Moroni, Federico; Risso, Michele; Ottaviani, Maddalena; Pecci, Rudi; Pepe, Giuseppe; Vannucchi, Paolo; Grifoni, Stefano

    2015-04-01

    To validate a clinical diagnostic tool, used by emergency physicians (EPs), to diagnose the central cause of patients presenting with vertigo, and to determine interrater reliability of this tool. A convenience sample of adult patients presenting to a single academic ED with isolated vertigo (i.e. vertigo without other neurological deficits) was prospectively evaluated with STANDING (SponTAneous Nystagmus, Direction, head Impulse test, standiNG) by five trained EPs. The first step focused on the presence of spontaneous nystagmus, the second on the direction of nystagmus, the third on the head impulse test and the fourth on gait. The local standard practice, senior audiologist evaluation corroborated by neuroimaging when deemed appropriate, was considered the reference standard. Sensitivity and specificity of STANDING were calculated. On the first 30 patients, inter-observer agreement among EPs was also assessed. Five EPs with limited experience in nystagmus assessment volunteered to participate in the present study, enrolling 98 patients. Their average evaluation time was 9.9 ± 2.8 min (range 6-17). Central acute vertigo was suspected in 16 (16.3%) patients. There were 13 true positives, three false positives, 81 true negatives and one false negative, with a high sensitivity (92.9%, 95% CI 70-100%) and specificity (96.4%, 95% CI 93-38%) for central acute vertigo according to senior audiologist evaluation. The Cohen's kappas of the first, second, third and fourth steps of the STANDING were 0.86, 0.93, 0.73 and 0.78, respectively. The whole test showed a good inter-observer agreement (k = 0.76, 95% CI 0.45-1). In the hands of EPs, STANDING showed a good inter-observer agreement and accuracy validated against the local standard of care. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
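
    The reported accuracy figures follow directly from the confusion-matrix counts given in the abstract:

```python
def sens_spec(tp, fp, tn, fn):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# counts reported for the STANDING tool: 13 TP, 3 FP, 81 TN, 1 FN
sens, spec = sens_spec(13, 3, 81, 1)
# matches the reported 92.9% sensitivity and 96.4% specificity
```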

  9. Is Cancer Information Exchanged on Social Media Scientifically Accurate?

    PubMed

    Gage-Bouchard, Elizabeth A; LaValley, Susan; Warunek, Molli; Beaupin, Lynda Kwon; Mollica, Michelle

    2017-07-19

    Cancer patients and their caregivers are increasingly using social media as a platform to share cancer experiences, connect with support, and exchange cancer-related information. Yet, little is known about the nature and scientific accuracy of cancer-related information exchanged on social media. We conducted a content analysis of 12 months of data from 18 publicly available Facebook Pages hosted by parents of children with acute lymphoblastic leukemia (N = 15,852 posts) and extracted all exchanges of medically-oriented cancer information. We systematically coded for themes in the nature of cancer-related information exchanged on personal Facebook Pages, and two oncology experts independently evaluated the scientific accuracy of each post. Of the 15,852 total posts, 171 posts contained medically-oriented cancer information. The most frequent type of cancer information exchanged was information related to treatment protocols and health services use (35%), followed by information related to side effects and late effects (26%), medication (16%), medical caregiving strategies (13%), alternative and complementary therapies (8%), and other (2%). Overall, 67% of all cancer information exchanged was deemed medically/scientifically accurate, 19% was not medically/scientifically accurate, and 14% described unproven treatment modalities. These findings highlight the potential utility of social media as a cancer-related resource, but also indicate that providers should focus on recommending reliable, evidence-based sources to patients and caregivers.

  10. Reliability of self-reported antisocial personality disorder symptoms among substance abusers.

    PubMed

    Cottler, L B; Compton, W M; Ridenour, T A; Ben Abdallah, A; Gallagher, T

    1998-02-01

    It is estimated that from 20 to 60% of substance abusers meet criteria for Antisocial Personality Disorder (APD). An accurate and reliable diagnosis is important because persons meeting criteria for APD, by the nature of their disorder, are less likely to change behaviors and more likely to relapse to both substance abuse and high-risk behaviors. To understand more about the reliability of the disorder and symptoms of APD, the Diagnostic Interview Schedule Version III-R (DIS) was administered to 453 substance abusers ascertained from treatment programs and from the general population (St Louis Epidemiological Catchment Area (ECA) follow-up study). Estimates of the 1-week test-retest reliability for the childhood conduct disorder criterion, the adult antisocial behavior criterion, and the APD diagnosis fell in the good agreement range, as measured by kappa. The internal consistency of these DIS symptoms was adequate to acceptable. Reliability of the individual DIS criteria designed to measure childhood conduct disorder ranged from fair to good for most items; reliability was slightly higher for the adult antisocial behavior symptom items. Finally, self-reported 'liars' were no more unreliable in their reports of their behaviors than 'non-liars'.
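
    The kappa statistic used here to judge test-retest agreement can be computed as follows (a generic sketch, not the study's analysis code):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two parallel ratings (e.g. test vs. retest):
    observed agreement corrected for chance agreement."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2          # chance agreement
    return (po - pe) / (1 - pe)
```

    Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why ranges such as "fair to good" are conventionally mapped onto kappa intervals.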

  11. Reliability and Validity of Ten Consumer Activity Trackers Depend on Walking Speed.

    PubMed

    Fokkema, Tryntsje; Kooiman, Thea J M; Krijnen, Wim P; VAN DER Schans, Cees P; DE Groot, Martijn

    2017-04-01

    To examine the test-retest reliability and validity of ten activity trackers for step counting at three different walking speeds. Thirty-one healthy participants walked twice on a treadmill for 30 min while wearing 10 activity trackers (Polar Loop, Garmin Vivosmart, Fitbit Charge HR, Apple Watch Sport, Pebble Smartwatch, Samsung Gear S, Misfit Flash, Jawbone Up Move, Flyfit, and Moves). Participants walked at three speeds for 10 min each: slow (3.2 km/h), average (4.8 km/h), and vigorous (6.4 km/h). To measure test-retest reliability, intraclass correlations (ICC) were determined between the first and second treadmill tests. Validity was determined by comparing the trackers with the gold standard (hand counting), using mean differences, mean absolute percentage errors, and ICC. Statistical differences were calculated by paired-sample t tests, Wilcoxon signed-rank tests, and by constructing Bland-Altman plots. Test-retest reliability varied, with ICC ranging from -0.02 to 0.97. Validity varied between trackers and walking speeds, with mean differences between the gold standard and activity trackers ranging from 0.0 to 26.4%. Most trackers showed relatively low ICC and broad limits of agreement in the Bland-Altman plots at the different speeds. For the slow walking speed, the Garmin Vivosmart and Fitbit Charge HR showed the most accurate results. The Garmin Vivosmart and Apple Watch Sport demonstrated the best accuracy at an average walking speed. For vigorous walking, the Apple Watch Sport, Pebble Smartwatch, and Samsung Gear S exhibited the most accurate results. Test-retest reliability and validity of activity trackers depend on walking speed. In general, consumer activity trackers perform better at an average and vigorous walking speed than at a slower walking speed.
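
    The Bland-Altman construction mentioned above reduces to a mean bias and 95% limits of agreement between each tracker and the hand count; a minimal sketch with illustrative (not the study's) data:

```python
import numpy as np

def bland_altman_limits(device, gold):
    """Mean bias and 95% limits of agreement between a tracker and the
    gold-standard count (the quantities plotted in a Bland-Altman plot)."""
    diff = np.asarray(device, float) - np.asarray(gold, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical step counts for one tracker vs. hand counting
bias, lo, hi = bland_altman_limits([100, 102, 98, 101], [100, 100, 100, 100])
```

    "Broad limits of agreement" in the abstract means a large interval [lo, hi], i.e. the tracker's error varies widely from trial to trial even if the mean bias is small.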

  12. An enhanced reliability-oriented workforce planning model for process industry using combined fuzzy goal programming and differential evolution approach

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2018-03-01

    This paper draws on the "human reliability" concept as a structure for gaining insight into maintenance workforce assessment in a process industry. Human reliability hinges on developing the reliability of humans to a threshold that guides the maintenance workforce to execute accurate decisions within the limits of resources and time allocations. This concept offers a worthwhile point of departure for three adjustments to the literature model, in terms of maintenance time, workforce performance and return on workforce investment. The presented structure breaks new ground in maintenance workforce theory and practice from a number of perspectives. First, we have successfully implemented fuzzy goal programming (FGP) and differential evolution (DE) techniques for the solution of an optimisation problem in the maintenance of a process plant for the first time. The results obtained in this work showed better quality of solution from the DE algorithm compared with those of the genetic algorithm and the particle swarm optimisation algorithm, demonstrating the superiority of the proposed procedure over them. Second, the analytical discourse, framed on stochastic theory and focused on a specific application to a process plant in Nigeria, is novel. The work provides more insights into maintenance workforce planning during overhaul rework and overtime maintenance activities in manufacturing systems, and demonstrates the capacity to generate substantially helpful information for practice.
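
    Differential evolution itself is a simple population-based optimiser; a minimal DE/rand/1/bin sketch (a generic illustration, not the paper's FGP-DE hybrid or its objective function) looks like:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100):
    """Minimal DE/rand/1/bin sketch for minimising f over box bounds."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)          # force at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    trial.append(min(max(v, lo), hi))            # clip to bounds
                else:
                    trial.append(pop[i][j])                      # crossover
            s = f(trial)
            if s <= scores[i]:                     # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

    In the paper's setting the objective would be the fuzzified goal-programming achievement function rather than the toy function used for illustration here.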

  13. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
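
    The DSF pair potential (in the Fennell-Gezelter form) damps and shifts the Coulomb interaction so that both the energy and the force vanish smoothly at the cutoff; a sketch in reduced units (the 1/4πε₀ prefactor is omitted, and the parameter values are illustrative assumptions):

```python
import math

def dsf_coulomb(qi, qj, r, alpha=0.2, rc=12.0):
    """Damped shifted-force (DSF) Coulomb pair energy, Fennell-Gezelter form:
    energy and force both go to zero at the cutoff rc."""
    if r >= rc:
        return 0.0
    erfc = math.erfc
    shift = erfc(alpha * rc) / rc                              # energy shift
    slope = (erfc(alpha * rc) / rc ** 2                        # force shift
             + 2.0 * alpha / math.sqrt(math.pi)
             * math.exp(-(alpha * rc) ** 2) / rc)
    return qi * qj * (erfc(alpha * r) / r - shift + slope * (r - rc))
```

    The force-shifting term is what makes the potential usable without long-range (Ewald-type) corrections, which in turn allows a seamless embedding in the dual-resolution setup.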

  14. The reliability of in-training assessment when performance improvement is taken into account.

    PubMed

    van Lohuizen, Mirjam T; Kuks, Jan B M; van Hell, Elisabeth A; Raat, A N; Stewart, Roy E; Cohen-Schotanus, Janke

    2010-12-01

    During in-training assessment students are frequently assessed over a longer period of time, and it can therefore be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time, and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for a reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When performance improvement was taken into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying the reliability of in-training assessment, performance improvement should be considered.
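
    A classical-test-theory analogue of the "number of assessments needed" estimate is the Spearman-Brown prophecy formula (a simplification for illustration; the study itself used generalisability theory with multilevel analysis, which is what yields the 17 vs. 11 figures):

```python
import math

def composite_reliability(r_single, n):
    """Reliability of the mean of n parallel assessments (Spearman-Brown)."""
    return n * r_single / (1 + (n - 1) * r_single)

def assessments_needed(r_single, target=0.80):
    """Smallest n whose composite reliability reaches the target,
    with a small guard against floating-point noise."""
    n = target * (1 - r_single) / (r_single * (1 - target))
    return math.ceil(n - 1e-9)
```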

  15. Alberta infant motor scale: reliability and validity when used on preterm infants in Taiwan.

    PubMed

    Jeng, S F; Yau, K I; Chen, L C; Hsiao, S F

    2000-02-01

    The goal of this study was to examine the reliability and validity of measurements obtained with the Alberta Infant Motor Scale (AIMS) for evaluation of preterm infants in Taiwan. Two independent groups of preterm infants were used to investigate the reliability (n = 45) and validity (n = 41) of the AIMS. In the reliability study, the AIMS was administered to the infants by a physical therapist, and infant performance was videotaped. The performance was then rescored by the same therapist and by 2 other therapists to examine the intrarater and interrater reliability. In the validity study, the AIMS and the Bayley Motor Scale were administered to the infants at 6 and 12 months of age to examine criterion-related validity. Intraclass correlation coefficients (ICCs) for intrarater and interrater reliability of measurements obtained with the AIMS were high (ICC = .97-.99). The AIMS scores correlated with the Bayley Motor Scale scores at 6 and 12 months (r = .78 and .90), although the AIMS scores at 6 months were only moderately predictive of motor function at 12 months (r = .56). The results suggest that measurements obtained with the AIMS have acceptable reliability and concurrent validity but limited predictive value for evaluating preterm Taiwanese infants.

  16. Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System

    DTIC Science & Technology

    2010-09-13

    model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type... vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use... building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of...

  17. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
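
    The absolute calibration applied to the perimeter detectors can be sketched as a standard two-point (gain/offset) calibration against two uniform reference sources (a generic illustration, not the paper's algebraic calibration-transport step):

```python
import numpy as np

def two_point_nuc(raw_low, raw_high, t_low, t_high):
    """Per-detector gain and offset from two uniform (blackbody) reference
    frames at known temperatures: the kind of absolute calibration applied
    to the perimeter detectors of the focal-plane array."""
    raw_low = np.asarray(raw_low, float)
    raw_high = np.asarray(raw_high, float)
    gain = (t_high - t_low) / (raw_high - raw_low)
    offset = t_low - gain * raw_low
    return gain, offset

def correct(frame, gain, offset):
    """Apply the per-detector correction to a raw frame."""
    return gain * np.asarray(frame, float) + offset
```

    In the paper's scheme only the perimeter detectors receive this absolute calibration; the interior detectors inherit it through the motion-based algebraic algorithm, so camera operation is never disrupted.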

  18. Software Estimation: Developing an Accurate, Reliable Method

    DTIC Science & Technology

    2011-08-01

    ...Lake, CA 93555-6110... the systems engineering team is responsible for system and software requirements. Process Dashboard is a software planning and tracking tool... Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI...

  19. Reliable pre-eclampsia pathways based on multiple independent microarray data sets.

    PubMed

    Kawasaki, Kaoru; Kondoh, Eiji; Chigusa, Yoshitsugu; Ujita, Mari; Murakami, Ryusuke; Mogami, Haruta; Brown, J B; Okuno, Yasushi; Konishi, Ikuo

    2015-02-01

    Pre-eclampsia is a multifactorial disorder characterized by heterogeneous clinical manifestations. Gene expression profiling of preeclamptic placentas has provided different and even opposite results, partly due to data compromised by various experimental artefacts. Here we aimed to identify reliable pre-eclampsia-specific pathways using multiple independent microarray data sets. Gene expression data of control and preeclamptic placentas were obtained from Gene Expression Omnibus. Single-sample gene-set enrichment analysis was performed to generate gene-set activation scores of 9707 pathways obtained from the Molecular Signatures Database. Candidate pathways were identified by t-test-based screening using the data sets GSE10588, GSE14722 and GSE25906. Additionally, recursive feature elimination was applied to arrive at a further reduced set of pathways. To assess the validity of the pre-eclampsia pathways, a statistically validated protocol was executed using five data sets, including two independent validation data sets, GSE30186 and GSE44711. Quantitative real-time PCR was performed for genes in a panel of potential pre-eclampsia pathways using placentas from 20 women with normal or severely preeclamptic singleton pregnancies (n = 10 each). A panel of ten pathways was found to discriminate women with pre-eclampsia from controls with high accuracy. Among these were pathways not previously associated with pre-eclampsia, such as the GABA receptor pathway, as well as pathways that have already been linked to pre-eclampsia, such as the glutathione and CDKN1C pathways. mRNA expression of GABRA3 (GABA receptor pathway), GCLC and GCLM (glutathione metabolic pathway), and CDKN1C was significantly reduced in the preeclamptic placentas. In conclusion, ten accurate and reliable pre-eclampsia pathways were identified based on multiple independent microarray data sets. A pathway-based classification may be a worthwhile approach to elucidate the pathogenesis of pre-eclampsia.
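
    The t-test-based screening step can be sketched as follows (illustrative data and threshold; in the study, the inputs would be the per-sample activation scores of the 9707 pathways):

```python
import numpy as np

def tstat(x, y):
    """Pooled two-sample t statistic between control and case scores."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / nx + 1 / ny))

def screen(scores_ctrl, scores_pe, threshold=3.0):
    """Keep the indices of pathways whose |t| exceeds a chosen threshold."""
    return [i for i, (x, y) in enumerate(zip(scores_ctrl, scores_pe))
            if abs(tstat(x, y)) > threshold]
```

    Surviving pathways would then be pruned further by recursive feature elimination, as described above.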

  20. Accurate and Reliable Prediction of the Binding Affinities of Macrocycles to Their Protein Targets.

    PubMed

    Yu, Haoyu S; Deng, Yuqing; Wu, Yujie; Sindhikara, Dan; Rask, Amy R; Kimura, Takayuki; Abel, Robert; Wang, Lingle

    2017-12-12

    Macrocycles have been emerging as a very important drug class in the past few decades, largely due to their expanded chemical diversity benefiting from advances in synthetic methods. Macrocyclization has been recognized as an effective way to restrict the conformational space of acyclic small molecule inhibitors, with the hope of improving potency, selectivity, and metabolic stability. Because of their relatively larger size as compared to typical small molecule drugs and the complexity of their structures, efficient sampling of the accessible macrocycle conformational space and accurate prediction of their binding affinities to their target protein receptors pose a great challenge of central importance in computational macrocycle drug design. In this article, we present a novel method for relative binding free energy calculations between macrocycles with different ring sizes and between the macrocycles and their corresponding acyclic counterparts. We have applied the method to seven pharmaceutically interesting data sets taken from recent drug discovery projects including 33 macrocyclic ligands covering a diverse chemical space. The predicted binding free energies are in good agreement with experimental data, with an overall root-mean-square error (RMSE) of 0.94 kcal/mol. This is, to our knowledge, the first time that the free energy of the macrocyclization of linear molecules has been directly calculated with rigorous physics-based free energy calculation methods, and we anticipate the outstanding accuracy demonstrated here across a broad range of target classes may have significant implications for macrocycle drug discovery.

  1. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  2. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
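
    Dempster's rule of combination, which the weighted-averaging step modifies, can be written compactly over frozenset focal elements (a generic sketch, not the authors' code):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose keys are frozenset
    focal elements, normalising out the conflict mass K."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y               # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

    The counterintuitive behaviour the abstract refers to arises when the conflict K approaches 1: the normalisation by 1 − K then amplifies tiny shared masses, which is exactly what reliability-based weighting of the sensor reports is meant to prevent.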

  3. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. Careful parameter selection is therefore essential when estimating reliability: the reliability of a system may increase or decrease depending on the parameters chosen, so the factors that most heavily affect system reliability must be identified. Reusability is now widely studied across research areas and is the basis of Component-Based Systems (CBS); Component-Based Software Engineering (CBSE) concepts can save cost, time and human skill, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is applied to both small- and large-scale problems in which accurate results are difficult to obtain because of uncertainty or randomness. Soft computing techniques also have many applications in medicine: clinical medicine makes significant use of fuzzy logic and neural-network methodology, basic medical science most frequently combines neural networks with genetic algorithms, and medical scientists have shown considerable interest in applying soft computing in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software in new products, providing quality with a saving of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC), and presents the working of soft computing

  4. Arm span and ulnar length are reliable and accurate estimates of recumbent length and height in a multiethnic population of infants and children under 6 years of age.

    PubMed

    Forman, Michele R; Zhu, Yeyi; Hernandez, Ladia M; Himes, John H; Dong, Yongquan; Danish, Robert K; James, Kyla E; Caulfield, Laura E; Kerver, Jean M; Arab, Lenore; Voss, Paula; Hale, Daniel E; Kanafani, Nadim; Hirschfeld, Steven

    2014-09-01

    Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the predicted values by ULC, ULR, ULG, and arm span were examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R(2) = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R(2) = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R(2) = 0.87), ULR (R(2) = 0.85), and ULG (R(2) = 0.88) was less comparable with arm span (R(2) = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children. © 2014 American Society for Nutrition.

  5. Reliability of programs specified with equational specifications

    NASA Astrophysics Data System (ADS)

    Nikolik, Borislav

    Ultrareliability is desirable (and sometimes demanded by regulatory authorities) for safety-critical applications such as commercial flight-control programs, medical applications, and nuclear reactor control programs. A method called the Term Redundancy Method (TRM) is proposed for obtaining ultrareliable programs through specification-based testing. Current specification-based testing schemes need a prohibitively large number of test cases to estimate ultrareliability, assume that an accurate program-usage distribution is available prior to testing, and assume the availability of a test oracle. It is shown how to obtain ultrareliable programs (probability of failure near zero) with a practical number of test cases, without an accurate usage distribution, and without a test oracle. TRM applies to the class of decision Abstract Data Type (ADT) programs specified with unconditional equational specifications, and is restricted to programs that do not exceed certain efficiency constraints in generating test cases. The effectiveness of TRM in failure detection and recovery is demonstrated on formulas from the aircraft collision avoidance system TCAS.

  6. Validity and reliability of a new ankle dorsiflexion measurement device.

    PubMed

    Gatt, Alfred; Chockalingam, Nachiappan

    2013-08-01

    The assessment of the maximum ankle dorsiflexion angle is an important clinical examination procedure. Evidence shows that the traditional goniometer is highly unreliable, and various goniometer designs for measuring the maximum ankle dorsiflexion angle rely on the application of a known force to obtain reliable results. Hence, an innovative ankle dorsiflexion measurement device was designed to make this measurement more reliable by holding the foot in a selected posture without the application of a known moment. To report on the comprehensive validity and reliability testing carried out on the new device. Following validity testing, four different trials to test the reliability of the ankle dorsiflexion measurement device were performed. These trials included inter-rater and intra-rater testing with a controlled moment, intra-rater reliability testing with knees flexed and extended without a controlled moment, intra-rater testing with a patient population, and inter-rater reliability testing between four raters of varying experience without a controlled moment. All raters were blinded. A series of trials to test intra-rater and inter-rater reliability. The intra-rater reliability intraclass correlation coefficient was 0.98 and the inter-rater reliability intraclass correlation coefficient (2,1) was 0.953 with a controlled moment. With an uncontrolled moment, very high intra-rater reliability was also achieved (intraclass correlation coefficient = 0.94 with knees extended and 0.95 with knees flexed). For the trial investigating test-retest reliability with actual patients, an intraclass correlation coefficient of 0.99 was obtained. In the trial investigating four different raters with an uncontrolled moment, an intraclass correlation coefficient of 0.91 was achieved. The new ankle dorsiflexion measurement device is a valid and reliable device for measuring ankle dorsiflexion in both healthy subjects and patients, with both controlled and uncontrolled moments.

  7. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    CIE 2015 August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the...3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  8. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

    All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and more reliable, and yields a consistent prediction for the Fe-porphyrin complex.

  9. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
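
    As an illustration of the individual load-velocity approach the authors recommend, the sketch below fits a first-order polynomial (a straight line) to load-velocity pairs and predicts the 1RM as the load at which the fitted velocity reaches a minimal velocity threshold. The data points and the 0.17 m/s threshold are illustrative assumptions, not values from the study.

```python
# Predict 1RM from an individual load-velocity relationship using an
# ordinary least-squares straight line v = a + b * load.

def fit_line(loads, velocities):
    """Return intercept a and slope b of the least-squares line."""
    n = len(loads)
    mx = sum(loads) / n
    my = sum(velocities) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(loads, velocities))
    sxx = sum((x - mx) ** 2 for x in loads)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def predict_1rm(loads, velocities, mvt=0.17):
    """Load (kg) at which the fitted line reaches the assumed minimal
    velocity threshold mvt (m/s) -- the estimated 1RM."""
    a, b = fit_line(loads, velocities)
    return (mvt - a) / b

# Hypothetical session data: load (kg) vs. mean concentric velocity (m/s)
loads = [20, 35, 50, 65]
velocities = [1.30, 1.05, 0.78, 0.52]
print(round(predict_1rm(loads, velocities), 1))  # → 85.2
```

    With perfectly linear data the prediction is exact, which is why the between-session stability of the fitted line matters more than the model order.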

  10. Reproducibility of the measurement of central corneal thickness in healthy subjects obtained with the optical low coherence reflectometry pachymeter and comparison with the ultrasonic pachymetry.

    PubMed

    Garza-Leon, Manuel; Plancarte-Lozano, Eduardo; Valle-Penella, Agustín Del; Guzmán-Martínez, María de Lourdes; Villarreal-González, Andrés

    2018-01-01

    Corneal pachymetry is widely used for refractive surgery and for follow-up in keratoconus; accurate measurement is essential for safe surgery. To assess the intraobserver reliability of central corneal thickness (CCT) measurements using optical low-coherence reflectometry (OLCR) technology and its agreement with ultrasonic pachymetry (US). Randomized, prospective comparative evaluation of diagnostic technology. One randomly selected healthy eye of each subject was scanned three times with both devices. The intraobserver within-subject standard deviation (Sw), coefficient of variation (CVw), and intraclass correlation coefficient (ICC) were obtained for the reliability analysis; for the agreement study, data were analyzed using the paired-sample t test and the Bland-Altman limits-of-agreement (LoA) method. The mean of the three scans from each device was used to assess the LoA. The study enrolled 30 eyes of 30 subjects with an average age of 28.70 ± 8.06 years. For repeatability, the Sw values were 3.41 and 5.96 µm, the intraobserver CVw values were 2% and 4%, and the ICCs were 0.991 and 0.988 for OLCR and US, respectively. The mean CCT difference between OLCR and US was 8.90 ± 9.03 µm (95% confidence interval: 5.52-2.27 µm), and the LoA was 35.40 µm. OLCR technology provided reliable intraobserver CCT measurements. Both pachymetry measurements may be used interchangeably with a minimal calibration adjustment. Copyright: © 2018 Permanyer.
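
    The Bland-Altman limits-of-agreement analysis used in this record can be sketched as follows; the paired readings below are invented for illustration and are not the study's data.

```python
# Bland-Altman agreement analysis for two measurement devices:
# mean bias and 95% limits of agreement (bias +/- 1.96 * SD of differences).
from math import sqrt

def bland_altman(a, b):
    """Return (bias, (lower LoA, upper LoA)) for paired readings a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical CCT readings (micrometres) from two pachymeters
olcr = [540, 552, 531, 565, 548, 559]
us   = [533, 545, 520, 558, 541, 549]
bias, (lo, hi) = bland_altman(olcr, us)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

    A consistent positive bias, as reported above for OLCR vs. US, shows up directly in the first returned value, while the LoA width quantifies how interchangeable the devices are.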

  11. Toward increased reliability in the electric power industry: direct temperature measurement in transformers using fiber optic sensors

    NASA Astrophysics Data System (ADS)

    McDonald, Greg

    1998-09-01

    Optimal loading, prevention of catastrophic failures, and reduced maintenance costs are some of the benefits of accurate determination of hot-spot winding temperatures in medium- and high-power transformers. Temperature estimates obtained using current theoretical models are not always accurate, and traditional technologies (IR, thermocouples, etc.) are unsuitable or inadequate for direct measurement. Nortech fiber-optic temperature sensors offer EMI immunity and chemical resistance and are a proven solution to the problem. The Nortech sensor's measurement principle is based on variations in the spectral absorption of a fiber-mounted semiconductor chip, and probes are interchangeable with no need for recalibration. The total length of probe plus extension can be up to several hundred meters, allowing the system electronics to be located in the control room or mounted in the transformer instrumentation cabinet. All of the sensor materials withstand temperatures up to 250 °C and have demonstrated excellent resistance to the harsh transformer environment (hot oil, kerosene). Thorough study of the problem and industry collaboration in testing and installation allow Nortech to identify and meet the need for durable probes, leak-proof feedthroughs, standard computer interfaces, and measurement software. Refined probe technology, the method's simplicity, and reliable calibration are all assets that should lead to growing acceptance of this type of direct measurement in the electric power industry.

  12. Extraction Optimization for Obtaining Artemisia capillaris Extract with High Anti-Inflammatory Activity in RAW 264.7 Macrophage Cells

    PubMed Central

    Jang, Mi; Jeong, Seung-Weon; Kim, Bum-Keun; Kim, Jong-Chan

    2015-01-01

    Plant extracts have been used as herbal medicines to treat a wide variety of human diseases. We used response surface methodology (RSM) to optimize the Artemisia capillaris Thunb. extraction parameters (extraction temperature, extraction time, and ethanol concentration) for obtaining an extract with high anti-inflammatory activity at the cellular level. The optimum ranges for the extraction parameters were predicted by superimposing 4-dimensional response surface plots of the lipopolysaccharide- (LPS-) induced PGE2 and NO production and by cytotoxicity of A. capillaris Thunb. extracts. The ranges of extraction conditions used for determining the optimal conditions were extraction temperatures of 57–65°C, ethanol concentrations of 45–57%, and extraction times of 5.5–6.8 h. On the basis of the results, a model with a central composite design was considered to be accurate and reliable for predicting the anti-inflammation activity of extracts at the cellular level. These approaches can provide a logical starting point for developing novel anti-inflammatory substances from natural products and will be helpful for the full utilization of A. capillaris Thunb. The crude extract obtained can be used in some A. capillaris Thunb.-related health care products. PMID:26075271

  13. Reliability of bounce drop jump parameters within elite male rugby players.

    PubMed

    Costley, Lisa; Wallace, Eric; Johnston, Michael; Kennedy, Rodney

    2017-07-25

    The aims of the study were to investigate the number of familiarisation sessions required to establish reliability of the bounce drop jump (BDJ) and the subsequent reliability once familiarisation is achieved. Seventeen trained male athletes completed 4 BDJs in 4 separate testing sessions. Force-time data from a 20 cm BDJ were obtained using two force plates (ensuring ground contact < 250 ms). Subjects were instructed to 'jump for maximal height and minimal contact time' while the best and average of four jumps were compared. A series of performance variables were assessed in both eccentric and concentric phases including jump height, contact time, flight time, reactive strength index (RSI), peak power, rate of force development (RFD) and actual dropping height (ADH). Reliability was assessed using the intraclass correlation coefficient (ICC) and coefficient of variation (CV), while familiarisation was assessed using a repeated measures analysis of variance (ANOVA). The majority of DJ parameters exhibited excellent reliability with no systematic bias evident, while the average of 4 trials provided greater reliability. With the exception of vertical stiffness (CV: 12.0%) and RFD (CV: 16.2%), all variables demonstrated low within-subject variation (CV range: 3.1-8.9%). Relative reliability was very poor for ADH, with heights ranging from 14.87 to 29.85 cm. High levels of reliability can be obtained from the BDJ, with the exception of vertical stiffness and RFD; however, extreme caution must be taken when comparing DJ results between individuals and squads due to large discrepancies between actual drop height and platform height.

  14. The reliability of the Australasian Triage Scale: a meta-analysis

    PubMed Central

    Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir

    2015-01-01

    BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been defined; therefore, we present a meta-analysis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported sample size, reliability coefficients, and an adequate description of the ATS reliability assessment. The guidelines for reporting reliability and agreement studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method of moments estimator. RESULTS: Six studies were ultimately included. The pooled coefficient for the ATS was substantial at 0.428 (95%CI 0.340–0.509). The rate of mis-triage was less than fifty percent. Agreement on the adult version is higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs more development to reach an almost perfect agreement. PMID:26056538
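
    The pooling step described above (z-transforming per-study reliability coefficients, weighting, and back-transforming) can be sketched as below. For simplicity this shows a fixed-effect inverse-variance pool, whereas the authors used a random-effects model; the coefficients and sample sizes are illustrative.

```python
# Pool reliability coefficients via Fisher's z-transformation:
# z = atanh(r) with approximate variance 1/(n - 3), inverse-variance
# weighting, then back-transform the pooled z with tanh.
from math import atanh, tanh

def pool_reliability(rs, ns):
    """Fixed-effect pooled reliability from per-study (r, n) pairs."""
    zs = [atanh(r) for r in rs]
    ws = [n - 3 for n in ns]          # inverse-variance weights
    z_pooled = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return tanh(z_pooled)

# Illustrative per-study coefficients and sample sizes
rs = [0.35, 0.42, 0.48, 0.51]
ns = [120, 250, 90, 400]
print(round(pool_reliability(rs, ns), 3))
```

    A random-effects pool would additionally add a between-study variance estimate to each study's variance before weighting.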

  15. The reliability, accuracy and minimal detectable difference of a multi-segment kinematic model of the foot-shoe complex.

    PubMed

    Bishop, Chris; Paul, Gunther; Thewlis, Dominic

    2013-04-01

    Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive enough to describe the kinematics of the foot-shoe complex and lower leg during walking gait. To achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was good to excellent (ICC=0.75-0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than for the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90=2.17-9.36°; tarsometatarsal joint, MDD90=1.03-9.29°; and metatarsophalangeal joint, MDD90=1.75-9.12°. The proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.
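
    MDD90 thresholds of this kind are conventionally derived from the standard error of measurement; a minimal sketch under that standard formulation (SEM = SD × √(1 − ICC), MDD90 = SEM × 1.645 × √2) is shown below with made-up inputs, not the study's values.

```python
# Minimal detectable difference at the 90% confidence level, derived
# from the between-trial SD and the ICC of the measure.
from math import sqrt

def mdd90(sd, icc):
    sem = sd * sqrt(1.0 - icc)        # standard error of measurement
    return sem * 1.645 * sqrt(2.0)    # 90% confidence, two measurements

# Illustrative values: 5 deg between-trial SD, ICC = 0.91
print(round(mdd90(5.0, 0.91), 2))  # → 3.49
```

    A change between two sessions smaller than this threshold cannot be distinguished from measurement noise with 90% confidence.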

  16. Complex method to calculate objective assessments of information systems protection to improve expert assessments reliability

    NASA Astrophysics Data System (ADS)

    Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.

    2018-01-01

    The paper considers how to populate the relevant SIEM nodes with calculated objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. The technique is also intended to establish real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events will occur and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the expert assessments.

  17. Reference gene identification for reliable normalisation of quantitative RT-PCR data in Setaria viridis.

    PubMed

    Nguyen, Duc Quan; Eamens, Andrew L; Grof, Christopher P L

    2018-01-01

    Quantitative real-time polymerase chain reaction (RT-qPCR) is the key platform for the quantitative analysis of gene expression in a wide range of experimental systems and conditions. However, the accuracy and reproducibility of gene expression quantification via RT-qPCR is entirely dependent on the identification of reliable reference genes for data normalisation. Green foxtail (Setaria viridis) has recently been proposed as a potential experimental model for the study of C4 photosynthesis and is closely related to many economically important crop species of the Panicoideae subfamily of grasses, including Zea mays (maize), Sorghum bicolor (sorghum) and Saccharum officinarum (sugarcane). Setaria viridis (Accession 10) possesses a number of key traits as an experimental model, namely: (i) a small, sequenced and well annotated genome; (ii) short stature and generation time; (iii) prolific seed production; and (iv) amenability to Agrobacterium tumefaciens-mediated transformation. There is currently, however, a lack of reference gene expression information for Setaria viridis (S. viridis). We therefore aimed to identify a cohort of suitable S. viridis reference genes for accurate and reliable normalisation of S. viridis RT-qPCR expression data. Eleven putative candidate reference genes were identified and examined across thirteen different S. viridis tissues. Of these, the geNorm and NormFinder analysis software identified SERINE/THREONINE-PROTEIN PHOSPHATASE 2A (PP2A), 5'-ADENYLYLSULFATE REDUCTASE 6 (ASPR6) and DUAL SPECIFICITY PHOSPHATASE (DUSP) as the most suitable combination of reference genes for the accurate and reliable normalisation of S. viridis RT-qPCR expression data. To demonstrate the suitability of the three selected reference genes, PP2A, ASPR6 and DUSP were used to normalise the expression of CINNAMYL ALCOHOL DEHYDROGENASE (CAD) genes across the same tissues, an approach that readily demonstrated the suitability of the three selected reference genes.

  18. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.

  19. Reliability, Durability, and Safety | Transportation Research | NREL

    Science.gov Websites

    fill results obtained in different scenarios. The animation serves as a useful tool to help fleet limitations from a performance and reliability perspective. Evaluation results for three different BIMs analysis assists in development and helps end users select and deploy appropriate sensors for different

  20. Interrater reliability of the new criteria for behavioral variant frontotemporal dementia.

    PubMed

    Lamarre, Amanda K; Rascovsky, Katya; Bostrom, Alan; Toofanian, Parnian; Wilkins, Sarah; Sha, Sharon J; Perry, David C; Miller, Zachary A; Naasan, Georges; Laforce, Robert; Hagen, Jayne; Takada, Leonel T; Tartaglia, Maria Carmela; Kang, Gail; Galasko, Douglas; Salmon, David P; Farias, Sarah Tomaszewski; Kaur, Berneet; Olichney, John M; Quitania Park, Lovingly; Mendez, Mario F; Tsai, Po-Heng; Teng, Edmond; Dickerson, Bradford Clark; Domoto-Reilly, Kimiko; McGinnis, Scott; Miller, Bruce L; Kramer, Joel H

    2013-05-21

    To evaluate the interrater reliability of the new International Behavioural Variant FTD Criteria Consortium (FTDC) criteria for behavioral variant frontotemporal dementia (bvFTD). Twenty standardized clinical case modules were developed for patients with a range of neurodegenerative diagnoses, including bvFTD, primary progressive aphasia (nonfluent, semantic, and logopenic variant), Alzheimer disease, and Lewy body dementia. Eighteen blinded raters reviewed the modules and 1) rated the presence or absence of core diagnostic features for the FTDC criteria, and 2) provided an overall diagnostic rating. Interrater reliability was determined by κ statistics for multiple raters with categorical ratings. The mean κ value for diagnostic agreement was 0.81 for possible bvFTD and 0.82 for probable bvFTD ("almost perfect agreement"). Interrater reliability for 4 of the 6 core features had "substantial" agreement (behavioral disinhibition, perseverative/compulsive, sympathy/empathy, hyperorality; κ = 0.61-0.80), whereas 2 had "moderate" agreement (apathy/inertia, neuropsychological; κ = 0.41-0.6). Clinician years of experience did not significantly influence rater accuracy. The FTDC criteria show promise for improving the diagnostic accuracy and reliability of clinicians and researchers. As disease-altering therapies are developed, accurate differential diagnosis between bvFTD and other neurodegenerative diseases will become increasingly important.
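
    Agreement in rating studies of this kind is commonly quantified with Cohen's kappa for a pair of raters (the study used a multi-rater extension of κ); a minimal two-rater sketch with invented category ratings follows.

```python
# Cohen's kappa for two raters assigning categorical labels:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
# and p_e is the agreement expected by chance from marginal frequencies.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Invented diagnostic ratings from two blinded raters
r1 = ["bvFTD", "bvFTD", "AD", "PPA", "AD", "bvFTD"]
r2 = ["bvFTD", "AD",    "AD", "PPA", "AD", "bvFTD"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.74
```

    Values above 0.80, like the study's 0.81-0.82, fall in the "almost perfect agreement" band of the conventional kappa interpretation scale.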

  1. Reliability assessment and improvement for a fast corrector power supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Bin; Liu, Chen-Yao; Wang, Bao-Sheng; Wong, Yong Seng

    2018-07-01

    A Fast Orbit Feedback System (FOFB) can be installed in a synchrotron light source to eliminate undesired disturbances and to improve the stability of the beam orbit. The design and implementation of an accurate and reliable Fast Corrector Power Supply (FCPS) is essential to realize the effectiveness and availability of the FOFB. A reliability assessment of the FCPSs in the FOFB of the Taiwan Photon Source (TPS) that takes the MOSFETs' temperatures into account is presented in this paper. The FCPS is composed of a full-bridge topology and a low-pass filter. A Hybrid Pulse Width Modulation (HPWM) scheme, which requires two MOSFETs in the full-bridge circuit to be operated at high frequency and the other two at the output frequency, is adopted to control the implemented FCPS. Due to this characteristic of HPWM, the conduction and switching losses of the MOSFETs in the FCPS are not the same: two of the MOSFETs in the full-bridge circuit suffer higher temperatures, and the circuit reliability of the FCPS is therefore reduced. A Modified PWM Scheme (MPWMS), designed to equalize the MOSFETs' temperatures and thereby improve circuit reliability, is proposed in this paper. The MOSFETs' temperatures of the FCPS controlled by HPWM and by the proposed MPWMS were measured experimentally, and the reliability indices under the different PWM controls were then assessed. From the experimental results, it can be observed that the reliability of the FCPS using the proposed MPWMS is improved because the MOSFETs' temperatures are closer together. Since the reliability of the FCPS can be enhanced, the availability of the FOFB can also be improved.

  2. Contaminants in landfill soils - Reliability of prefeasibility studies.

    PubMed

    Hölzle, Ingo

    2017-05-01

    Recent landfill mining studies have researched the potential for resource recovery using samples from core drilling or grab cranes. However, most studies used small sample numbers, which may not represent the heterogeneous landfill composition; as a consequence, there is a high risk of an incorrect economic and/or ecological evaluation. The main objective of this work is to investigate the possibilities and limitations of preliminary investigations concerning the crucial soil composition. The preliminary samples of landfill investigations were compared to the excavation samples from three completely excavated landfills in Germany. In addition, the research compared the reliability of prediction of the two investigation methods, core drilling and grab crane. Sampling using a grab crane led to better results, even for smaller investigations of 10 samples. Analyses of both methods showed sufficiently accurate results to make predictions (standard error 5%, level of confidence 95%) for most heavy metals, cyanide and PAH in the dry substance, and for sulphate, barium, benzo[a]pyrene, pH and the electrical conductivity in leachate analyses of soil-type waste. While chromium and nickel showed less accurate results, the concentrations of hydrocarbons, TOC, DOC, PCB and fluorine (leachate) were not predictable even for sample numbers of up to 59. Overestimations of pollutant concentrations were more frequently apparent in drilling, and underestimations when using a grab crane. The dispersion of the element and elemental composition had no direct impact on the reliability of prediction. Thus, an individual consideration of the particular element or elemental composition for dry substance and leachate analyses is recommended to adapt the sampling strategy and calculate an optimum sample number. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Measurement of body temperature in adult patients: comparative study of accuracy, reliability and validity of different devices.

    PubMed

    Rubia-Rubia, J; Arias, A; Sierra, A; Aguirre-Jaime, A

    2011-07-01

    We compared a range of alternative devices with core body temperature measured at the pulmonary artery to identify the most valid and reliable instrument for measuring temperature in routine conditions in health services. 201 patients from the intensive care unit of the Candelaria University Hospital, Canary Islands, admitted to hospital between April 2006 and July 2007. All patients (or their families) gave informed consent. Readings from gallium-in-glass, reactive strip and digital thermometers in the axilla, and infrared ear and frontal thermometers, were compared simultaneously with the pulmonary artery core temperature. External factors suspected of having an influence on the differences were explored. The cut-off point readings for each thermometer were fixed for the maximum negative predictive value in comparison with the core temperature. The validity, reliability, accuracy, external influence, waste generated, ease of use, speed, durability, security, comfort and cost of each thermometer were evaluated. An ad hoc overall valuation score was obtained from these parameters for each instrument. For an error of ± 0.2°C and concordance with respect to fever, the gallium-in-glass thermometer gave the best results. The largest area under the receiver operating characteristic (ROC) curve was obtained by the digital axillary thermometer with probe (0.988 ± 0.007). The minimum difference between readings, in comparison with the core temperature, was given by the infrared ear thermometer (-0.1 ± 0.3°C). Age, weight, level of consciousness, male sex, environmental temperature and vasoconstrictor medication increase the difference in the readings, and fever treatment reduces it, although this is not the same for all thermometers. The compact digital axillary thermometer and the digital thermometer with probe obtained the highest overall valuation scores. If we only evaluate the aspects of validity, reliability, accuracy and external influence, the best thermometer would be the

  4. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    PubMed

    Kim, Scott Y H

    2014-04-01

    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model faces significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by the 60%-80% reliability of people's preferences for future medical decisions, a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment as by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, may be more efficient, accurate, and authoritative than is commonly assumed.

  5. Differences in Reliability of Reproductive History Recall among Women in North Africa

    ERIC Educational Resources Information Center

    Soliman, Amr; Allen, Katharine; Lo, An-Chi; Banerjee, Mousumi; Hablas, Ahmed; Benider, Abdellatif; Benchekroun, Nadya; Samir, Salwa; Omar, Hoda G.; Merajver, Sofia; Mullan, Patricia

    2009-01-01

    Breast cancer is the most common cancer among women in North Africa. Women in this region have unique reproductive profiles. It is essential to obtain reliable information on reproductive histories to help better understand the relationship between reproductive health and breast cancer. We tested the reliability of a reproductive history-based…

  6. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the

  7. Effective scheme to determine accurate defect formation energies and charge transition levels of point defects in semiconductors

    NASA Astrophysics Data System (ADS)

    Yao, Cang Lang; Li, Jian Chen; Gao, Wang; Tkatchenko, Alexandre; Jiang, Qing

    2017-12-01

    We propose an effective method to accurately determine the defect formation energy Ef and the charge transition level ɛ of point defects using only the cohesive energy Ecoh and the fundamental band gap Eg of the pristine host material. We find that Ef of a point defect can be effectively separated into geometric and electronic contributions with the functional form Ef = χEcoh + λEg, where χ and λ are dictated by the geometric and electronic factors of the defect (χ and λ are defect dependent). Such a linear combination of Ecoh and Eg reproduces Ef with an accuracy better than 5% for electronic-structure methods ranging from hybrid density-functional theory (DFT) to the many-body random-phase approximation (RPA) and experiments. Accordingly, ɛ is also determined by Ecoh/Eg and the defect's geometric/electronic factors. The identified correlation is rather general for monovacancies and interstitials, holds in a wide variety of semiconductors covering Si, Ge, phosphorenes, ZnO, GaAs, and InP, and enables one to obtain reliable values of Ef and ɛ of point defects for RPA and experiments based on semilocal DFT calculations.
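The linear decomposition Ef = χEcoh + λEg lends itself to a simple fitting exercise: given Ef computed for the same defect by several methods, each with its own Ecoh and Eg, χ and λ follow from least squares. The sketch below uses made-up numbers purely for illustration; neither the data nor the coefficients are from the paper.

```python
import numpy as np

# Hypothetical (Ecoh, Eg) pairs for one defect, as given by several
# electronic-structure methods (e.g., semilocal DFT, hybrid DFT, RPA).
ecoh = np.array([4.6, 4.5, 4.4, 4.7])   # cohesive energy (eV)
eg   = np.array([0.6, 1.2, 1.1, 0.9])   # fundamental band gap (eV)

# Synthetic "true" defect formation energies generated from assumed
# geometric/electronic factors chi and lam (illustrative values).
chi_true, lam_true = 0.8, 0.5
ef = chi_true * ecoh + lam_true * eg

# Recover chi and lam by linear least squares: Ef = chi*Ecoh + lam*Eg.
A = np.column_stack([ecoh, eg])
(chi, lam), *_ = np.linalg.lstsq(A, ef, rcond=None)
print(chi, lam)  # coefficients recovered exactly for noise-free data
```

With real per-method Ef values the residual of this fit would indicate how well the two-term decomposition holds for a given defect.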

  8. Reliability of rehabilitative ultrasonographic imaging for muscle thickness measurement of the rhomboid major.

    PubMed

    Jeong, Ju Ri; Ko, Young Jun; Ha, Hyun Geun; Lee, Wan Hee

    2016-03-01

    This study aimed to establish the inter-rater and intra-rater reliability of the rehabilitative ultrasonographic imaging (RUSI) technique for muscle thickness measurement of the rhomboid major at rest and with the shoulder abducted to 90°. Twenty-four young adults (eight men, 16 women; right-handed; mean age [±SD], 24·4 years [±2·6]) with no history of neck, shoulder, or arm pain were recruited. Rhomboid major muscle images were obtained in the resting position and with the shoulder in 90° abduction using an ultrasonography system with a 7·5-MHz linear transducer. In these two positions, the examiners identified the site at which the transducer could be placed. Two examiners obtained the images of all participants in three test sessions in random order. Intraclass correlation coefficients (ICCs) were used to estimate reliability. All ICCs (95% CI) were >0·75, ranging from 0·93 to 0·98, which indicates good reliability. The ICCs for inter-rater reliability ranged from 0·75 to 0·94. For the absolute value of the difference in intra-examiner reliability between the right and left ratios, the ICCs ranged from 0·58 to 0·91. In this study, the intra- and inter-examiner reliability of muscle thickness measurements of the rhomboid major was good. Therefore, we suggest that muscle thickness measurements of the rhomboid major obtained with the RUSI technique would be useful for clinical rehabilitative assessment. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
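ICC figures like those reported here come from a two-way ANOVA decomposition of an n-subjects × k-raters matrix. As a generic illustration (not the authors' analysis code), a minimal ICC(2,1) — two-way random effects, absolute agreement, single measurement — can be sketched as:

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects) x (k raters) array of measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    # Residual after removing subject and rater effects.
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (ms_rows - mse) / (ms_rows + (k - 1) * mse
                              + k * (ms_cols - mse) / n)

# Two raters in perfect agreement on three subjects -> ICC = 1.
print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

A systematic offset between raters (e.g. one rater always 1 unit higher) lowers ICC(2,1) because absolute agreement penalizes rater bias.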

  9. The TiltMeter app is a novel and accurate measurement tool for the weight bearing lunge test.

    PubMed

    Williams, Cylie M; Caserta, Antoni J; Haines, Terry P

    2013-09-01

    The weight bearing lunge test is increasingly used by health care clinicians who treat lower limb and foot pathology. Accurate and reliable measurement commonly requires expensive equipment. This study aimed to compare a digital inclinometer with TiltMeter, a free app for the Apple iPhone. This was an intra-rater and inter-rater reliability study. Two raters (novice and experienced) conducted the measurements in both a bent-knee and a straight-leg position to determine intra-rater and inter-rater reliability; concurrent validity was also established. Allied health practitioners were recruited as participants from the workplace. After a preconditioning stretch, ankle range of motion was established in the weight bearing lunge test position, first with the leg straight and then with the knee bent. The order of measurement device and participant was randomised. The intra-rater and inter-rater reliability values for both devices and both positions were all above ICC 0.8, except for one intra-rater measure (digital inclinometer, novice, ICC 0.65). The inter-rater reliability between the digital inclinometer and the TiltMeter was near perfect, ICC 0.96 (CI: 0.898-0.983); the concurrent validity ICC between the two devices was 0.83 (CI: -0.740 to 0.445). The TiltMeter app on the iPhone is a reliable and inexpensive tool for measuring available ankle range of motion. Health practitioners should use caution in applying these findings to other smartphone equipment if surface areas are not comparable. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  10. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects both the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but evaluating the required multiple integrals remains mathematically difficult. Therefore, this paper proposes a dual neural network method for calculating those multiple integrals. The dual neural network consists of two networks: network A learns the integrand, and network B simulates the original (antiderivative) function. Using the derivative relationship between network output and network input, network B is derived from network A. On this basis, a normalized performance function is employed to overcome the difficulty of multiple integration and to improve the accuracy of the reliability calculation. Comparisons with the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate approach for structural reliability problems.
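For context, the quantity being integrated is the failure probability Pf = ∫_{g(x)≤0} f(x) dx over the joint density f of the random variables, and the Monte Carlo baseline the paper compares against simply samples x and counts failures. A minimal sketch for a linear limit state g = R − S with normal resistance and load (values chosen for illustration, not taken from the paper):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
N = 1_000_000

# Resistance R ~ N(5, 1), load S ~ N(3, 1); failure when g = R - S <= 0.
R = rng.normal(5.0, 1.0, N)
S = rng.normal(3.0, 1.0, N)
pf_mc = np.mean(R - S <= 0.0)

# For this linear Gaussian case the answer is exact:
# Pf = Phi(-beta) with reliability index beta = (5 - 3) / sqrt(1^2 + 1^2).
beta = (5.0 - 3.0) / sqrt(2.0)
pf_exact = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
print(pf_mc, pf_exact)  # Monte Carlo estimate close to exact ~0.0786
```

The paper's contribution is to replace the sampling step with a learned antiderivative so that the integral is evaluated directly rather than estimated statistically.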

  11. Reliability in the DSM-III field trials: interview v case summary.

    PubMed

    Hyler, S E; Williams, J B; Spitzer, R L

    1982-11-01

    A study compared the reliability of psychiatric diagnoses obtained from live interviews and from case summaries of the same patients, made by the same clinicians using the same DSM-III diagnostic criteria. The results showed that the reliability of the major diagnostic classes of DSM-III was higher when diagnoses were made from live interviews than when they were made from case summaries. We conclude that diagnoses based on information contained in traditionally prepared case summaries may lead to an underestimation of the reliability of diagnoses based on information collected during a "live" interview.

  12. The Reliability and Legality of Online Education

    ERIC Educational Resources Information Center

    Agbebaku, C. A.; Adavbiele, A. Justina

    2016-01-01

    Today, online Open University education in Nigeria has taken the classroom beyond borders, making it possible for many students to obtain university degrees. However, the reliability and legality of such degrees have become questionable. This paper is a descriptive exploratory case study of the public and private sector end-users, whose…

  13. Disposable collection kit for rapid and reliable collection of saliva.

    PubMed

    Yamaguchi, Masaki; Tezuka, Yuki; Takeda, Kazunori; Shetty, Vivek

    2015-01-01

    To describe and evaluate a disposable saliva collection kit for rapid, reliable, and reproducible collection of saliva samples. The kit, comprising a saliva-absorbent swab and an extractor unit, was used to retrieve whole saliva samples from 10 subjects. The accuracy and precision of the extracted volumes (3, 10, and 30 μl) were compared to similar volumes drawn from control samples obtained by passive drool. Additionally, the impact of the kit collection method on subsequent immunoassay results was verified by assessing salivary cortisol levels in the samples and comparing them to controls. The recovered volumes for the whole saliva samples were 3.85 ± 0.28, 10.79 ± 0.95, and 31.18 ± 1.72 μl, respectively (CV = 8.76%), and 2.91 ± 0.19, 9.75 ± 0.43, and 29.64 ± 0.91 μl, respectively (CV = 6.36%), for the controls. There was a close correspondence between the salivary cortisol levels from the saliva samples obtained by the collection kit and the controls (R(2) > 0.96). The disposable saliva collection kit allows accurate and repeatable collection of fixed amounts of whole saliva and does not interfere with subsequent measurements of salivary cortisol. The simple collection process, lack of elaborate specimen recovery steps, and the short turnaround time (<3 min) should render the kit attractive to test subjects and researchers alike. © 2015 Wiley Periodicals, Inc.

  14. Disposable Collection Kit for Rapid and Reliable Collection of Saliva

    PubMed Central

    Yamaguchi, Masaki; Tezuka, Yuki; Takeda, Kazunori; Shetty, Vivek

    2015-01-01

    Objectives To describe and evaluate a disposable saliva collection kit for rapid, reliable, and reproducible collection of saliva samples. Methods The kit, comprising a saliva-absorbent swab and an extractor unit, was used to retrieve whole saliva samples from 10 subjects. The accuracy and precision of the extracted volumes (3, 10, and 30 μl) were compared to similar volumes drawn from control samples obtained by passive drool. Additionally, the impact of the kit collection method on subsequent immunoassay results was verified by assessing salivary cortisol levels in the samples and comparing them to controls. Results The recovered volumes for the whole saliva samples were 3.85 ± 0.28, 10.79 ± 0.95, and 31.18 ± 1.72 μl, respectively (CV = 8.76%), and 2.91 ± 0.19, 9.75 ± 0.43, and 29.64 ± 0.91 μl, respectively (CV = 6.36%), for the controls. There was a close correspondence between the salivary cortisol levels from the saliva samples obtained by the collection kit and the controls (R2 > 0.96). Conclusions The disposable saliva collection kit allows accurate and repeatable collection of fixed amounts of whole saliva and does not interfere with subsequent measurements of salivary cortisol. The simple collection process, lack of elaborate specimen recovery steps, and the short turnaround time (<3 min) should render the kit attractive to test subjects and researchers alike. Am. J. Hum. Biol. 27:720–723, 2015. © 2015 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc. PMID:25754371

  15. Reliability of Visual and Somatosensory Feedback in Skilled Movement: The Role of the Cerebellum.

    PubMed

    Mizelle, J C; Oparah, Alexis; Wheaton, Lewis A

    2016-01-01

    The integration of vision and somatosensation is required to allow for accurate motor behavior. While both sensory systems contribute to an understanding of the state of the body through continuous updating and estimation, how the brain processes unreliable sensory information remains to be fully understood in the context of complex action. Using functional brain imaging, we sought to understand the role of the cerebellum in weighting visual and somatosensory feedback by selectively reducing the reliability of each sense individually during a tool use task. We broadly hypothesized upregulated activation of the sensorimotor and cerebellar areas during movement with reduced visual reliability, and upregulated activation of occipital brain areas during movement with reduced somatosensory reliability. As specifically compared to reduced somatosensory reliability, we expected greater activations of ipsilateral sensorimotor cerebellum for intact visual and somatosensory reliability. Further, we expected that ipsilateral posterior cognitive cerebellum would be affected with reduced visual reliability. We observed that reduced visual reliability results in a trend towards the relative consolidation of sensorimotor activation and an expansion of cerebellar activation. In contrast, reduced somatosensory reliability was characterized by the absence of cerebellar activations and a trend towards the increase of right frontal, left parietofrontal activation, and temporo-occipital areas. Our findings highlight the role of the cerebellum for specific aspects of skillful motor performance. This has relevance to understanding basic aspects of brain functions underlying sensorimotor integration, and provides a greater understanding of cerebellar function in tool use motor control.

  16. An accurate and rapid radiographic method of determining total lung capacity

    PubMed Central

    Reger, R. B.; Young, A.; Morgan, W. K. C.

    1972-01-01

    The accuracy and reliability of Barnhard's radiographic method of determining total lung capacity have been confirmed by several groups of investigators. Despite its simplicity and general reliability, it has several shortcomings, especially when used in large-scale epidemiological surveys. Of these, the most serious is related to film technique; thus, when the cardiac and diaphragmatic shadows are poorly defined, the appropriate measurements cannot be made accurately. A further drawback involves the time needed to measure the segments and to perform the necessary calculations. We therefore set out to develop an abbreviated and simpler radiographic method for determining total lung capacity. This uses a step-wise multiple regression model which allows total lung capacity to be derived as follows: posteroanterior and lateral films are divided into the standard sections as described in the text, the width, depth, and height of sections 1 and 4 are measured in centimetres, finally the necessary derivations and substitutions are made and applied to the formula Ŷ = −1·41148 + (0·00479 X1) + (0·00097 X4), where Ŷ is the total lung capacity. In our hands this method has provided a simple, rapid, and acceptable method of determining total lung capacity. PMID:5034594
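The reported regression can be wrapped in a small function. Note the abstract does not fully specify how X1 and X4 are derived from the width, depth, and height of sections 1 and 4; the sketch below assumes they are the products of the three dimensions (cm³), which is a guess made only for illustration.

```python
def total_lung_capacity(x1, x4):
    """Abbreviated radiographic TLC estimate from the reported model:
    Y-hat = -1.41148 + 0.00479*X1 + 0.00097*X4."""
    return -1.41148 + 0.00479 * x1 + 0.00097 * x4

# Hypothetical section measurements (cm): width * depth * height.
x1 = 20.0 * 10.0 * 4.0   # section 1 -> 800 cm^3 (assumed derivation)
x4 = 18.0 * 10.0 * 5.0   # section 4 -> 900 cm^3 (assumed derivation)
print(round(total_lung_capacity(x1, x4), 3))  # -> 3.294
```

The appeal of the abbreviated method is visible here: only two of the standard sections need to be measured before substituting into the formula.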

  17. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present, plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥140 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing

  18. Discrete sensors distribution for accurate plantar pressure analyses.

    PubMed

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor layout (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or for any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe's Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.

  20. Scale for positive aspects of caregiving experience: development, reliability, and factor structure.

    PubMed

    Kate, N; Grover, S; Kulhara, P; Nehra, R

    2012-06-01

    OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.

  1. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in both manually and automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 millipascal seconds. The obtainable drop sizes range from 5 picoliters to a few nanoliters, with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be highly suitable for the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  2. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.

    PubMed

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
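Energy models of this kind typically sum, per TSCH slot, the time spent in each CPU/radio state multiplied by that state's current draw. The state table below is hypothetical (it does not reproduce the paper's OpenMote measurements); it only illustrates the bookkeeping.

```python
# Hypothetical per-slot state profile for a transmit slot:
# state -> (duration in seconds, current draw in amperes).
SUPPLY_VOLTAGE = 3.0  # volts (assumed)

tx_slot_states = {
    "cpu_active":   (2.0e-3, 6.0e-3),
    "radio_tx":     (1.5e-3, 24.0e-3),
    "radio_rx_ack": (0.8e-3, 20.0e-3),
    "sleep":        (5.7e-3, 1.0e-6),
}

def slot_energy_joules(states, voltage=SUPPLY_VOLTAGE):
    """E = V * sum(t_state * I_state) over all states in the slot."""
    return voltage * sum(t * i for t, i in states.values())

e = slot_energy_joules(tx_slot_states)
print(f"{e * 1e6:.2f} uJ per slot")  # -> 192.02 uJ per slot
```

Summing such per-slot energies over a slotframe, separately for each frequency band's current figures, yields the end-to-end consumption the paper validates against measurement.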

  3. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware

    PubMed Central

    Municio, Esteban; Van de Velde, Bruno; Latré, Steven

    2018-01-01

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks. PMID:29393900

  4. Reliable aerial thermography for energy conservation

    NASA Technical Reports Server (NTRS)

    Jack, J. R.; Bowman, R. L.

    1981-01-01

    A method for energy conservation, the aerial thermography survey, is discussed. It locates sources of energy loss and wasteful energy management practices. An operational map is presented for clear sky conditions. The map outlines the key environmental conditions conducive to obtaining reliable aerial thermography. The map is developed from defined visual and heat-loss discrimination criteria, which are quantified based on flat-roof heat transfer calculations.

  5. The reliability of dietary and lifestyle information obtained from spouses in an elderly Chinese population.

    PubMed

    Liang, Wenbin; Binns, Colin; Lee, Andy H; Huang, Rongsheng; Hu, Delong

    2008-01-01

    In many health studies of the elderly population, the subjects have cognitive or linguistic impairments, so data need to be collected from surrogates. This study compares dietary and lifestyle information reported by elderly Chinese with that provided by their spouses. Community couples aged 60 years and older were recruited to participate in an interview. One person from each couple was randomly chosen as the index person. Characteristics concerning the index person were then solicited from that person and separately from his or her spouse using validated questionnaires. For the 128 food items considered, the mean kappa was 0.73 for both frequency (SD 0.18) and amount (SD 0.22) of intake, and more than 70% of the couples had kappa statistics exceeding 0.61. Food items exhibiting high agreement between the spouses included rice, apples, tomatoes, and pork chops. The proportion of perfect agreement was higher than 80% for physical activity, smoking, and tea drinking behaviors. In conclusion, the spouse can serve as a proxy to provide reliable information when his or her partner is unavailable.
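The agreement statistic reported here is Cohen's kappa, which corrects observed agreement for chance agreement: κ = (p_o − p_e) / (1 − p_e). A generic sketch (not the study's analysis code, and with invented responses):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical responses."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[c] / n) * (c2[c] / n)                 # chance agreement
             for c in set(c1) | set(c2))
    return (po - pe) / (1 - pe)

# Spouse matching the index person on 9 of 10 food items (illustrative).
index  = ["daily", "daily", "weekly", "never", "daily",
          "weekly", "never", "daily", "weekly", "never"]
spouse = ["daily", "daily", "weekly", "never", "daily",
          "weekly", "never", "daily", "weekly", "daily"]
print(round(cohens_kappa(index, spouse), 3))  # -> 0.846
```

The conventional reading of the study's thresholds: kappa above 0.61 is "substantial" agreement, which is why that cut-off is highlighted.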

  6. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability.

    PubMed

    Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina

    2018-01-01

    Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, so it is important to test new equipment that will be used for data collection. OBJECTIVE: Compare two anthropometric data-gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner, and evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement, and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners; hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
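The precision statistic named here, the Technical Error of Measurement, has a standard form for repeated pairs: TEM = sqrt(Σd² / 2n) for n subjects each measured twice, with relative TEM = 100 · TEM / grand mean. A generic sketch with invented values:

```python
from math import sqrt

def tem(pairs):
    """Technical Error of Measurement for n subjects measured twice:
    TEM = sqrt(sum(d^2) / (2n)), d = first - second measurement."""
    n = len(pairs)
    return sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

def relative_tem(pairs):
    """Relative TEM (%) = 100 * TEM / grand mean of all measurements."""
    vals = [v for pair in pairs for v in pair]
    return 100.0 * tem(pairs) / (sum(vals) / len(vals))

# Hypothetical waist-circumference repeats (cm): (manual, 3D scan).
pairs = [(72.1, 72.5), (68.0, 67.6), (75.3, 75.3), (70.2, 70.8)]
print(round(tem(pairs), 3), round(relative_tem(pairs), 2))
```

The relative form is the one that allows precision to be compared across body measurements with very different magnitudes, e.g. wrist girth versus stature.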

  7. Bi-Factor Multidimensional Item Response Theory Modeling for Subscores Estimation, Reliability, and Classification

    ERIC Educational Resources Information Center

    Md Desa, Zairul Nor Deana

    2012-01-01

    In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…

  8. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using both heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of the data source's applicability to the current system should be made. Accurately characterizing the applicability and quality of reliability data under these circumstances is crucial to developing model estimates that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria for a new launch vehicle. The method accounts for similarities and dissimilarities in source and applicability, as well as operating environments such as vibration, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  9. Calculating accurate aboveground dry weight biomass of herbaceous vegetation in the Great Plains: A comparison of three calculations to determine the least resource intensive and most accurate method

    Treesearch

    Ben Butler

    2007-01-01

    Obtaining accurate biomass measurements is often a resource-intensive task. Data collection crews often spend large amounts of time in the field clipping, drying, and weighing grasses to calculate the biomass of a given vegetation type. Such a problem is currently occurring in the Great Plains region of the Bureau of Indian Affairs. A study looked at six reservations...

  10. Obtaining soil hydraulic parameters from data assimilation under different climatic/soil conditions

    USDA-ARS?s Scientific Manuscript database

    Obtaining reliable soil hydraulic properties is essential to correctly simulating soil water content (SWC), which is a key component of countless applications such as agricultural management, soil remediation, aquifer protection, etc. Soil hydraulic properties can be measured in the laboratory; howe...

  11. Accurate secondary structure prediction and fold recognition for circular dichroism spectroscopy

    PubMed Central

    Micsonai, András; Wien, Frank; Kernya, Linda; Lee, Young-Ho; Goto, Yuji; Réfrégiers, Matthieu; Kardos, József

    2015-01-01

    Circular dichroism (CD) spectroscopy is a widely used technique for the study of protein structure. Numerous algorithms have been developed for the estimation of the secondary structure composition from the CD spectra. These methods often fail to provide acceptable results on α/β-mixed or β-structure–rich proteins. The problem arises from the spectral diversity of β-structures, which has hitherto been considered as an intrinsic limitation of the technique. The predictions are less reliable for proteins of unusual β-structures such as membrane proteins, protein aggregates, and amyloid fibrils. Here, we show that the parallel/antiparallel orientation and the twisting of the β-sheets account for the observed spectral diversity. We have developed a method called β-structure selection (BeStSel) for the secondary structure estimation that takes into account the twist of β-structures. This method can reliably distinguish parallel and antiparallel β-sheets and accurately estimates the secondary structure for a broad range of proteins. Moreover, the secondary structure components applied by the method are characteristic to the protein fold, and thus the fold can be predicted to the level of topology in the CATH classification from a single CD spectrum. By constructing a web server, we offer a general tool for a quick and reliable structure analysis using conventional CD or synchrotron radiation CD (SRCD) spectroscopy for the protein science research community. The method is especially useful when X-ray or NMR techniques fail. Using BeStSel on data collected by SRCD spectroscopy, we investigated the structure of amyloid fibrils of various disease-related proteins and peptides. PMID:26038575

  12. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  13. Study of complete interconnect reliability for a GaAs MMIC power amplifier

    NASA Astrophysics Data System (ADS)

    Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao

    2018-05-01

    By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, this article achieves a complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions. As an example, an MMIC PA is modelled to study the electromigration failure of its interconnects. This is the first study of interconnect reliability for an MMIC PA under DC and AC operating conditions simultaneously. By training on data from the FEA, a high-accuracy ANN model for PA reliability is constructed. The reliability database obtained from the ANN model then provides important guidance for improving the reliability design of the IC.

  14. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and
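    The abstract describes grouping localizations that originate from the same molecule using a distance threshold and a dark-time threshold (dThresh and tThresh). A minimal sketch of that kind of greedy grouping, not the authors' actual algorithm and with invented threshold values, might look like this:

```python
# Hedged sketch of photoblinking correction: merge a localization into an
# existing cluster when it lies within d_thresh of the cluster centroid
# and within t_thresh frames of the cluster's last appearance.
import math

def group_localizations(locs, d_thresh, t_thresh):
    """locs: list of (x, y, frame) sorted by frame.
    Returns one (mean x, mean y) per putative molecule."""
    clusters = []  # each: dict with coordinate lists and last frame seen
    for x, y, frame in locs:
        placed = False
        for c in clusters:
            cx = sum(c["xs"]) / len(c["xs"])
            cy = sum(c["ys"]) / len(c["ys"])
            if (math.hypot(x - cx, y - cy) <= d_thresh
                    and frame - c["last"] <= t_thresh):
                c["xs"].append(x)
                c["ys"].append(y)
                c["last"] = frame
                placed = True
                break
        if not placed:
            clusters.append({"xs": [x], "ys": [y], "last": frame})
    return [(sum(c["xs"]) / len(c["xs"]), sum(c["ys"]) / len(c["ys"]))
            for c in clusters]

# Two blinking events of one molecule plus one distant molecule:
locs = [(100.0, 100.0, 1), (101.0, 100.5, 3), (500.0, 500.0, 4)]
molecules = group_localizations(locs, d_thresh=50.0, t_thresh=5)
```

    With these thresholds the first two localizations collapse into a single molecule position, so the molecule count is two rather than three.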

  15. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  16. The reliability, validity, and accuracy of self-reported absenteeism from work: a meta-analysis.

    PubMed

    Johns, Gary; Miraglia, Mariella

    2015-01-01

    Because of a variety of access limitations, self-reported absenteeism from work is often employed in research concerning health, organizational behavior, and economics, and it is ubiquitous in large scale population surveys in these domains. Several well established cognitive and social-motivational biases suggest that self-reports of absence will exhibit convergent validity with records-based measures but that people will tend to underreport the behavior. We used meta-analysis to summarize the reliability, validity, and accuracy of absence self-reports. The results suggested that self-reports of absenteeism offer adequate test-retest reliability and that they exhibit reasonably good rank order convergence with organizational records. However, people have a decided tendency to underreport their absenteeism, although such underreporting has decreased over time. Also, self-reports were more accurate when sickness absence rather than absence for any reason was probed. It is concluded that self-reported absenteeism might serve as a valid measure in some correlational research designs. However, when accurate knowledge of absolute absenteeism levels is essential, the tendency to underreport could result in flawed policy decisions. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  17. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting of the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches were explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method, and may support a more consistent and reliable approach to costing nursing services.
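    The abstract does not spell out either method; as a hedged illustration only, a simple cost-to-time approach prices each documented nursing intervention by the time spent multiplied by an hourly labor cost, in contrast to assigning each intervention an abstract relative value unit. Intervention names and figures here are invented.

```python
# Illustrative cost-to-time costing: total cost = sum of minutes spent
# on each coded intervention, converted to hours, times hourly cost.

def cost_to_time(interventions, hourly_cost):
    """interventions: list of (label, minutes). Returns total cost."""
    return sum(minutes / 60.0 * hourly_cost for _, minutes in interventions)

shift = [("Assessment", 15.0),
         ("Medication administration", 10.0),
         ("Wound care", 25.0)]
total = cost_to_time(shift, hourly_cost=48.0)  # 50 min at $48/h -> $40.00
```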

  18. Reliability and Validity of the Load-Velocity Relationship to Predict the 1RM Back Squat.

    PubMed

    Banyard, Harry G; Nosaka, Kazunori; Haff, G Gregory

    2017-07-01

    Banyard, HG, Nosaka, K, and Haff, GG. Reliability and validity of the load-velocity relationship to predict the 1RM back squat. J Strength Cond Res 31(7): 1897-1904, 2017-This study investigated the reliability and validity of the load-velocity relationship to predict the free-weight back squat one repetition maximum (1RM). Seventeen strength-trained males performed three 1RM assessments on 3 separate days. All repetitions were performed to full depth with maximal concentric effort. Predicted 1RMs were calculated by entering the mean concentric velocity of the 1RM (V1RM) into an individualized linear regression equation, which was derived from the load-velocity relationship of 3 (20, 40, 60% of 1RM), 4 (20, 40, 60, 80% of 1RM), or 5 (20, 40, 60, 80, 90% of 1RM) incremental warm-up sets. The actual 1RM (140.3 ± 27.2 kg) was very stable between 3 trials (ICC = 0.99; SEM = 2.9 kg; CV = 2.1%; ES = 0.11). Predicted 1RM from 5 warm-up sets up to and including 90% of 1RM was the most reliable (ICC = 0.92; SEM = 8.6 kg; CV = 5.7%; ES = -0.02) and valid (r = 0.93; SEE = 10.6 kg; CV = 7.4%; ES = 0.71) of the predicted 1RM methods. However, all predicted 1RMs were significantly different (p ≤ 0.05; ES = 0.71-1.04) from the actual 1RM. Individual variation for the actual 1RM was small between trials ranging from -5.6 to 4.8% compared with the most accurate predictive method up to 90% of 1RM, which was more variable (-5.5 to 27.8%). Importantly, the V1RM (0.24 ± 0.06 m·s⁻¹) was unreliable between trials (ICC = 0.42; SEM = 0.05 m·s⁻¹; CV = 22.5%; ES = 0.14). The load-velocity relationship for the full depth free-weight back squat showed moderate reliability and validity but could not accurately predict 1RM, which was stable between trials. Thus, the load-velocity relationship 1RM prediction method used in this study cannot accurately modify sessional training loads because of large V1RM variability.
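    The individualized prediction method described above can be sketched as an ordinary least-squares fit of load on mean concentric velocity across the warm-up sets, with the 1RM predicted by entering V1RM into the fitted line. The velocity and load values below are invented for illustration, not the study's data.

```python
# Hedged sketch of load-velocity 1RM prediction: fit load = a*v + b from
# warm-up sets, then evaluate the line at the velocity recorded at 1RM.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Five warm-up sets at 20-90% of 1RM: mean concentric velocity (m/s), load (kg)
velocities = [1.10, 0.90, 0.68, 0.46, 0.32]
loads = [28.0, 56.0, 84.0, 112.0, 126.0]

slope, intercept = fit_line(velocities, loads)
v1rm = 0.24                              # mean concentric velocity at 1RM
predicted_1rm = slope * v1rm + intercept
```

    Because load falls as velocity rises, the fitted slope is negative; the study's point is that the large trial-to-trial variability of V1RM makes this extrapolation unreliable even when the fit itself is good.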

  19. Are YouTube videos accurate and reliable on basic life support and cardiopulmonary resuscitation?

    PubMed

    Yaylaci, Serpil; Serinken, Mustafa; Eken, Cenker; Karcioglu, Ozgur; Yilmaz, Atakan; Elicabuk, Hayri; Dal, Onur

    2014-10-01

    The objective of this study was to investigate the reliability and accuracy of the information in YouTube videos related to CPR and BLS against the 2010 CPR guidelines. YouTube was queried using four search terms, 'CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support', between 2011 and 2013. The sources that uploaded the videos, the recording time, the number of viewers in the study period, and the inclusion of humans or manikins were recorded. The videos were rated on whether they displayed the correct order of resuscitative efforts in full accord with the 2010 CPR guidelines. Two hundred and nine videos meeting the inclusion criteria comprised the study sample subjected to analysis. The median score of the videos was 5 (IQR: 3.5-6). Only 11.5% (n = 24) of the videos were found to be compatible with the 2010 CPR guidelines with regard to the sequence of interventions. Videos uploaded by guideline bodies had significantly higher download rates than videos uploaded by other sources. Neither the source of a video nor its year of upload had a significant effect on the score received (P = 0.615 and 0.513, respectively). The number of downloads did not differ according to whether a video was compatible with the guidelines (P = 0.832), although videos downloaded more than 10,000 times had a higher score than the others (P = 0.001). The majority of YouTube video clips purporting to be about CPR are not relevant educational material. Of those that are focused on teaching CPR, only a small minority optimally meet the 2010 Resuscitation Guidelines. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  20. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
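    The kind of calculation described, constant failure rates with spares added until the system reliability target is met, can be sketched as follows. With a constant failure rate the number of failures over a mission is Poisson-distributed, a function with k interchangeable spares survives if at most k failures occur, and a serial system's reliability is the product over its functions. All rates and spare counts below are invented, not the study's inputs.

```python
# Hedged sketch of serial-system reliability with spares under constant
# failure rates (exponential lifetimes, Poisson failure counts).
import math

def function_reliability(rate_per_hr, hours, spares=0):
    """P(at most `spares` failures in `hours`) for a Poisson process."""
    mu = rate_per_hr * hours
    return sum(mu ** i * math.exp(-mu) / math.factorial(i)
               for i in range(spares + 1))

mission_hr = 8760.0  # 1-year deep-space mission
# Hypothetical life support functions: (failure rate per hour, spares carried)
functions = [(1e-4, 2), (5e-5, 1), (2e-5, 0)]

system_reliability = 1.0
for rate, spares in functions:
    system_reliability *= function_reliability(rate, mission_hr, spares)
```

    Each added spare raises a function's survival probability but adds mass, which is the ESM-versus-reliability trade the abstract describes.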

  1. Kinetic determinations of accurate relative oxidation potentials of amines with reactive radical cations.

    PubMed

    Gould, Ian R; Wosinska, Zofia M; Farid, Samir

    2006-01-01

    Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.
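    The link between a measured redox equilibrium and a relative oxidation potential is the Nernst relation. For the electron-transfer equilibrium A+• + B ⇌ A + B+• with equilibrium constant K, Eox(A) − Eox(B) = (RT/F)·ln K, so K > 1 means B is the more easily oxidized partner. The sketch below illustrates only this standard relation; the value of K is hypothetical, not one of the paper's measurements.

```python
# Relative oxidation potential implied by a redox equilibrium constant,
# via the Nernst relation dE = (R*T/F) * ln(K).
import math

R = 8.314462618   # gas constant, J mol^-1 K^-1
F = 96485.33212   # Faraday constant, C mol^-1

def delta_eox_volts(K, temp_k=298.15):
    """Eox(A) - Eox(B) in volts for equilibrium A+. + B <=> A + B+."""
    return (R * temp_k / F) * math.log(K)

# A hypothetical equilibrium constant of 50 corresponds to roughly
# a 0.10 V separation in oxidation potentials at room temperature:
dE = delta_eox_volts(50.0)
```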

  2. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    PubMed Central

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

    Abstract Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  3. Improved electrode paste provides reliable measurement of galvanic skin response

    NASA Technical Reports Server (NTRS)

    Day, J. L.

    1966-01-01

    High-conductivity electrode paste is used in obtaining accurate skin resistance or skin potential measurements. The paste is isotonic to perspiration, is nonirritating and nonsensitizing, and has an extended shelf life.

  4. Reliability Evaluation and Improvement Approach of Chemical Production Man-Machine-Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment, and the environment. The reliability of each individual factor must be analyzed in order to transition gradually to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine, and environment should be considered in order to establish an effective fuzzy evaluation mechanism that truly and effectively analyzes the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact, and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment, and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. From the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.

  5. Neurology objective structured clinical examination reliability using generalizability theory

    PubMed Central

    Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.

    2015-01-01

    Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851
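    The D-study projection described above, finding how many cases push the Φ coefficient past 0.70, follows directly from the G-theory formula for a persons × cases design: Φ(n) = σ²p / (σ²p + (σ²c + σ²pc,e)/n). The variance components below are invented for illustration, not the study's estimates.

```python
# Hedged sketch of a G-theory D-study: project the Phi (dependability)
# coefficient as the number of OSCE cases grows, and find the smallest
# case count exceeding the 0.70 threshold.

def phi(var_p, var_c, var_pc, n_cases):
    """Phi coefficient for a persons x cases design with n_cases cases."""
    return var_p / (var_p + (var_c + var_pc) / n_cases)

var_p, var_c, var_pc = 0.30, 0.05, 0.25  # hypothetical variance components

n_needed = 1
while phi(var_p, var_c, var_pc, n_needed) < 0.70:
    n_needed += 1
```

    With these made-up components, three cases suffice, which happens to mirror the projection reported in the abstract; the mechanics, not the numbers, are the point.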

  6. Neurology objective structured clinical examination reliability using generalizability theory.

    PubMed

    Blood, Angela D; Park, Yoon Soo; Lukas, Rimas V; Brorson, James R

    2015-11-03

    This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. © 2015 American Academy of Neurology.

  7. Reliability, precision, and gender differences in knee internal/external rotation proprioception measurements.

    PubMed

    Nagai, Takashi; Sell, Timothy C; Abt, John P; Lephart, Scott M

    2012-11-01

    To develop and assess the reliability and precision of knee internal/external rotation (IR/ER) threshold to detect passive motion (TTDPM) and determine whether gender differences exist. Test-retest for the reliability/precision aim and cross-sectional for gender comparisons. University neuromuscular and human performance research laboratory. Ten subjects for the reliability and precision aim; twenty subjects (10 males and 10 females) for gender comparisons. All TTDPM tests were performed using a multi-mode dynamometer. Subjects performed TTDPM at two knee positions (near IR or ER end-range). Intraclass correlation coefficient (ICC(3,k)) and standard error of measurement (SEM) were used to evaluate reliability and precision. Independent t-tests were used to compare genders. TTDPM toward IR and ER at two knee positions. Intrasession and intersession reliability and precision were good (ICC = 0.68-0.86; SEM = 0.22°-0.37°). Females had significantly diminished TTDPM toward IR at the IR-test position (males: 0.77° ± 0.14°, females: 1.18° ± 0.46°, p = 0.021) and TTDPM toward IR at the ER-test position (males: 0.87° ± 0.13°, females: 1.36° ± 0.58°, p = 0.026). No other significant gender differences were found (p > 0.05). The current IR/ER TTDPM methods are reliable and accurate for test-retest or cross-sectional research designs. Gender differences were found toward IR, where the ACL acts as the secondary restraint. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

    The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and that the obtained bathtub curve therefore does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability-physics evaluations when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how the powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability-physics techniques, to model the operational reliability of electronic and photonic products.
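    Under the abstract's independence assumption, the observed post-burn-in failure rate is the sum of the statistical and physical rates, so the physics-related (aging) rate is obtained by subtracting a theoretical statistical hazard from the bathtub-curve ordinates. The sketch below uses the Rayleigh hazard h(t) = t/σ², as the abstract does; the curve ordinates and σ are invented.

```python
# Hedged sketch of the deduction step: physics-related rate =
# observed bathtub ordinate minus theoretical statistical hazard.

def rayleigh_hazard(t, sigma):
    """Hazard (failure) rate of a Rayleigh distribution: t / sigma**2."""
    return t / sigma ** 2

# Observed bathtub-curve ordinates (failures per 1000 h), hypothetical:
times = [1.0, 2.0, 3.0, 4.0]
observed = [0.9, 1.6, 2.5, 3.6]

sigma = 1.2
physics_rate = [obs - rayleigh_hazard(t, sigma)
                for t, obs in zip(times, observed)]
```

    A physically sensible decomposition should leave the residual rate nonnegative and growing with time, consistent with aging.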

  9. Reliability model of disk arrays RAID-5 with data striping

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.; D'K Novikova Freyre Shavier, G.

    2018-03-01

    Within the scope of this paper, a simplified reliability model of RAID-5 disk arrays (redundant arrays of inexpensive disks) and an advanced reliability model offered by the authors, which takes into consideration the nonzero time of faulty disk replacement and the different failure rates of disks in the normal state of the disk array and in the degraded and rebuild states, are discussed. The formula obtained by the authors for calculating the mean time to data loss (MTTDL) of RAID-5 disk arrays on the basis of the advanced model is also presented. Finally, a technique for estimating the initial reliability parameters used in the reliability model is given, along with example calculations of the mean time to data loss of RAID-5 disk arrays for different numbers of disks.
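    The authors' advanced formula is not reproduced in the abstract. As a baseline for comparison, the classical simplified Markov result for an N-disk RAID-5 with exponential disk lifetimes and repair time MTTR is MTTDL ≈ MTTF² / (N·(N−1)·MTTR), valid when MTTR ≪ MTTF. The disk parameters below are typical illustrative values, not the paper's.

```python
# Classical simplified RAID-5 mean time to data loss: data is lost when a
# second disk fails while the array is degraded/rebuilding after a first
# failure.

def mttdl_raid5(mttf_hr, mttr_hr, n_disks):
    """Simplified MTTDL (hours): MTTF^2 / (N * (N-1) * MTTR)."""
    return mttf_hr ** 2 / (n_disks * (n_disks - 1) * mttr_hr)

mttf = 1.0e6   # disk mean time to failure, hours (illustrative)
mttr = 24.0    # disk replacement + rebuild time, hours (illustrative)
years = mttdl_raid5(mttf, mttr, n_disks=8) / 8760.0
```

    The formula makes the qualitative dependencies visible: doubling the disk count roughly quarters the MTTDL, while halving the rebuild time doubles it, which is why the authors' refinement of replacement time and state-dependent failure rates matters.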

  10. Are general surgeons able to accurately self-assess their level of technical skills?

    PubMed

    Rizan, C; Ansell, J; Tilston, T W; Warren, N; Torkington, J

    2015-11-01

    Self-assessment is a way of improving technical capabilities without the need for trainer feedback. It can identify areas for improvement and promote professional medical development. The aim of this review was to identify whether self-assessment is an accurate form of technical skills appraisal in general surgery. The PubMed, MEDLINE(®), Embase(™) and Cochrane databases were searched for studies assessing the reliability of self-assessment of technical skills in general surgery. For each study, we recorded the skills assessed and the evaluation methods used. Common endpoints between studies were compared to provide recommendations based on the levels of evidence. Twelve studies met the inclusion criteria from 22,292 initial papers. There was no level 1 evidence published. All papers compared the correlation between self-appraisal versus an expert score but differed in the technical skills assessment and the evaluation tools used. The accuracy of self-assessment improved with increasing experience (level 2 recommendation), age (level 3 recommendation) and the use of video playback (level 3 recommendation). Accuracy was reduced by stressful learning environments (level 2 recommendation), lack of familiarity with assessment tools (level 3 recommendation) and in advanced surgical procedures (level 3 recommendation). Evidence exists to support the reliability of self-assessment of technical skills in general surgery. Several variables have been shown to affect the accuracy of self-assessment of technical skills. Future work should focus on evaluating the reliability of self-assessment during live operating procedures.

  11. Validation of Observations Obtained with a Liquid Mirror Telescope by Comparison with Sloan Digital Sky Survey Observations

    NASA Astrophysics Data System (ADS)

    Borra, E. F.

    2015-06-01

    The results of a search for peculiar astronomical objects using very low resolution spectra obtained with the NASA Orbital Debris Observatory (NODO) 3 m diameter liquid mirror telescope (LMT) are compared with results of spectra obtained with the Sloan Digital Sky Survey (SDSS). The main purpose of this comparison is to verify whether observations taken with this novel type of telescope are reliable. This comparison is important because LMTs are an inexpensive novel type of telescope that is very useful for astronomical surveys, particularly surveys in the time domain, and validation of the data taken with an LMT by comparison with data from a classical telescope will validate their reliability. We start from a published data analysis that classified as peculiar only 206 of the 18,000 astronomical objects observed with the NODO LMT. A total of 29 of these 206 objects were found in the SDSS. The reliability of the NODO data can be seen through the results of the detailed analysis that, in practice, incorrectly identified less than 0.3% of the 18,000 spectra as peculiar objects, most likely because they are variable stars. We conclude that the LMT gave reliable observations, comparable to those that would have been obtained with a telescope using a glass mirror.

  12. Validation of reference genes aiming accurate normalization of qRT-PCR data in Dendrocalamus latiflorus Munro.

    PubMed

    Liu, Mingying; Jiang, Jing; Han, Xiaojiao; Qiao, Guirong; Zhuo, Renying

    2014-01-01

Dendrocalamus latiflorus Munro is widely distributed in subtropical areas and plays a vital role as a valuable natural resource. Transcriptome sequencing of D. latiflorus Munro has been performed, revealing numerous genes, especially those predicted to be unique to the species. qRT-PCR has become a feasible approach to uncover gene expression profiling, and the accuracy and reliability of the results obtained depend upon the proper selection of stable reference genes for accurate normalization. Therefore, a set of suitable internal controls should be validated for D. latiflorus Munro. In this report, twelve candidate reference genes were selected and their expression stability was assessed in ten tissue samples and four leaf samples from seedlings and anther-regenerated plants of different ploidy. The PCR amplification efficiency was estimated, and the candidate genes were ranked according to their expression stability using three software packages: geNorm, NormFinder and BestKeeper. GAPDH and EF1α were found to be the most stable genes across different tissues and in all the sample pools, while CYP showed low expression stability. RPL3 performed best among the four leaf samples. The application of the verified reference genes was illustrated by analyzing ferritin and laccase expression profiles among different experimental sets. The analysis revealed biological variation in ferritin and laccase transcript expression among the tissues studied and the individual plants. geNorm, NormFinder, and BestKeeper analyses recommended different suitable reference gene(s) for normalization according to the experimental set. GAPDH and EF1α had the highest expression stability across different tissues, and RPL3 for the other sample set. This study emphasizes the importance of validating superior reference genes for qRT-PCR analysis to accurately normalize gene expression in D. latiflorus Munro.

  13. A reliability analysis framework with Monte Carlo simulation for weld structure of crane's beam

    NASA Astrophysics Data System (ADS)

    Wang, Kefei; Xu, Hongwei; Qu, Fuzheng; Wang, Xin; Shi, Yanjun

    2018-04-01

The reliability of a crane product is central to its competitiveness in engineering practice. This paper applies the Monte Carlo method to analyze the reliability of the welded metal structure of a bridge crane whose limit state function is available as a mathematical expression. We then obtain the minimum reliable weld leg height for the welds between the cover plate and the web plate of the main beam under different coefficients of variation. The paper provides a new approach and a reference for improving the inherent reliability of cranes.
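The Monte Carlo reliability calculation described in this record can be sketched as follows. The limit state g = R − S, the resistance model, and every numeric value below are illustrative assumptions, not values from the paper:

```python
import random

def weld_reliability(h_weld, cov_load, n_samples=200_000, seed=42):
    """Estimate the failure probability of a fillet weld by Monte Carlo.

    Hypothetical limit state g = R - S: the resistance R grows with the
    weld leg height h_weld, and the load S is normal with the given
    coefficient of variation. All constants are illustrative.
    """
    rng = random.Random(seed)
    mu_r = 120.0 * h_weld                    # resistance mean, arbitrary units
    mu_s, sigma_s = 500.0, 500.0 * cov_load  # load mean and std deviation
    failures = sum(
        1
        for _ in range(n_samples)
        if rng.gauss(mu_r, 0.08 * mu_r) - rng.gauss(mu_s, sigma_s) < 0
    )
    pf = failures / n_samples                # estimated failure probability
    return pf, 1.0 - pf

pf, reliability = weld_reliability(h_weld=6.0, cov_load=0.10)
```

Sweeping `h_weld` downward until `reliability` drops below a target value mirrors the paper's search for a minimum reliable weld leg height.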

  14. Comparison of the identification results of Candida species obtained by BD Phoenix™ and Maldi-TOF (Bruker Microflex LT Biotyper 3.1).

    PubMed

    Marucco, Andrea P; Minervini, Patricia; Snitman, Gabriela V; Sorge, Adriana; Guelfand, Liliana I; Moral, Laura López

    2018-02-05

    In patients with invasive fungal infections, the accurate and rapid identification of the genus Candida is of utmost importance since antimycotic sensitivity is closely related to the species. The aim of the present study was to compare the identification results of species of the genus Candida obtained by BD Phoenix™ (Becton Dickinson [BD]) and Maldi-TOF MS (Bruker Microflex LT Biotyper 3.1). A total of 192 isolates from the strain collection belonging to the Mycology Network of the Autonomous City of Buenos Aires, Argentina, were analyzed. The observed concordance was 95%. Only 10 strains (5%) were not correctly identified by the BD Phoenix™ system. The average identification time with the Yeast ID panels was 8h 22min. The BD Phoenix™ system proved to be a simple, reliable and effective method for identifying the main species of the genus Candida. Copyright © 2017 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.

  15. Web-Based Assessment of Mental Well-Being in Early Adolescence: A Reliability Study.

    PubMed

    Hamann, Christoph; Schultze-Lutter, Frauke; Tarokh, Leila

    2016-06-15

The ever-increasing use of the Internet among adolescents represents an emerging opportunity for researchers to gain access to larger samples, which can be queried over several years longitudinally. Among adolescents, young adolescents (ages 11 to 13 years) are of particular interest to clinicians as this is a transitional stage, during which depressive and anxiety symptoms often emerge. However, it remains unclear whether these youngest adolescents can accurately answer questions about their mental well-being using a Web-based platform. The aim of the study was to examine the accuracy of responses obtained from Web-based questionnaires by comparing Web-based with paper-and-pencil versions of depression and anxiety questionnaires. The primary outcome was the score on the depression and anxiety questionnaires under two conditions: (1) paper-and-pencil and (2) Web-based versions. Twenty-eight adolescents (aged 11-13 years, mean age 12.78 years and SD 0.78; 18 females, 64%) were randomly assigned to complete either the paper-and-pencil or the Web-based questionnaire first. Intraclass correlation coefficients (ICCs) were calculated to measure intrarater reliability. Intraclass correlation coefficients were calculated separately for depression (Children's Depression Inventory, CDI) and anxiety (Spence Children's Anxiety Scale, SCAS) questionnaires. On average, it took participants 17 minutes (SD 6) to answer 116 questions online. Intraclass correlation coefficient analysis revealed high intrarater reliability when comparing Web-based with paper-and-pencil responses for both the CDI (ICC=.88; P<.001) and the SCAS (ICC=.95; P<.001). According to published criteria, both of these values are in the "almost perfect" category, indicating the highest degree of reliability. The results of the study show excellent reliability of Web-based assessment in 11- to 13-year-old children as compared with the standard paper-and-pencil assessment. Furthermore, we found that Web
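The intraclass correlation used as the reliability measure in this record can be computed directly from paired scores. The sketch below implements a textbook ICC(2,1) (two-way random effects, absolute agreement, single measurement) on invented data; the study's exact ICC variant and raw scores are not given in the abstract:

```python
from statistics import mean

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of per-subject lists, one column per condition
    (e.g. paper-and-pencil vs. Web-based questionnaire totals).
    """
    n, k = len(scores), len(scores[0])
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    col_means = [mean(col) for col in zip(*scores)]
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum(
        (scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n)
        for j in range(k)
    )
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical paired questionnaire totals: [paper, web] per adolescent.
pairs = [[12, 13], [25, 24], [7, 8], [18, 18], [30, 29], [4, 5]]
icc = icc_2_1(pairs)
```

With near-identical paired totals like these, the ICC comes out close to 1, matching the "almost perfect" range reported in the abstract.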

  16. A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning.

    PubMed

    Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui

    2016-05-25

    In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles.

  17. A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning

    PubMed Central

    Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui

    2016-01-01

    In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles. PMID:27231917

  18. 75 FR 71613 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... Reliability Standards. The proposed Reliability Standards were designed to prevent instability, uncontrolled... Reliability Standards.\\2\\ The proposed Reliability Standards were designed to prevent instability... the SOLs, which if exceeded, could expose a widespread area of the bulk electric system to instability...

  19. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those

  20. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Between Failures (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 2/lambda. Cutting the failure rate in half requires doubling the test and redesign time spent finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.

  1. Reliability Growth and Its Applications to Dormant Reliability

    DTIC Science & Technology

    1981-12-01

ability to make projections about future reliability (Ref 9:41-42). Barlow and Scheuer Model. Richard E. Barlow and Ernest M. Scheuer, of the University...Reliability Growth Prediction Models," Operations Research, 18(1):52-65 (January/February 1970). 7. Bauer, John, William Hadley, and Robert Dietz... Texarkana, Texas, May 1973. (AD 768 119). 10. Bonis, Austin J. "Reliability Growth Curves for One Shot Devices," Proceedings 1977 Annual Reliability and

  2. Reliability analysis of airship remote sensing system

    NASA Astrophysics Data System (ADS)

    Qin, Jun

    1998-08-01

The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images in remote sensing of catastrophes and the environment, is a complex mixed system. Its sensor platform is a remotely controlled airship. The achievement of a remote sensing mission depends on a series of factors, so it is very important to analyze the reliability of the ARSS. First, the system model was simplified from a multi-state system to a two-state system on the basis of the results of the failure mode and effect analysis and the failure mode, effect and criticality analysis. The failure tree was created after analyzing all factors and their interrelations. This failure tree includes four branches: the engine subsystem, the remote control subsystem, the airship construction subsystem, and the flying meteorology and climate subsystem. Through failure tree analysis and classification of basic events, the weak links were discovered. Test runs showed no difference from the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. The system's reliability was raised from 89 percent to 92 percent with the redesign of the man-machine interactive interface and the addition of the secondary battery group and the secondary remote control equipment.

  3. Oximeter reliability in a subzero environment.

    PubMed

    Macnab, A J; Smith, M; Phillips, N; Smart, P

    1996-11-01

    Pulse oximeters optimize care in the pre-hospital setting. As British Columbia ambulance teams often provide care in subzero temperatures, we conducted a study to determine the reliability of 3 commercially-available portable oximeters in a subzero environment. We hypothesized that there is no significant difference between SaO2 readings obtained using a pulse oximeter at room temperature and a pulse oximeter operating at sub-zero temperatures. Subjects were stable normothermic children in intensive care on Hewlett Packard monitors (control unit) at room temperature. The test units were packed in dry ice in an insulated bin (temperature - 15 degrees C to -30 degrees C) and their sensors placed on the subjects, contralateral to the control sensors. Data were collected simultaneously from test and control units immediately following validation of control unit values by co-oximetry (blood gas). No data were unacceptable. Two units (Propaq 106EC and Nonin 8500N) functioned well to < -15 degrees C, providing data comparable to those obtained from the control unit (p < 0.001). The Siemens Micro O2 did not function at the temperatures tested. Monitor users who require equipment to function in subzero environments (military, Coast Guard, Mountain Rescue) should ensure that function is reliable, and could test units using this method.

  4. Calibration Adjustment of the Mid-infrared Analyzer for an Accurate Determination of the Macronutrient Composition of Human Milk.

    PubMed

    Billard, Hélène; Simon, Laure; Desnots, Emmanuelle; Sochard, Agnès; Boscher, Cécile; Riaublanc, Alain; Alexandre-Gouabau, Marie-Cécile; Boquien, Clair-Yves

    2016-08-01

    Human milk composition analysis seems essential to adapt human milk fortification for preterm neonates. The Miris human milk analyzer (HMA), based on mid-infrared methodology, is convenient for a unique determination of macronutrients. However, HMA measurements are not totally comparable with reference methods (RMs). The primary aim of this study was to compare HMA results with results from biochemical RMs for a large range of protein, fat, and carbohydrate contents and to establish a calibration adjustment. Human milk was fractionated in protein, fat, and skim milk by covering large ranges of protein (0-3 g/100 mL), fat (0-8 g/100 mL), and carbohydrate (5-8 g/100 mL). For each macronutrient, a calibration curve was plotted by linear regression using measurements obtained using HMA and RMs. For fat, 53 measurements were performed, and the linear regression equation was HMA = 0.79RM + 0.28 (R(2) = 0.92). For true protein (29 measurements), the linear regression equation was HMA = 0.9RM + 0.23 (R(2) = 0.98). For carbohydrate (15 measurements), the linear regression equation was HMA = 0.59RM + 1.86 (R(2) = 0.95). A homogenization step with a disruptor coupled to a sonication step was necessary to obtain better accuracy of the measurements. Good repeatability (coefficient of variation < 7%) and reproducibility (coefficient of variation < 17%) were obtained after calibration adjustment. New calibration curves were developed for the Miris HMA, allowing accurate measurements in large ranges of macronutrient content. This is necessary for reliable use of this device in individualizing nutrition for preterm newborns. © The Author(s) 2015.
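The calibration lines quoted in this abstract (e.g. HMA = 0.79 RM + 0.28 for fat) can be inverted to correct a raw analyzer reading back toward the reference method. The function below is our own illustrative reading of how such a calibration adjustment is applied; the slopes and intercepts are the regression coefficients from the abstract, while the sample reading is invented:

```python
def correct_hma_reading(hma_value, slope, intercept):
    """Invert a calibration line HMA = slope * RM + intercept to estimate
    the reference-method (RM) value from a raw Miris HMA reading."""
    return (hma_value - intercept) / slope

# (slope, intercept) pairs quoted in the abstract, per macronutrient.
CALIBRATION = {
    "fat": (0.79, 0.28),
    "protein": (0.90, 0.23),
    "carbohydrate": (0.59, 1.86),
}

# Hypothetical raw fat reading of 3.5 g/100 mL from the analyzer:
fat_rm = correct_hma_reading(3.5, *CALIBRATION["fat"])
```

Note how the fat slope below 1 means raw HMA readings understate high fat contents, so the corrected estimate is larger than the raw reading.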

  5. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.

  6. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
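The core idea of importance-sampling reliability estimation, sampling from a density concentrated near the failure domain and reweighting by the likelihood ratio, can be shown in miniature. The sketch below estimates the tail probability P(U > beta) for a standard normal variable by sampling from N(beta, 1); it is a minimal fixed-density illustration, not the paper's adaptive scheme:

```python
import math
import random

def failure_prob_importance_sampling(beta, n=50_000, seed=7):
    """Estimate pf = P(U > beta), U ~ N(0, 1), by importance sampling.

    Samples are drawn from N(beta, 1), centered on the failure region,
    and each failing sample is weighted by the likelihood ratio
    phi(u) / phi(u - beta) so the estimate stays unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gauss(beta, 1.0)   # sample near the limit state u = beta
        if u > beta:               # indicator of the failure event
            total += math.exp(-0.5 * u * u + 0.5 * (u - beta) ** 2)
    return total / n

pf = failure_prob_importance_sampling(beta=4.0)
```

Plain Monte Carlo would need on the order of 10^7 samples to see this event a few hundred times; the shifted sampling density makes roughly half of the 50,000 draws informative, which is the over-sampling reduction the AIS method pursues adaptively.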

  7. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
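The MPP search described above is typically an optimization in standard normal space: find the point on the limit state g(u) = 0 closest to the origin, whose distance is the reliability index beta. The sketch below uses the textbook Hasofer-Lind (HL-RF) iteration on a hypothetical linear limit state; it illustrates the MPP/FORM idea generically, not this paper's sensitivity derivations:

```python
import math

def hlrf_mpp(g, grad, u0=(0.0, 0.0), iters=50):
    """Hasofer-Lind (HL-RF) iteration for the most probable point (MPP)
    of a limit state g(u) = 0 in standard normal space. Returns the MPP,
    the reliability index beta = ||u*||, and the FORM estimate of pf."""
    u = list(u0)
    for _ in range(iters):
        gv, gr = g(u), grad(u)
        norm2 = sum(c * c for c in gr)
        # Project the current point onto the linearized limit state.
        scale = (sum(c * x for c, x in zip(gr, u)) - gv) / norm2
        u = [scale * c for c in gr]
    beta = math.sqrt(sum(x * x for x in u))
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # pf ~= Phi(-beta)
    return u, beta, pf

# Hypothetical linear limit state g(u) = 6 - u1 - 2*u2.
def g(u):
    return 6.0 - u[0] - 2.0 * u[1]

def grad(u):
    return [-1.0, -2.0]

mpp, beta, pf = hlrf_mpp(g, grad)
```

For a linear limit state the iteration converges in one step to beta = 6/sqrt(5); for nonlinear g the same loop repeatedly relinearizes at the current iterate, which is where FORM's first-order approximation enters.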

  8. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

This paper presents a reliability evaluation methodology to obtain statistical reliability information for memory chips intended for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.

  9. Reliable Gene Expression Measurements from Fine Needle Aspirates of Pancreatic Tumors

    PubMed Central

    Anderson, Michelle A.; Brenner, Dean E.; Scheiman, James M.; Simeone, Diane M.; Singh, Nalina; Sikora, Matthew J.; Zhao, Lili; Mertens, Amy N.; Rae, James M.

    2010-01-01

Background and aims: Biomarker use for pancreatic cancer diagnosis has been impaired by a lack of samples suitable for reliable quantitative RT-PCR (qRT-PCR). Fine needle aspirates (FNAs) from pancreatic masses were studied to define potential causes of RNA degradation and develop methods for accurately measuring gene expression. Methods: Samples from 32 patients were studied. RNA degradation was assessed by using a multiplex PCR assay for varying lengths of glyceraldehyde-3-phosphate dehydrogenase, and effects on qRT-PCR were determined by using a 150-bp and an 80-bp amplicon for RPS6. Potential causes of and methods to circumvent RNA degradation were studied by using FNAs from a pancreatic cancer xenograft. Results: RNA extracted from pancreatic mass FNAs was extensively degraded. Fragmentation was related to needle bore diameter and could not be overcome by alterations in aspiration technique. Multiplex PCR for glyceraldehyde-3-phosphate dehydrogenase could distinguish samples that were suitable for qRT-PCR. The use of short PCR amplicons (<100 bp) provided reliable gene expression analysis from FNAs. When appropriate samples were used, the assay was highly reproducible for gene copy number with minimal (0.0003 or about 0.7% of total) variance. Conclusions: The degraded properties of endoscopic FNAs markedly affect the accuracy of gene expression measurements. Our novel approach to designate specimens “informative” for qRT-PCR allowed accurate molecular assessment for the diagnosis of pancreatic diseases. PMID:20709792

  10. Measurement Error in Multilevel Models of School and Classroom Environments: Implications for Reliability, Precision, and Prediction. CRESST Report 828

    ERIC Educational Resources Information Center

    Schweig, Jonathan

    2013-01-01

    Measuring school and classroom environments has become central in a nation-wide effort to develop comprehensive programs that measure teacher quality and teacher effectiveness. Formulating successful programs necessitates accurate and reliable methods for measuring these environmental variables. This paper uses a generalizability theory framework…

  11. A Cost-Effective Transparency-Based Digital Imaging for Efficient and Accurate Wound Area Measurement

    PubMed Central

    Li, Pei-Nan; Li, Hong; Wu, Mo-Li; Wang, Shou-Yu; Kong, Qing-You; Zhang, Zhen; Sun, Yuan; Liu, Jia; Lv, De-Cheng

    2012-01-01

Wound measurement is an objective and direct way to trace the course of wound healing and to evaluate therapeutic efficacy. Nevertheless, the accuracy and efficiency of the current measurement methods need to be improved. Taking advantage of the reliability of transparency tracing and the accuracy of computer-aided digital imaging, a transparency-based digital imaging approach is established, by which data from 340 wound tracings were collected from 6 experimental groups (8 rats/group) at 8 experimental time points (Day 1, 3, 5, 7, 10, 12, 14 and 16) and orderly archived onto a transparency model sheet. This sheet was scanned and its image was saved in JPG form. Since a set of standard area units from 1 mm2 to 1 cm2 was integrated into the sheet, the tracing areas in the JPG image were measured directly, using the “Magnetic lasso tool” in the Adobe Photoshop program. The pixel values/PVs of individual outlined regions were obtained and recorded at an average speed of 27 seconds per region. All PV data were saved in an Excel form and their corresponding areas were calculated simultaneously by the formula of Y (PV of the outlined region)/X (PV of standard area unit) × Z (area of standard unit). It took a researcher less than 3 hours to finish the area calculation of 340 regions. In contrast, over 3 hours were expended by three skillful researchers to accomplish the above work with the traditional transparency-based method. Moreover, unlike the results obtained traditionally, little variation was found among the data calculated by different persons and with standard area units of different sizes and shapes. Given its accurate, reproducible and efficient properties, this transparency-based digital imaging approach would be of significant value in basic wound healing research and clinical practice. PMID:22666449
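The area formula quoted in this abstract, Y (PV of the outlined region) / X (PV of the standard area unit) × Z (area of the standard unit), is a one-line computation. The sketch below applies it with invented pixel counts, since the record does not give raw PV data:

```python
def region_area(pv_region, pv_unit, unit_area_mm2):
    """Convert a traced region's pixel value (PV) into physical area via
    the abstract's formula: area = (PV_region / PV_unit) * unit_area."""
    return pv_region / pv_unit * unit_area_mm2

# Hypothetical scan: a 1 cm^2 (100 mm^2) standard unit covering 40,000
# pixels, and a traced wound outline covering 18,500 pixels.
area = region_area(18_500, 40_000, 100.0)  # wound area in mm^2
```

Because the standard unit is scanned on the same sheet as the tracings, the pixel-to-area ratio self-calibrates for scanner resolution, which is why results vary little across operators.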

  12. Reliability and validity of the combined heart rate and movement sensor Actiheart.

    PubMed

    Brage, S; Brage, N; Franks, P W; Ekelund, U; Wareham, N J

    2005-04-01

    Accurate quantification of physical activity energy expenditure is a key part of the effort to understand disorders of energy metabolism. The Actiheart, a combined heart rate (HR) and movement sensor, is designed to assess physical activity in populations. To examine aspects of Actiheart reliability and validity in mechanical settings and during walking and running. In eight Actiheart units, technical reliability (coefficients of variation, CV) and validity for movement were assessed with sinusoid accelerations (0.1-20 m/s(2)) and for HR by simulated R-wave impulses (25-250 bpm). Agreement between Actiheart and ECG was determined during rest and treadmill locomotion (3.2-12.1 km/h). Walking and running intensity (in J/min/kg) was assessed with indirect calorimetry in 11 men and nine women (26-50 y, 20-29 kg/m(2)) and modelled from movement, HR, and movement + HR by multiple linear regression, adjusting for sex. Median intrainstrument CV was 0.5 and 0.03% for movement and HR, respectively. Corresponding interinstrument CV values were 5.7 and 0.03% with some evidence of heteroscedasticity for movement. The linear relationship between movement and acceleration was strong (R(2) = 0.99, P < 0.001). Simulated R-waves were detected within 1 bpm from 30 to 250 bpm. The 95% limits of agreement between Actiheart and ECG were -4.2 to 4.3 bpm. Correlations with intensity were generally high (R(2) > 0.84, P < 0.001) but significantly highest when combining HR and movement (SEE < 1 MET). The Actiheart is technically reliable and valid. Walking and running intensity may be estimated accurately but further studies are needed to assess validity in other activities and during free-living. The study received financial support from the Wellcome Trust and SB was supported by a scholarship from Unilever, UK.
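The "95% limits of agreement" between the Actiheart and the ECG reported above are a Bland-Altman statistic: the mean of the paired differences plus or minus 1.96 standard deviations. The sketch below computes them on invented heart-rate pairs, since the study's raw data are not in the abstract:

```python
from statistics import mean, stdev

def limits_of_agreement(a, b):
    """95% Bland-Altman limits of agreement between two paired measurement
    series: mean difference +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired heart rates (bpm): Actiheart vs. reference ECG.
actiheart = [61, 75, 90, 104, 121, 139, 150]
ecg = [60, 76, 88, 105, 120, 141, 149]
low, high = limits_of_agreement(actiheart, ecg)
```

Narrow, near-symmetric limits around zero (as in the reported -4.2 to 4.3 bpm) indicate the device neither systematically over- nor under-reads relative to ECG.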

  13. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it

    2014-12-28

A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.

  14. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

Army Research Laboratory report ARL-TR-6761, January 2014: Accurate Arabic Script Language/Dialect Classification, by Stephen C. Tratz, Computational and Information Sciences Directorate. Approved for public release.

  15. Accurate collision-induced line-coupling parameters for the fundamental band of CO in He - Close coupling and coupled states scattering calculations

    NASA Technical Reports Server (NTRS)

    Green, Sheldon; Boissoles, J.; Boulet, C.

    1988-01-01

    The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.

  16. Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.

    PubMed

    Klotz, Dino; Grave, Daniel A; Rothschild, Avner

    2017-08-09

The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to surface recombination loss. The charge transfer efficiency, ηt, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of ηt by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine ηt, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of ηt because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the ηt values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of ηt values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity. Statistical analysis of the results obtained for our model hematite photoanode shows good correlation between different methods for
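The definition of ηt quoted above reduces to a one-line ratio of hole fluxes; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
def charge_transfer_efficiency(j_transfer, j_recombination):
    """eta_t: flux of holes that oxidize water divided by the total flux of
    holes reaching the surface (transfer + surface recombination)."""
    total = j_transfer + j_recombination
    if total <= 0:
        raise ValueError("total surface hole flux must be positive")
    return j_transfer / total
```

IMPS-type analysis is attractive precisely because it gives independent handles on the two terms of this ratio, whereas voltammetry measures only their sum.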

  17. Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).

    PubMed

    Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K

    2013-02-01

    We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS was examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents and this complex information can be obtained relatively quickly.

  18. Reliable Breakdown Obtained in Silicon Carbide Rectifiers

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.

    1997-01-01

The High Temperature Integrated Electronics and Sensor (HTIES) Program at the NASA Lewis Research Center is currently developing silicon carbide (SiC) for use in harsh conditions where silicon, the semiconductor used in nearly all of today's electronics, cannot function. Silicon carbide's demonstrated ability to function under extreme high-temperature, high-power, and/or high-radiation conditions will enable significant improvements to a far-ranging variety of applications and systems. These range from improved high-voltage switching for energy savings in public electric power distribution and electric vehicles, to more powerful microwave electronics for radar and cellular communications, to sensors and controls for cleaner-burning, more fuel-efficient jet aircraft and automobile engines.

  19. Rapid, cost-effective and accurate quantification of Yucca schidigera Roezl. steroidal saponins using HPLC-ELSD method.

    PubMed

    Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona

    2017-04-15

Yucca GRAS-labelled saponins have been and are increasingly used in the food/feed, pharmaceutical and cosmetic industries. Existing techniques presently used for Yucca steroidal saponin quantification remain either inaccurate and misleading or accurate but time-consuming and cost-prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility. This method does not over- or under-estimate levels of steroidal saponins. The HPLC/ELSD method does not require each and every pure saponin standard to quantify the group of steroidal saponins. The method is a time- and cost-effective technique that is suitable for routine industrial analyses. HPLC/ELSD methods yield saponin fingerprints specific to the plant species. As the method is capable of distinguishing saponin profiles from taxonomically distant species, it can unravel plant adulteration issues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. The role of test-retest reliability in measuring individual and group differences in executive functioning.

    PubMed

    Paap, Kenneth R; Sawi, Oliver

    2016-12-01

    Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
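The trade-off the abstract describes for difference scores can be illustrated with the classical-test-theory formula for the reliability of a difference score D = X − Y; this is a standard textbook result, not the Miller and Ulrich model itself:

```python
def difference_score_reliability(r_xx, r_yy, r_xy, s_x=1.0, s_y=1.0):
    """Classical-test-theory reliability of D = X - Y, given the
    reliabilities of the two conditions (r_xx, r_yy), their inter-
    correlation (r_xy), and their standard deviations (s_x, s_y)."""
    num = s_x**2 * r_xx + s_y**2 * r_yy - 2 * s_x * s_y * r_xy
    den = s_x**2 + s_y**2 - 2 * s_x * s_y * r_xy
    return num / den
```

With equal variances, two conditions each measured at reliability 0.8 but correlated at 0.6 yield a difference-score reliability of only 0.5, which mirrors the pattern reported above: the more the two conditions share, the less reliable their contrast.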

  1. On canonical cylinder sections for accurate determination of contact angle in microgravity

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  2. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
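The kind of Markov reliability model such tools would construct automatically can be sketched as a small discrete-time chain with an absorbing failure state; the states and transition probabilities below are purely illustrative:

```python
import numpy as np

# States: 0 = operational, 1 = degraded, 2 = failed (absorbing).
# Per-step transition probabilities are illustrative, not from any real system.
P = np.array([[0.98, 0.015, 0.005],
              [0.00, 0.95,  0.05 ],
              [0.00, 0.00,  1.00 ]])

def reliability_after(steps, P=P):
    """Probability that a system starting in the operational state has not
    yet been absorbed into the failed state after the given number of steps."""
    dist = np.linalg.matrix_power(P, steps)[0]  # state distribution from state 0
    return dist[:2].sum()                        # mass in the non-failed states
```

Automating the construction of `P` from a top-down system description is exactly the bottleneck the abstract proposes to remove.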

  3. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141

  4. Reliable contact fabrication on nanostructured Bi2Te3-based thermoelectric materials.

    PubMed

    Feng, Shien-Ping; Chang, Ya-Huei; Yang, Jian; Poudel, Bed; Yu, Bo; Ren, Zhifeng; Chen, Gang

    2013-05-14

A cost-effective and reliable Ni-Au contact on nanostructured Bi2Te3-based alloys for a solar thermoelectric generator (STEG) is reported. The use of MPS SAMs creates strong covalent binding and more, evenly distributed nucleation sites for electroplating contact electrodes on nanostructured thermoelectric materials. A reliable high-performance flat-panel STEG can be obtained using this new method.

  5. Report on Wind Turbine Subsystem Reliability - A Survey of Various Databases (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, S.

    2013-07-01

The wind industry has been challenged by premature subsystem/component failures. Various reliability data collection efforts have demonstrated their value in supporting wind turbine reliability and availability research & development and industrial activities. However, most information on these data collection efforts is scattered and not available in a centralized place. With the objective of obtaining updated reliability statistics of wind turbines and/or subsystems so as to benefit future wind reliability and availability activities, this report was put together based on a survey of various reliability databases that are accessible directly or indirectly by NREL. For each database, whenever feasible, a brief description summarizing database population, life span, and data collected is given along with its features & status. Then selected results deemed beneficial to the industry and generated from the database are highlighted. This report concludes with several observations obtained throughout the survey and several reliability data collection opportunities for the future.

  6. Classification of maltreatment-related mortality by Child Death Review teams: How reliable are they?

    PubMed

    Parrish, Jared W; Schnitzer, Patricia G; Lanier, Paul; Shanahan, Meghan E; Daniels, Julie L; Marshall, Stephen W

    2017-05-01

Accurate estimation of the incidence of maltreatment-related child mortality depends on reliable child fatality review. We examined the inter-rater reliability of maltreatment designation for two Alaskan Child Death Review (CDR) panels. Two different multidisciplinary CDR panels each reviewed a series of 101 infant and child deaths (ages 0-4 years) in Alaska. Both panels independently reviewed identical medical, autopsy, law enforcement, child welfare, and administrative records for each death utilizing the same maltreatment criteria. Percent agreement for maltreatment was 64.7% with a weighted Kappa of 0.61 (95% CI 0.51, 0.70). Across maltreatment subtypes, agreement was highest for abuse (69.3%) and lowest for negligence (60.4%). Discordance was higher if the mother was unmarried or a smoker, if residence was rural, or if there was a family history of child protective services report(s). Incidence estimates did not depend on which panel's data were used. There is substantial room for improvement in the reliability of CDR panel assessment of maltreatment-related mortality. Standardized decision guidance for CDR panels may improve the reliability of their data. Copyright © 2017 Elsevier Ltd. All rights reserved.
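The weighted Kappa reported above can be computed from paired panel ratings; below is a generic implementation with linear or quadratic disagreement weights (the study's exact weighting scheme is not specified here, so treat the choice of weights as an assumption):

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters over ordinal categories 0..n_cat-1.
    kappa_w = 1 - sum(W * O) / sum(W * E), with O the observed agreement
    matrix, E the chance-expected matrix, and W the disagreement weights."""
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(ratings_a, ratings_b):
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # product of the marginals
    i, j = np.indices((n_cat, n_cat))
    W = np.abs(i - j).astype(float)              # linear weights
    if weights == "quadratic":
        W = W ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement gives kappa = 1, chance-level agreement gives 0, so the panels' 0.61 sits in the "substantial agreement" range while leaving the headroom the authors point to.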

  7. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.
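The delay difference the method must account for comes from chromatic dispersion; to first order it is Δτ = D · L · Δλ. This constant-D approximation breaks down over the wide 1.31/1.55 μm spacing, which is why the delay is measured directly; the value D ≈ 17 ps/(nm·km), typical of standard single-mode fiber near 1.55 μm, is used below only for illustration:

```python
def dispersion_delay_ps(D_ps_per_nm_km, length_km, wavelength_spacing_nm):
    """First-order estimate of the delay difference between two wavelengths:
    delta_tau = D * L * delta_lambda. Valid only when the dispersion D is
    approximately constant across the wavelength spacing."""
    return D_ps_per_nm_km * length_km * wavelength_spacing_nm
```

For a 1 nm spacing over 100 km this estimate is on the order of 1.7 ns, so resolving it to well below a nanosecond is the crux of the measurement.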

  8. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
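Under the dichromatic reflection model each pixel is a mixture of a body-reflection color and an interface (illuminant) color, and the mixing coefficients can be recovered by least squares. This is a minimal sketch assuming the two component colors are known, not the paper's full pipeline:

```python
import numpy as np

def dichromatic_coefficients(pixel_rgb, body_rgb, illum_rgb):
    """Least-squares estimate of (m_b, m_s) in the dichromatic model
    pixel = m_b * body_color + m_s * illuminant_color, where m_b and m_s
    carry the implicit geometry (shading and specularity) at that pixel."""
    A = np.column_stack([body_rgb, illum_rgb])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(pixel_rgb, dtype=float), rcond=None)
    return coeffs
```

Recoloring then amounts to keeping the per-pixel (m_b, m_s) maps, which encode the geometry, and swapping in the target body color.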

  9. Accurate bond energies of hydrocarbons from complete basis set extrapolated multi-reference singles and doubles configuration interaction.

    PubMed

    Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A

    2011-12-09

Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties (BDEs) is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: 1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; 2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and 3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values of CC and CH bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
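A two-parameter complete-basis-set extrapolation of the kind described can be solved in closed form from energies at two basis-set cardinal numbers. The sketch below assumes the common inverse-cubic form E(X) = E_CBS + A·X⁻³; the paper's exact two-parameter scheme may differ:

```python
def cbs_extrapolate(E1, X1, E2, X2):
    """Two-point extrapolation assuming E(X) = E_CBS + A * X**-3.
    Given correlation energies E1, E2 at cardinal numbers X1, X2
    (e.g. X = 3 for cc-pVTZ, 4 for cc-pVQZ), returns (E_CBS, A)."""
    A = (E1 - E2) / (X1**-3 - X2**-3)
    E_cbs = E1 - A * X1**-3
    return E_cbs, A
```

Because the model has exactly two parameters, two basis-set levels determine it uniquely; using more levels would turn this into a least-squares fit.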

  10. Reliability of primary caregivers reports on lifestyle behaviours of European pre-school children: the ToyBox-study.

    PubMed

    González-Gil, E M; Mouratidou, T; Cardon, G; Androutsos, O; De Bourdeaudhuij, I; Góźdź, M; Usheva, N; Birnbaum, J; Manios, Y; Moreno, L A

    2014-08-01

    Reliable assessments of health-related behaviours are necessary for accurate evaluation on the efficiency of public health interventions. The aim of the current study was to examine the reliability of a self-administered primary caregivers questionnaire (PCQ) used in the ToyBox-intervention. The questionnaire consisted of six sections addressing sociodemographic and perinatal factors, water and beverages consumption, physical activity, snacking and sedentary behaviours. Parents/caregivers from six countries (Belgium, Bulgaria, Germany, Greece, Poland and Spain) were asked to complete the questionnaire twice within a 2-week interval. A total of 93 questionnaires were collected. Test-retest reliability was assessed using intra-class correlation coefficient (ICC). Reliability of the six questionnaire sections was assessed. A stronger agreement was observed in the questions addressing sociodemographic and perinatal factors as opposed to questions addressing behaviours. Findings showed that 92% of the ToyBox PCQ had a moderate-to-excellent test-retest reliability (defined as ICC values from 0.41 to 1) and less than 8% poor test-retest reliability (ICC < 0.40). Out of the total ICC values, 67% showed good-to-excellent reliability (ICC from 0.61 to 1). We conclude that the PCQ is a reliable tool to assess sociodemographic characteristics, perinatal factors and lifestyle behaviours of pre-school children and their families participating in the ToyBox-intervention. © 2014 World Obesity.
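Test-retest agreement of the kind reported is quantified with an intra-class correlation; below is a minimal one-way ICC(1,1) implementation. The ToyBox analysis may have used a different ICC variant, so treat this as a generic sketch:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) from an (n_subjects, k_ratings) array:
    (MSB - MSW) / (MSB + (k-1) * MSW), with between- and within-subject
    mean squares from a one-way ANOVA decomposition."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    row_means = data.mean(axis=1)
    msb = k * ((row_means - data.mean()) ** 2).sum() / (n - 1)
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

On the abstract's scale, values from 0.41 to 1 would count as moderate-to-excellent and values below 0.40 as poor.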

  11. Rapid and accurate peripheral nerve detection using multipoint Raman imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro

    2017-02-01

Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies to detect nerves for nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that has enabled simultaneous spectra measurement from different locations (n=32) of a sample. Five seconds is sufficient for measuring n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. Because a multipoint-Raman-derived tissue discrimination image is too sparse to visualize nerve arrangement on its own, we compensated for its sparse spatial information with morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of Raman measurement points in an arbitrary structure is determined as nerve. Setting a nerve detection criterion on the portion of "nerve" points in the structure as 40% or more, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique utilizing a sparse multipoint Raman image and a bright-field image has enabled rapid, safe, and accurate detection of peripheral nerves.
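The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions; a minimal helper (any counts passed to it here are illustrative, not the paper's underlying tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of true nerves detected.
    Specificity = TN / (TN + FP): fraction of non-nerve tissue correctly
    rejected."""
    return tp / (tp + fn), tn / (tn + fp)
```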

  12. Automatic and accurate reconstruction of distal humerus contours through B-Spline fitting based on control polygon deformation.

    PubMed

    Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A

    2014-12-01

The strong advent of computer-assisted technologies experienced by modern orthopedic surgery prompts the expansion of computationally efficient techniques built on the broad base of computer-aided engineering tools that are readily available. However, one of the common challenges faced during the current developmental phase remains the lack of reliable frameworks to allow a fast and precise conversion of the anatomical information acquired through computer tomography to a format that is acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data to a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computer tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry that were obtained through different contact-based data acquisition or data-processing methods. © IMechE 2014.

  13. A review of the liquid metal diffusion data obtained from the space shuttle endeavour mission STS-47 and the space shuttle columbia mission STS-52

    NASA Astrophysics Data System (ADS)

    Shirkhanzadeh, Morteza

Accurate data on liquid-phase solute diffusion coefficients are required to validate condensed-matter physics theories. However, the data accuracy required to discriminate between competing theoretical models is 1 to 2 percent (1). Smith and Scott (2) have recently used the measured values of diffusion coefficients for Pb-Au in microgravity to validate the theoretical values of the diffusion coefficients derived from molecular dynamics simulations and several Enskog hard-sphere models. The microgravity data used were obtained from the liquid diffusion experiments conducted on board the Space Shuttle Endeavour (mission STS-47) and the Space Shuttle Columbia (mission STS-52). Based on the analysis of the results, it was claimed that the measured values of diffusion coefficients were consistent with the theoretical results and that the data fit a linear relationship with a slope slightly greater than predicted by the molecular dynamics simulations. These conclusions, however, contradict the claims made in previous publications (3-5), where it was reported that the microgravity data obtained from the shuttle experiments fit the fluctuation theory (D proportional to T²). A thorough analysis of the data will be presented to demonstrate that the widely reported microgravity results obtained from shuttle experiments are neither reliable nor sufficiently accurate to discriminate between competing theoretical models. References: 1. J.P. Garandet, G. Mathiak, V. Botton, P. Lehmann and A. Griesche, Int. J. Thermophysics, 25, 249 (2004). 2. P.J. Scott and R.W. Smith, J. Appl. Physics 104, 043706 (2008). 3. R.W. Smith, Microgravity Sci. Technol. XI (2) 78-84 (1998). 4. Smith et al., Ann. N.Y. Acad. Sci. 974:56-67 (2002) (retracted). 5. R.A. Herring et al., J. Jpn. Soc. Microgravity Appl., Vol. 16, 234-244 (1999).

  14. A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs

    NASA Technical Reports Server (NTRS)

    Bloomfield, Harvey S.

    1991-01-01

    The purpose was to obtain reliability and mass perspectives on selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability and low-mass lunar-base powerplant conceptual design.
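The single-versus-multiple powerplant and full-versus-partial redundancy combinations discussed above can be compared with a simple k-out-of-n reliability model for independent, identical units; this is an illustrative textbook model, not the paper's actual analysis:

```python
from math import comb

def k_of_n_reliability(r, n, k):
    """Probability that at least k of n independent, identical units
    (each with reliability r) are working, via the binomial distribution."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))
```

For example, a single unit at r = 0.9 gives 0.9, while a 1-out-of-2 redundant pair gives 0.99; the mass analysis in the paper weighs exactly this kind of reliability gain against the extra hardware it costs.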

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  20. A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2010-05-19

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  2. 78 FR 41339 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ...] Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards AGENCY: Federal... Reliability Standards identified by the North American Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization. FOR FURTHER INFORMATION CONTACT: Kevin Ryan (Legal Information...

  3. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ] 61,213 (2011). \\2\\ The term ``Wide-Area...

  4. Reliability of visual and instrumental color matching.

    PubMed

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggest the use of color matching instruments as a supplementary tool in
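
The inter-rater analysis in the record above uses Fleiss' kappa, which adjusts raw percent agreement for the agreement expected by chance. As a hedged illustration of the statistic itself (not the study's actual analysis code), a minimal implementation might look like this:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for multiple raters assigning categorical ratings.

    counts[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters k.
    """
    N = len(counts)                     # number of subjects
    k = sum(counts[0])                  # raters per subject
    # mean observed agreement across subjects
    P_bar = sum(
        (sum(n * n for n in row) - k) / (k * (k - 1)) for row in counts
    ) / N
    # chance agreement from the marginal category proportions
    total = N * k
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)
```

With perfect agreement the statistic is 1, while values near 0 indicate agreement no better than chance, which is why the moderate visual-matching consistency contrasts with the spectrophotometer's high reliability.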

  5. Metamemory monitoring in mild cognitive impairment: Evidence of a less accurate episodic feeling-of-knowing.

    PubMed

    Perrotin, Audrey; Belleville, Sylvie; Isingrini, Michel

    2007-09-20

    This study aimed at exploring metamemory and specifically the accuracy of memory monitoring in mild cognitive impairment (MCI) using an episodic memory feeling-of-knowing (FOK) procedure. To this end, 20 people with MCI and 20 matched control participants were compared on the episodic FOK task. Results showed that the MCI group made less accurate FOK predictions than the control group by overestimating their memory performance on a recognition task. The MCI overestimation behavior was found to be critically related to the severity of their cognitive decline. In the light of recent neuroanatomical models showing the involvement of a temporal-frontal network underlying accurate FOK predictions, the role of memory and executive processes was evaluated. Thus, participants were also administered memory and executive neuropsychological tests. Correlation analysis revealed a between-group differential pattern indicating that FOK accuracy was primarily related to memory abilities in people with MCI, whereas it was specifically related to executive functioning in control participants. The lesser ability of people with MCI to assess their memory status accurately on an episodic FOK task is discussed in relation to both their subjective memory complaints and to their actual memory deficits which might be mediated by the brain vulnerability of their hippocampus and medial temporal system. It is suggested that their memory weakness may lead people with MCI to use other less reliable forms of memory monitoring.

  6. Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1994-01-01

    The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9-mean actuation material volume ratio, the minimum cost was obtained.

  7. [Reliability and validity of the Braden Scale for predicting pressure sore risk].

    PubMed

    Boes, C

    2000-12-01

    For more accurate and objective pressure sore risk assessment, various risk assessment tools have been developed, mainly in the USA and Great Britain; the Braden Scale for Predicting Pressure Sore Risk is one such example. Through an analysis of the German- and English-language literature on the Braden Scale, the scientific quality criteria of reliability and validity are traced and the consequences for applying the scale in Germany are demonstrated. Analysis of 4 reliability studies shows an exclusive focus on inter-rater reliability. Although the 19 validity studies examined cover many different settings, they are limited to the criteria of sensitivity and specificity (accuracy), with reported levels ranging from 35% to 100% and recommended cut-off points ranging from 10 to 19 points. The studies prove not to be comparable with one another, and they contain distortions that affect the accuracy of the scale. The analysis presented here thus shows insufficient proof of reliability and validity in the American studies. In Germany, the Braden Scale has not yet been tested against scientific criteria; such testing is needed before the scale is used in different German settings, and it can build on the designs and procedures of the American studies as well as on the problems identified in this analysis.

  8. A carbon CT system: how to obtain accurate stopping power ratio using a Bragg peak reduction technique

    NASA Astrophysics Data System (ADS)

    Lee, Sung Hyun; Sunaguchi, Naoki; Hirano, Yoshiyuki; Kano, Yosuke; Liu, Chang; Torikoshi, Masami; Ohno, Tatsuya; Nakano, Takashi; Kanai, Tatsuaki

    2018-02-01

    In this study, we investigate the performance of the Gunma University Heavy Ion Medical Center’s ion computed tomography (CT) system, which measures the residual range of a carbon-ion beam using a fluoroscopy screen, a charge-coupled-device camera, and a moving wedge absorber and collects CT reconstruction images from each projection angle. Each 2D image was obtained by changing the polymethyl methacrylate (PMMA) thickness, such that all images for one projection could be expressed as the depth distribution in PMMA. The residual range as a function of PMMA depth was related to the range in water through a calibration factor, which was determined by comparing the PMMA-equivalent thickness measured by the ion CT system to the water-equivalent thickness measured by a water column. Aluminium, graphite, PMMA, and five biological phantoms were placed in a sample holder, and the residual range for each was quantified simultaneously. A novel method of CT reconstruction to correct for the angular deflection of incident carbon ions in the heterogeneous region utilising the Bragg peak reduction (BPR) is also introduced in this paper, and its performance is compared with other methods present in the literature such as the decomposition and differential methods. Stopping power ratio values derived with the BPR method from carbon-ion CT images matched closely with the true water-equivalent length values obtained from the validation slab experiment.

  9. The need for obtaining accurate nationwide estimates of diabetes prevalence in India - Rationale for a national study on diabetes

    PubMed Central

    Anjana, R.M.; Ali, M.K.; Pradeepa, R.; Deepa, M.; Datta, M.; Unnikrishnan, R.; Rema, M.; Mohan, V.

    2011-01-01

    According to the World Diabetes Atlas, India is projected to have around 51 million people with diabetes. However, these data are based on small sporadic studies done in some parts of the country. Even the few multi-centre studies that have been done have several limitations. Also, marked heterogeneity between States limits the generalizability of results. Other studies done at various time periods also lack uniform methodology, do not take into consideration ethnic differences and have inadequate coverage. Thus, till date there has been no national study on the prevalence of diabetes which is truly representative of India as a whole. Moreover, the data on diabetes complications are even scarcer. Therefore, there is an urgent need for a large well-planned national study, which could provide reliable nationwide data, not only on the prevalence of diabetes, but also on pre-diabetes and the complications of diabetes in India. A study of this nature will have enormous public health impact and help policy makers to take action against diabetes in India. PMID:21537089

  10. Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile

    DTIC Science & Technology

    2005-09-01

    develop (with fewer dollars) more lethal and effective munitions. The munitions must stay abreast of the latest technology available to our...consuming. Computer simulations can and have provided an effective means of determining the unsteady aerodynamics and flight mechanics of guided projectile...Recently, the time-accurate technique was used to obtain improved results for Magnus moment and roll damping moment of a spinning projectile at transonic

  11. Visual judgements of steadiness in one-legged stance: reliability and validity.

    PubMed

    Haupstein, T; Goldie, P

    2000-01-01

    There is a paucity of information about the validity and reliability of clinicians' visual judgements of steadiness in one-legged stance. Such judgements are used frequently in clinical practice to support decisions about treatment in the fields of neurology, sports medicine, paediatrics and orthopaedics. The aim of the present study was to address the validity and reliability of visual judgements of steadiness in one-legged stance in a group of physiotherapists. A videotape of 20 five-second performances was shown to 14 physiotherapists with median clinical experience of 6.75 years. Validity of visual judgement was established by correlating scores obtained from an 11-point rating scale with criterion scores obtained from a force platform. In addition, partial correlations were used to control for the potential influence of body weight on the relationship between the visual judgements and criterion scores. Inter-observer reliability was quantified between the physiotherapists; intra-observer reliability was quantified between two tests four weeks apart. Mean criterion-related validity was high, regardless of whether body weight was controlled for statistically (Pearson's r = 0.84, 0.83, respectively). The standard error of estimating the criterion score was 3.3 newtons. Inter-observer reliability was high (ICC (2,1) = 0.81 at Test 1 and 0.82 at Test 2). Intra-observer reliability was high (on average ICC (2,1) = 0.88; Pearson's r = 0.90). The standard error of measurement for the 11-point scale was one unit. The finding of higher accuracy of making visual judgements than previously reported may be due to several aspects of design: use of a criterion score derived from the variability of the force signal which is more discriminating than variability of centre of pressure; use of a discriminating visual rating scale; specificity and clear definition of the phenomenon to be rated.

  12. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a software and hardware system in turn depends on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of its governing rules can be measured accordingly. This paper therefore introduces an enumeration model for measuring the reliability of interfaces, based on the case of information systems whose use is governed by the rules of the relevant agencies. The enumeration model is obtained from a software reliability calculation.

  13. Analysis of the Quality of Information Obtained About Uterine Artery Embolization From the Internet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavare, Aniket N.; Alsafi, Ali, E-mail: ali.alsafi03@imperial.ac.uk; Hamady, Mohamad S.

    Purpose: The Internet is widely used by patients to source health care-related information. We sought to analyse the quality of information available on the Internet about uterine artery embolization (UAE). Materials and Methods: We searched three major search engines for the phrase 'uterine artery embolization' and compiled the top 50 results from each engine. After excluding repeated sites, scientific articles, and links to documents, the remaining 50 sites were assessed using the LIDA instrument, which scores sites across the domains of accessibility, usability, and reliability. The Flesch reading ease score (FRES) was calculated for each of the sites. Finally, we checked the country of origin and the presence of certification by the Health On the Net Foundation (HONcode), as well as their effect on LIDA and FRES scores. Results: The following mean scores were obtained: accessibility 48/60 (80%), usability 42/54 (77%), reliability 20/51 (39%), total LIDA 110/165 (67%), and FRES 42/100 (42%). Nine sites had HONcode certification, and this was associated with significantly greater (p < 0.05) reliability and total LIDA and FRES scores. When comparing sites between the United Kingdom and the United States, there was marked variation in the quality of results obtained when searching for information on UAE (p < 0.05). Conclusion: In general, sites were well designed and easy to use. However, many scored poorly on the reliability of their information, either because they were produced in a non-evidence-based way or because they lacked currency. It is important that patients are guided to reputable, location-specific sources of information online, especially because prominent search engine rank does not guarantee reliability of information.
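
The Flesch reading ease score used above is a fixed formula over words, sentences, and syllables: FRES = 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A rough sketch follows; the syllable counter is a crude vowel-group heuristic of my own (real readability tools use dictionaries), so treat it as illustrative only:

```python
import re

def count_syllables(word):
    # crude heuristic: count vowel groups, discount a silent trailing 'e'
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fres(text):
    # Flesch reading ease: higher scores mean easier text
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))
```

A mean FRES of 42/100, as reported above, corresponds to fairly difficult text, roughly college-level reading.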

  14. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  15. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  16. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    NASA Astrophysics Data System (ADS)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is accurate to the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use the inputs from the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimation than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimation. Finally, these different attitude determination techniques will be compared in terms of accuracy, robustness, speed, and memory required in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.
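
The Kalman filtering described above can be illustrated with a deliberately simplified scalar filter (identity dynamics, one state). This is a generic textbook sketch, not Gaia's actual attitude estimator, and all variable names are mine:

```python
def kalman_update(x, P, z, R, Q=0.0):
    """One predict/correct cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z, R : measurement and measurement-noise variance
    Q    : process-noise variance (identity dynamics assumed)
    """
    P = P + Q              # predict: variance grows by process noise
    K = P / (P + R)        # Kalman gain
    x = x + K * (z - x)    # correct: blend prediction with measurement
    P = (1 - K) * P        # posterior variance shrinks
    return x, P
```

Fusing repeated noisy measurements this way drives the estimate toward the true value while the variance shrinks, which is the sense in which such a filter produces a more accurate estimate than any single star tracker measurement.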

  17. Reliability and validity of a physical activity scale among urban pregnant women in eastern China.

    PubMed

    Jiang, Hong; He, Gengsheng; Li, Mu; Fan, Yanyan; Jiang, Hongyi; Bauman, Adrian; Qian, Xu

    2015-03-01

    This study aimed to determine the reliability and validity of the physical activity scale adapted from a Danish scale for assessing physical activity among urban pregnant women in eastern China. Participants recruited in an urban setting of eastern China were asked to complete the physical activity scale, the activity diary, and to wear a pedometer for the same 4 days, followed by repeating the activity scale for another 4 days within 2 weeks. A total of 109 pregnant women completed data recording. Good reliability of the physical activity scale was observed (intraclass correlation coefficient = .87). There was also a good comparability between the activity scale and the activity diary (Spearman's r = .75 for total energy expenditure). The agreement between the scale and pedometer reading was acceptable (Spearman's r = .45). The adapted physical activity scale is a reliable and reasonably accurate instrument for estimating physical activity among urban pregnant women in eastern China. © 2012 APJPH.
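
The Spearman coefficients reported above rank both variables and correlate the ranks; for tie-free data the classic closed form is r_s = 1 − 6Σd²/(n(n²−1)). A minimal sketch of that computation (assuming no tied values, which the closed form requires):

```python
def spearman(x, y):
    # Spearman's rank correlation via the no-ties closed form
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A value of .75 between the scale and the diary, as above, indicates a strong monotone relationship even if the absolute energy-expenditure values differ.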

  18. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  19. The contribution of an asthma diagnostic consultation service in obtaining an accurate asthma diagnosis for primary care patients: results of a real-life study.

    PubMed

    Gillis, R M E; van Litsenburg, W; van Balkom, R H; Muris, J W; Smeenk, F W

    2017-05-19

    Previous studies showed that general practitioners have problems in diagnosing asthma accurately, resulting in both under- and overdiagnosis. To support general practitioners in their diagnostic process, an asthma diagnostic consultation service was set up. We evaluated the performance of this asthma diagnostic consultation service by analysing the (dis)concordance between the general practitioners' working hypotheses and the asthma diagnostic consultation service diagnoses, and the possible consequences this had on the patients' pharmacotherapy. In total 659 patients were included in this study. At this service the patients' medical history was taken and a physical examination and a histamine challenge test were carried out. We compared the general practitioners' working hypotheses with the asthma diagnostic consultation service diagnoses and the change in medication that was incurred. In 52% (n = 340) an asthma diagnosis was excluded. The diagnosis was confirmed in 42% (n = 275). Furthermore, chronic rhinitis was diagnosed in 40% (n = 261) of the patients, whereas this was noted in 25% (n = 163) by their general practitioner. The adjusted diagnosis resulted in a change of medication for more than half of all patients. In 10% (n = 63) medication was started because of a new asthma diagnosis. The 'one-stop-shop' principle was met for 53% of patients and 91% (n = 599) were referred back to their general practitioner, mostly within 6 months. Only 6% (n = 41) remained under the control of the asthma diagnostic consultation service because of severe unstable asthma. In conclusion, the asthma diagnostic consultation service helped general practitioners significantly in setting accurate diagnoses for their patients with an asthma hypothesis. This may help to diminish the problem of over- and underdiagnosis and may result in more appropriate treatment regimens. SERVICE HELPS GENERAL PRACTITIONERS MAKE ACCURATE DIAGNOSES: A consultation service can

  20. Tactile Acuity Charts: A Reliable Measure of Spatial Acuity

    PubMed Central

    Bruns, Patrick; Camargo, Carlos J.; Campanella, Humberto; Esteve, Jaume; Dinse, Hubert R.; Röder, Brigitte

    2014-01-01

    For assessing tactile spatial resolution it has recently been recommended to use tactile acuity charts which follow the design principles of the Snellen letter charts for visual acuity and involve active touch. However, it is currently unknown whether acuity thresholds obtained with this newly developed psychophysical procedure are in accordance with established measures of tactile acuity that involve passive contact with fixed duration and control of contact force. Here we directly compared tactile acuity thresholds obtained with the acuity charts to traditional two-point and grating orientation thresholds in a group of young healthy adults. For this purpose, two types of charts, using either Braille-like dot patterns or embossed Landolt rings with different orientations, were adapted from previous studies. Measurements with the two types of charts were equivalent, but generally more reliable with the dot pattern chart. A comparison with the two-point and grating orientation task data showed that the test-retest reliability of the acuity chart measurements after one week was superior to that of the passive methods. Individual thresholds obtained with the acuity charts agreed reasonably with the grating orientation threshold, but less so with the two-point threshold that yielded relatively distinct acuity estimates compared to the other methods. This potentially considerable amount of mismatch between different measures of tactile acuity suggests that tactile spatial resolution is a complex entity that should ideally be measured with different methods in parallel. The simple test procedure and high reliability of the acuity charts makes them a promising complement and alternative to the traditional two-point and grating orientation thresholds. PMID:24504346

  1. Validity and reliability of the Self-Reported Physical Fitness (SRFit) survey.

    PubMed

    Keith, NiCole R; Clark, Daniel O; Stump, Timothy E; Miller, Douglas K; Callahan, Christopher M

    2014-05-01

    An accurate physical fitness survey could be useful in research and clinical care. The aim of this study was to estimate the validity and reliability of the Self-Reported Fitness (SRFit) survey, an instrument that estimates muscular fitness, flexibility, cardiovascular endurance, BMI, and body composition (BC) in adults ≥ 40 years of age. 201 participants completed the SF-36 Physical Function Subscale, the International Physical Activity Questionnaire (IPAQ), the Older Adults' Desire for Physical Competence Scale (Rejeski), the SRFit survey, and the Rikli and Jones Senior Fitness Test. BC, height, and weight were measured. SRFit survey items described BC, BMI, and Senior Fitness Test movements. Correlations between the Senior Fitness Test and the SRFit survey assessed concurrent validity. Cronbach's alpha measured internal consistency within each SRFit domain. SRFit domain scores were compared with SF-36, IPAQ, and Rejeski survey scores to assess construct validity. Intraclass correlations evaluated test-retest reliability. Correlations between SRFit and the Senior Fitness Test domains ranged from 0.35 to 0.79. Cronbach's alpha scores were 0.75 to 0.85. Correlations between SRFit and other survey scores were -0.23 to 0.72 and in the expected direction. Intraclass correlation coefficients were 0.79 to 0.93. All P-values were 0.001. Initial evaluation supports the SRFit survey's validity and reliability.
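
Cronbach's alpha, used above for internal consistency, is α = k/(k−1) × (1 − Σσᵢ²/σ_total²), where σᵢ² are the item-score variances and σ_total² is the variance of the summed scores. A small self-contained sketch of the formula (illustrative, not the study's code):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    item_var = sum(variance(scores) for scores in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Values of .75 to .85, as reported above, are conventionally read as acceptable-to-good internal consistency.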

  2. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. This paper describes the numerous factors that potentially degrade system reliability, and the ways in which factors peculiar to highly reliable fault-tolerant systems are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  3. Identification and evaluation of reliable reference genes for quantitative real-time PCR analysis in tea plant (Camellia sinensis (L.) O. Kuntze)

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time polymerase chain reaction (qRT-PCR) is a commonly used technique for measuring gene expression levels due to its simplicity, specificity, and sensitivity. Reliable reference selection for the accurate quantification of gene expression under various experimental conditions is a...

  4. Validity and reliability of the de Morton Mobility Index in the subacute hospital setting in a geriatric evaluation and management population.

    PubMed

    de Morton, Natalie A; Lane, Kylie

    2010-11-01

    To investigate the clinimetric properties of the de Morton Mobility Index (DEMMI) in a Geriatric Evaluation and Management (GEM) population. A longitudinal validation study (n = 100) and inter-rater reliability study (n = 29) in a GEM population. Consecutive patients admitted to a GEM rehabilitation ward were eligible for inclusion. At hospital admission and discharge, a physical therapist assessed patients with physical performance instruments that included the 6-metre walk test, step test, Clinical Test of Sensory Organization and Balance, Timed Up and Go test, 6-minute walk test and the DEMMI. Consecutively eligible patients were included in an inter-rater reliability study between physical therapists. DEMMI admission scores were normally distributed (mean 30.2, standard deviation 16.7) and other activity limitation instruments had either a floor or a ceiling effect. Evidence of convergent, discriminant and known groups validity for the DEMMI were obtained. The minimal detectable change with 90% confidence was 10.5 (95% confidence interval 6.1-17.9) points and the minimally clinically important difference was 8.4 points on the 100-point interval DEMMI scale. The DEMMI provides clinicians with an accurate and valid method of measuring mobility for geriatric patients in the subacute hospital setting.
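
The minimal detectable change quoted above is conventionally derived from the standard error of measurement: SEM = SD × √(1 − ICC) and MDC90 = 1.645 × √2 × SEM. A sketch of that arithmetic; the SD and ICC values in the example are illustrative round numbers, not figures from the DEMMI study:

```python
import math

def sem(sd, icc):
    # standard error of measurement from a test-retest reliability coefficient
    return sd * math.sqrt(1 - icc)

def mdc90(sd, icc):
    # minimal detectable change at 90% confidence; sqrt(2) reflects the
    # difference between two measurements, each carrying one SEM of error
    return 1.645 * math.sqrt(2) * sem(sd, icc)
```

A change smaller than the MDC90 cannot be distinguished from measurement noise, which is why the abstract reports it alongside the minimally clinically important difference.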

  5. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  6. Kinetic approach to degradation mechanisms in polymer solar cells and their accurate lifetime predictions

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad Azeem; Maaroufi, AbdelKrim

    2018-07-01

This study makes a beginning toward accurate lifetime prediction for polymer solar cells. Reservations about the conventionally employed temperature-accelerated lifetime measurement test, which is unsuited to predicting reliable lifetimes of polymer solar cells, are brought to light. Critical issues with accelerated lifetime testing include assuming a reaction mechanism instead of determining it, and relying solely on the temperature acceleration of a single material property. An advanced approach comprising a set of theoretical models for estimating accurate lifetimes of polymer solar cells is therefore suggested as a suitable alternative to accelerated lifetime testing. This approach rests on systematic kinetic modeling of the possible polymer degradation mechanisms under natural weathering conditions. The proposed kinetic approach is substantiated by application to experimental aging data sets of polymer solar materials and solar cells, including P3HT polymer film, bulk-heterojunction (MDMO-PPV:PCBM) and dye-sensitized solar cells. Based on the suggested approach, an efficacious lifetime-determination formula for polymer solar cells is derived and tested on dye-sensitized solar cells. Some important merits of the proposed method are pointed out and its prospective applications are discussed.

  7. Simulation of Swap-Out Reliability For The Advance Photon Source Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    2017-06-01

The proposed upgrade of the Advanced Photon Source (APS) to a multibend-achromat lattice relies on the use of swap-out injection to accommodate the small dynamic acceptance, allow use of unusual insertion devices, and minimize collective effects at high single-bunch charge. This, combined with the short beam lifetime, will make injector reliability even more important than it is for top-up operation. We used historical data for the APS injector complex to obtain probability distributions for injector up-time and down-time durations. Using these distributions, we simulated several years of swap-out operation for the upgraded lattice for several operating modes. The results indicate that obtaining very high availability of beam in the storage ring will require improvements to injector reliability.
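The simulation strategy described above can be sketched in a few lines: draw alternating up-time and down-time durations from assumed distributions and accumulate availability. The exponential distributions and mean durations below are illustrative placeholders, not the historical APS data used in the study.

```python
import random

def simulate_availability(n_cycles=10000, seed=1):
    """Toy availability model: alternate injector up-time and down-time
    durations drawn from assumed exponential distributions."""
    rng = random.Random(seed)
    mean_up, mean_down = 200.0, 4.0   # hours; hypothetical values
    up = down = 0.0
    for _ in range(n_cycles):
        up += rng.expovariate(1.0 / mean_up)     # time injector is available
        down += rng.expovariate(1.0 / mean_down) # time spent in repair
    return up / (up + down)

print(round(simulate_availability(), 3))
```

With these assumed means the availability settles near mean_up / (mean_up + mean_down); improving injector reliability corresponds to lengthening up-times or shortening down-times in the sampled distributions.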

  8. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  9. Reliable differentiation of Meyerozyma guilliermondii from Meyerozyma caribbica by internal transcribed spacer restriction fingerprinting.

    PubMed

    Romi, Wahengbam; Keisam, Santosh; Ahmed, Giasuddin; Jeyaram, Kumaraswamy

    2014-02-28

Meyerozyma guilliermondii (anamorph Candida guilliermondii) and Meyerozyma caribbica (anamorph Candida fermentati) are closely related species of the genetically heterogeneous M. guilliermondii complex. Conventional phenotypic methods frequently misidentify the species within this complex and also confuse them with other species of the Saccharomycotina CTG clade. Even the long-established sequencing of the large subunit (LSU) rRNA gene remains ambiguous. We faced a similar problem during the identification of yeast isolates of the M. guilliermondii complex from indigenous bamboo shoot fermentation in North East India. Reliable and accurate identification methods are needed for these closely related species because of their increasing importance as emerging infectious yeasts and their associated biotechnological attributes. We targeted the highly variable internal transcribed spacer (ITS) region (ITS1-5.8S-ITS2) and identified seven restriction enzymes through in silico analysis for differentiating M. guilliermondii from M. caribbica. Fifty-five isolates of the M. guilliermondii complex which could not be delineated into species-specific taxonomic ranks by API 20 C AUX and LSU rRNA gene D1/D2 sequencing were subjected to ITS-restriction fragment length polymorphism (ITS-RFLP) analysis. TaqI ITS-RFLP distinctly differentiated the isolates into M. guilliermondii (47 isolates) and M. caribbica (8 isolates) with reproducible species-specific patterns similar to the in silico prediction. The reliability of this method was validated by ITS1-5.8S-ITS2 sequencing, mitochondrial DNA RFLP and electrophoretic karyotyping. We herein describe a reliable ITS-RFLP method for distinct differentiation of the frequently misidentified M. guilliermondii from M. caribbica. Even though in silico analysis differentiated other closely related species of the M. guilliermondii complex from the above two species, this is yet to be confirmed by in vitro analysis using reference strains.
This method can be used
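The in silico digest step described in the record above can be illustrated with a minimal sketch: locate TaqI recognition sites (TCGA, which TaqI cuts after the T) in a sequence and report the predicted restriction fragment lengths. The sequence below is a made-up toy string, not a real Meyerozyma ITS amplicon.

```python
def taqi_fragments(seq):
    """Predict fragment lengths from an in silico TaqI digest.
    TaqI recognizes TCGA and cuts after the T (T^CGA)."""
    seq = seq.upper()
    cuts = []
    i = seq.find("TCGA")
    while i != -1:
        cuts.append(i + 1)              # cut position inside the site
        i = seq.find("TCGA", i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# toy sequence with two TaqI sites
print(taqi_fragments("AAATCGACCCCTCGAGG"))   # → [4, 8, 5]
```

Comparing such predicted fragment patterns between candidate species is the essence of selecting a discriminating enzyme before running the digest in vitro.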

  10. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  11. High Resolution Melting as a rapid, reliable, accurate and cost-effective emerging tool for genotyping pathogenic bacteria and enhancing molecular epidemiological surveillance: a comprehensive review of the literature.

    PubMed

    Tamburro, M; Ripabelli, G

    2017-01-01

Rapid, reliable and accurate molecular typing methods are essential for outbreak detection and infectious disease control, for monitoring the evolution and dynamics of microbial populations, and for effective epidemiological surveillance. A method based on the analysis of the melting temperature of amplified products, known as High Resolution Melting (HRM) and introduced in 2002, has found applications in epidemiological studies, either for identification of bacterial species or for molecular typing, and sees extensive and increasing use in many research fields. The HRM method is based on the use of saturating third-generation dyes, advanced real-time PCR platforms, and bioinformatics tools. To describe, by a comprehensive review of the literature, the use, application and usefulness of HRM for the genotyping of bacterial pathogens in the context of epidemiological surveillance and public health. A literature search was carried out during July-August 2016 by consulting the biomedical databases PubMed/Medline, Scopus, EMBASE, and ISI Web of Science without limits. The search strategy used the following keywords: high resolution melting analysis and bacteria and genotyping or molecular typing. All the articles evaluating the application of HRM for bacterial pathogen genotyping were selected and reviewed, taking into account the objective of each study, the rationale explaining the use of this technology, and the main results obtained in comparison with gold standards and/or alternative methods, when available.
HRM method was extensively used for molecular typing of both Gram-positive and Gram-negative bacterial pathogens, representing a versatile genetic tool: a) to evaluate genetic diversity and subtype at species/subspecies level, based also on allele discrimination/identification and mutation screening; b) to recognize phylogenetic groupings (lineage, sublineage, subgroups); c) to identify antimicrobial resistance; d) to detect and
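At its core, HRM genotyping compares melt curves between isolates; the basic computation is locating the peak of the negative derivative -dF/dT of fluorescence with respect to temperature, which gives the melting temperature Tm. The sketch below uses a synthetic sigmoid melt curve; real HRM analysis works on normalized fluorescence from a saturating dye.

```python
import math

def melting_temperature(temps, fluor):
    """Tm = temperature at the peak of -dF/dT (central differences)."""
    dF = [-(fluor[i + 1] - fluor[i - 1]) / (temps[i + 1] - temps[i - 1])
          for i in range(1, len(temps) - 1)]
    return temps[1 + dF.index(max(dF))]

# synthetic melt curve: fluorescence drops sigmoidally around 84 C
temps = [70.0 + 0.1 * i for i in range(251)]
curve = [1.0 / (1.0 + math.exp((t - 84.0) / 0.8)) for t in temps]
print(round(melting_temperature(temps, curve), 1))
```

Small Tm shifts or altered curve shapes between samples are what HRM exploits to discriminate alleles and genotypes.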

  12. Cheap but accurate calculation of chemical reaction rate constants from ab initio data, via system-specific, black-box force fields

    NASA Astrophysics Data System (ADS)

    Steffen, Julien; Hartke, Bernd

    2017-10-01

    Building on the recently published quantum-mechanically derived force field (QMDFF) and its empirical valence bond extension, EVB-QMDFF, it is now possible to generate a reliable potential energy surface for any given elementary reaction step in an essentially black box manner. This requires a limited and pre-defined set of reference data near the reaction path and generates an accurate approximation of the reference potential energy surface, on and off the reaction path. This intermediate representation can be used to generate reaction rate data, with far better accuracy and reliability than with traditional approaches based on transition state theory (TST) or variational extensions thereof (VTST), even if those include sophisticated tunneling corrections. However, the additional expense at the reference level remains very modest. We demonstrate all this for three arbitrarily chosen example reactions.

  13. Reliability as Argument

    ERIC Educational Resources Information Center

    Parkes, Jay

    2007-01-01

    Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

  14. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as determining the camera relative exposure value (REV) and the threshold in the histogram, have hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image which is corrected for scattering effects by canopies and a reconstructed sky image from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals from sunrise to midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
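Once an image has been separated into canopy and sky pixels, the gap-fraction computation itself is a simple ratio. The sketch below uses a fixed brightness threshold on a toy grayscale image; the method in the abstract specifically avoids such manual thresholds by working on corrected raw images, so this only illustrates the final counting step.

```python
def gap_fraction(pixels, threshold=128):
    """Gap fraction = sky pixels / total pixels in a grayscale image,
    with a fixed (assumed) brightness threshold separating sky from canopy."""
    sky = sum(1 for p in pixels if p >= threshold)
    return sky / len(pixels)

# 4x4 toy grayscale image, flattened: 5 bright sky pixels out of 16
img = [250, 240, 30, 20,
       255, 10, 15, 25,
       200, 18, 22, 27,
       12, 14, 200, 19]
print(gap_fraction(img))   # → 0.3125
```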

  15. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the newest LHS module is a valuable enhancement of the program.
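The LHS idea the abstract credits with lower estimation error can be sketched in one dimension: divide [0, 1) into n equal strata and draw exactly one point per stratum, which stabilizes sample statistics relative to plain Monte Carlo. This is a generic illustration, not the NESSUS implementation.

```python
import random

def lhs_uniform(n, rng):
    """Latin hypercube sample on [0, 1): the interval is split into n
    equal strata and exactly one point is drawn per stratum."""
    strata = list(range(n))
    rng.shuffle(strata)
    return [(s + rng.random()) / n for s in strata]

def mc_uniform(n, rng):
    """Plain Monte Carlo sample on [0, 1) for comparison."""
    return [rng.random() for _ in range(n)]

rng = random.Random(7)
n = 200
lhs_err = abs(sum(lhs_uniform(n, rng)) / n - 0.5)
mc_err = abs(sum(mc_uniform(n, rng)) / n - 0.5)
# stratification bounds the LHS mean error by 0.5/n regardless of seed
print(round(lhs_err, 4), round(mc_err, 4))
```

The same one-point-per-stratum construction extends to multiple dimensions by pairing independently shuffled strata, which is what makes LHS attractive for multi-variable reliability responses.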

  16. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

We propose a new multi-target tracking approach which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow the contributions of noisy, erroneous or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. It runs in real time and the results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.

  17. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e. "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in the extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin with a non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  18. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin with a non-covalent but non-exchangeable CN− group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  19. Accurate Valence Ionization Energies from Kohn-Sham Eigenvalues with the Help of Potential Adjustors.

    PubMed

    Thierbach, Adrian; Neiss, Christian; Gallandi, Lukas; Marom, Noa; Körzdörfer, Thomas; Görling, Andreas

    2017-10-10

An accurate yet computationally very efficient and formally well justified approach to calculate molecular ionization potentials is presented and tested. The first as well as higher ionization potentials are obtained as the negatives of the Kohn-Sham eigenvalues of the neutral molecule after adjusting the eigenvalues by a recently introduced [Görling, Phys. Rev. B 2015, 91, 245120] potential adjustor for exchange-correlation potentials. Technically the method is very simple. Besides a Kohn-Sham calculation of the neutral molecule, only a second Kohn-Sham calculation of the cation is required. The eigenvalue spectrum of the neutral molecule is shifted such that the negative of the eigenvalue of the highest occupied molecular orbital equals the energy difference of the total electronic energies of the cation minus the neutral molecule. For the first ionization potential this simply amounts to a ΔSCF calculation. Then, the higher ionization potentials are obtained as the negatives of the correspondingly shifted Kohn-Sham eigenvalues. Importantly, this shift of the Kohn-Sham eigenvalue spectrum is not just ad hoc. In fact, it is formally necessary for the physically correct energetic adjustment of the eigenvalue spectrum, as it results from ensemble density-functional theory. An analogous approach for electron affinities is obtained and justified equally well. To illustrate the practical benefits of the approach, we calculate the valence ionization energies of test sets of small- and medium-sized molecules and photoelectron spectra of medium-sized electron-acceptor molecules using a typical semilocal (PBE) and two typical global hybrid functionals (B3LYP and PBE0). The potential-adjusted B3LYP and PBE0 eigenvalues yield valence ionization potentials that are in very good agreement with experimental values, reaching an accuracy as good as the best G0W0 methods, however at much lower computational cost. The potential adjusted PBE eigenvalues result in
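The eigenvalue shift described above is arithmetically trivial, which the following sketch makes explicit: compute the ΔSCF first ionization potential from the two total energies, shift the spectrum so that -eps_HOMO matches it, and read the higher ionization potentials off the shifted eigenvalues. All numbers are invented for illustration.

```python
def adjusted_ionization_potentials(eigenvalues_neutral, e_neutral, e_cation):
    """Shift the occupied Kohn-Sham eigenvalues (HOMO last) so that
    -eps_HOMO equals the DeltaSCF first IP, E(cation) - E(neutral);
    higher IPs are the negatives of the shifted lower eigenvalues."""
    ip1 = e_cation - e_neutral                 # DeltaSCF first IP
    shift = -ip1 - eigenvalues_neutral[-1]     # align -HOMO with IP1
    return [-(eps + shift) for eps in eigenvalues_neutral]

# hypothetical eigenvalues and total energies, in eV
eps = [-25.1, -15.3, -12.0, -9.2]
ips = adjusted_ionization_potentials(eps, e_neutral=-100.0, e_cation=-89.5)
print(ips)   # last entry ~ 10.5 eV, the DeltaSCF first IP
```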

  20. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared with relying on the prediction of a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global-based BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
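Bayesian model averaging weights each ensemble member by its posterior probability given the training observations and then forms a weighted prediction. A minimal sketch with equal model priors, where the weights come from assumed model log-likelihoods on the training event (the stage values and likelihoods below are illustrative, not LISFLOOD-FP output):

```python
import math

def bma_combine(predictions, log_likelihoods):
    """BMA with equal priors: posterior weights proportional to each
    model's likelihood; BMA prediction is the weighted mean."""
    m = max(log_likelihoods)
    w = [math.exp(l - m) for l in log_likelihoods]   # numerically stable
    s = sum(w)
    weights = [x / s for x in w]
    return sum(p * wt for p, wt in zip(predictions, weights)), weights

stage_preds = [4.1, 3.8, 4.6]    # water stages (m) from 3 model configs
loglik = [-2.0, -1.0, -4.0]      # assumed fit to observed stages
pred, weights = bma_combine(stage_preds, loglik)
print(round(pred, 3), [round(x, 3) for x in weights])
```

The better-fitting second model dominates the weighted prediction, which is exactly how BMA down-weights poorly calibrated ensemble members.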

  1. Electronic device for endosurgical skills training (EDEST): study of reliability.

    PubMed

    Pagador, J B; Uson, J; Sánchez, M A; Moyano, J L; Moreno, J; Bustos, P; Mateos, J; Sánchez-Margallo, F M

    2011-05-01

Minimally invasive surgery procedures are commonly used in many surgical practices, but surgeons need specific training models and devices due to their difficulty and complexity. In this paper, an innovative electronic device for endosurgical skills training (EDEST) is presented, and a study of its reliability was performed. Different electronic components were used to compose this new training device. The EDEST focuses on two basic laparoscopic tasks: triangulation and coordination manoeuvres. Configuration and statistical software was developed to complement the functionality of the device, and a calibration method was used to assure its proper working. A total of 35 subjects (8 experts and 27 novices) were used to check the reliability of the system using MTBF (mean time between failures) analysis. Configuration values for the triangulation and coordination exercises were calculated as a 0.5 s limit threshold and an 800-11,000 lux range of light intensity, respectively. Zero errors in 1,050 executions (0%) for triangulation and 21 errors in 5,670 executions (0.37%) for coordination were obtained, giving an MTBF of 2.97 h. The results show that the reliability of the EDEST device is acceptable when used under previously defined light conditions. These results, along with previous work, demonstrate that the EDEST device can help surgeons during the first stages of training.
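The MTBF figure reported above follows from a one-line definition, sketched here. The 62.4 device-hours in the example are a hypothetical figure chosen only so the ratio lands near the reported 2.97 h; the paper does not state the total operating time.

```python
def mtbf(total_operating_hours, n_failures):
    """Mean time between failures: operating time divided by the
    number of failures; infinite when no failure was observed."""
    if n_failures == 0:
        return float("inf")
    return total_operating_hours / n_failures

# e.g. ~62.4 device-hours of exercises with 21 recorded errors
print(round(mtbf(62.4, 21), 2))   # → 2.97
```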

  2. Predicting laser weld reliability with stochastic reduced-order models

    DOE PAGES

    Emery, John M.; Field, Richard V.; Foulk, James W.; ...

    2015-05-26

Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method for constructing a surrogate model, based on stochastic reduced-order models, is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking; because of the ductility of the welds, necking, and thus peak load, plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with traditional Monte Carlo simulation, with rather remarkable results.

  3. The Reliability and Validity of the Computerized Double Inclinometer in Measuring Lumbar Mobility

    PubMed Central

    MacDermid, Joy Christine; Arumugam, Vanitha; Vincent, Joshua Israel; Carroll, Krista L

    2014-01-01

Study Design : Repeated measures reliability/validity study. Objectives : To determine the concurrent validity, test-retest, inter-rater and intra-rater reliability of lumbar flexion and extension measurements using the Tracker M.E. computerized dual inclinometer (CDI) in comparison to the modified-modified Schober (MMS). Summary of Background : Numerous studies have evaluated the reliability and validity of the various methods of measuring spinal motion, but the results are inconsistent. Differences in equipment and techniques make it difficult to correlate results. Methods : Twenty subjects with back pain and twenty without back pain were selected through convenience sampling. Two examiners measured sagittal-plane lumbar range of motion for each subject. Two separate tests with the CDI and one test with the MMS were conducted. Each test consisted of three trials. Instrument and examiner order was randomly assigned. Intra-class correlation coefficients (ICCs) and Pearson correlation coefficients (r) were used to calculate reliability and concurrent validity, respectively. Results : Intra-trial reliability was high to very high for both the CDI (ICCs 0.85 - 0.96) and MMS (ICCs 0.84 - 0.98). However, reliability was poor to moderate when the CDI unit had to be repositioned, either by the same rater (ICCs 0.16 - 0.59) or a different rater (ICCs 0.45 - 0.52). Inter-rater reliability for the MMS was moderate to high (ICCs 0.75 - 0.82), which exceeded the moderate correlation obtained for the CDI (ICCs 0.45 - 0.52). Correlations between the CDI and MMS were poor for flexion (0.32; p<0.05) and poor to moderate (-0.42 - -0.51; p<0.05) for extension measurements. Conclusion : When using the CDI, an average of subsequent tests is required to obtain moderate reliability. The MMS was more reliable than the CDI. The MMS and the CDI measure lumbar movement on different metrics that are not highly related to each other. PMID:25352928
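The intra-class correlation is the workhorse statistic in this abstract. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure, after Shrout and Fleiss) is shown below on invented two-rater data; it is not necessarily the exact ICC variant or data used in the study.

```python
def icc_2_1(ratings):
    """ICC(2,1), two-way random effects, absolute agreement, single
    measure. ratings[i][j] = score for subject i from rater j."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical lumbar-flexion readings (degrees) from two raters
ratings = [[40, 42], [55, 54], [30, 33], [61, 60], [48, 50]]
print(round(icc_2_1(ratings), 3))   # high agreement between raters
```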

  4. Inter- and intra-observer reliability of clinical movement-control tests for marines

    PubMed Central

    2012-01-01

    were inconsistent for lower-extremity pain. Conclusions Our results suggest that clinical tests of movement control of back and hip are reliable for use in screening protocols using several observers with marines. However, test-retest reproducibility was less accurate, which should be considered in follow-up evaluations. The results also indicate that combinations of low- and high-threshold tests have discriminative validity for prior back pain, but were inconclusive for lower-extremity pain. PMID:23273285

  5. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  6. Reliability and validity of gait analysis by android-based smartphone.

    PubMed

    Nishiguchi, Shu; Yamada, Minoru; Nagai, Koutatsu; Mori, Shuhei; Kajiwara, Yuu; Sonoda, Takuya; Yoshimura, Kazuya; Yoshitomi, Hiroyuki; Ito, Hiromu; Okamoto, Kazuya; Ito, Tatsuaki; Muto, Shinyo; Ishihara, Tatsuya; Aoyama, Tomoki

    2012-05-01

    Smartphones are very common devices in daily life that have a built-in tri-axial accelerometer. Similar to previously developed accelerometers, smartphones can be used to assess gait patterns. However, few gait analyses have been performed using smartphones, and their reliability and validity have not been evaluated yet. The purpose of this study was to evaluate the reliability and validity of a smartphone accelerometer. Thirty healthy young adults participated in this study. They walked 20 m at their preferred speeds, and their trunk accelerations were measured using a smartphone and a tri-axial accelerometer that was secured over the L3 spinous process. We developed a gait analysis application and installed it in the smartphone to measure the acceleration. After signal processing, we calculated the gait parameters of each measurement terminal: peak frequency (PF), root mean square (RMS), autocorrelation peak (AC), and coefficient of variance (CV) of the acceleration peak intervals. Remarkable consistency was observed in the test-retest reliability of all the gait parameter results obtained by the smartphone (p<0.001). All the gait parameter results obtained by the smartphone showed statistically significant and considerable correlations with the same parameter results obtained by the tri-axial accelerometer (PF r=0.99, RMS r=0.89, AC r=0.85, CV r=0.82; p<0.01). Our study indicates that the smartphone with gait analysis application used in this study has the capacity to quantify gait parameters with a degree of accuracy that is comparable to that of the tri-axial accelerometer.
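    The four parameters named in this abstract (PF, RMS, AC, CV) can be computed from a raw acceleration trace along the following lines. This is a hedged sketch, not the authors' implementation: the detrending, peak-detection threshold, and autocorrelation lag window are assumptions.

```python
import numpy as np

def gait_parameters(acc, fs):
    """Compute PF, RMS, AC, and CV from a trunk-acceleration trace.

    acc : 1-D acceleration signal; fs : sampling rate in Hz.
    """
    acc = np.asarray(acc, dtype=float)
    acc = acc - acc.mean()  # remove the gravity/DC component

    # Peak frequency (PF): dominant frequency of the power spectrum.
    power = np.abs(np.fft.rfft(acc)) ** 2
    freqs = np.fft.rfftfreq(acc.size, d=1.0 / fs)
    pf = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin

    # Root mean square (RMS) of the acceleration.
    rms = np.sqrt(np.mean(acc ** 2))

    # Autocorrelation peak (AC): maximum of the normalized autocorrelation
    # within a lag window spanning roughly one step (window is an assumption).
    r = np.correlate(acc, acc, mode="full")[acc.size - 1:]
    r = r / r[0]
    min_lag = int(0.4 * fs)  # assume consecutive peaks are > 0.4 s apart
    ac = r[min_lag:min_lag + int(fs)].max()

    # CV of acceleration-peak intervals: step-timing variability.
    thresh = acc.std()
    peaks = [i for i in range(1, acc.size - 1)
             if acc[i] > acc[i - 1] and acc[i] >= acc[i + 1] and acc[i] > thresh]
    intervals = np.diff(peaks) / fs
    cv = intervals.std() / intervals.mean() if intervals.size else float("nan")
    return pf, rms, ac, cv
```

    As a sanity check, a pure 2 Hz sinusoid sampled at 100 Hz yields PF near 2.0, RMS near 0.71, and near-zero interval variability.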

  7. Accurate phylogenetic classification of DNA fragments based on sequence composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

  8. Design for reliability: NASA reliability preferred practices for design and test

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.

  9. The determination of accurate dipole polarizabilities alpha and gamma for the noble gases

    NASA Technical Reports Server (NTRS)

    Rice, Julia E.; Taylor, Peter R.; Lee, Timothy J.; Almlof, Jan

    1991-01-01

    Accurate static dipole polarizabilities alpha and gamma of the noble gases He through Xe were determined using wave functions of similar quality for each system. Good agreement with experimental data for the static polarizability gamma was obtained for Ne and Xe, but not for Ar and Kr. Calculations suggest that the experimental values for these latter gases are too low.

  10. Rollover risk prediction of heavy vehicles by reliability index and empirical modelling

    NASA Astrophysics Data System (ADS)

    Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles

    2018-03-01

    This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure of the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models. This is done using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
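    The core computation described here, the probability that the maximum LTR exceeds a threshold, estimated through a fast surrogate in place of the full vehicle dynamics, can be sketched with Monte Carlo sampling. The surrogate below is a toy polynomial stand-in for the paper's SVM model; every distribution, constant, and parameter name is illustrative, not taken from the paper.

```python
import numpy as np
from statistics import NormalDist

def rollover_risk(surrogate, sampler, threshold=1.0, n=100_000, seed=0):
    """Monte Carlo estimate of P(max LTR > threshold), plus the matching
    reliability index beta = -Phi^{-1}(p_fail)."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)                       # random operating conditions
    p_fail = float(np.mean(surrogate(x) > threshold))
    beta = -NormalDist().inv_cdf(p_fail) if 0.0 < p_fail < 1.0 else float("inf")
    return p_fail, beta

# Toy stand-in for the SVM surrogate: LTR rising with speed^2 * curvature,
# scaled so the nominal operating point sits below the rollover threshold.
def toy_surrogate(x):
    speed, curvature = x[:, 0], x[:, 1]
    return speed ** 2 * curvature / 245.0

def toy_sampler(rng, n):
    speed = rng.normal(60.0, 5.0, n)          # km/h approaching the bend
    curvature = rng.normal(0.05, 0.005, n)    # 1/m, bend curvature
    return np.column_stack([speed, curvature])

p_fail, beta = rollover_risk(toy_surrogate, toy_sampler)
```

    A small p_fail maps to a large reliability index, so beta can serve directly as the warning-system criterion.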

  11. Anchoring the Population II Distance Scale: Accurate Ages for Globular Clusters

    NASA Technical Reports Server (NTRS)

    Chaboyer, Brian C.; Chaboyer, Brian C.; Carney, Bruce W.; Latham, David W.; Dunca, Douglas; Grand, Terry; Layden, Andy; Sarajedini, Ataollah; McWilliam, Andrew; Shao, Michael

    2004-01-01

    The metal-poor stars in the halo of the Milky Way galaxy were among the first objects formed in our Galaxy. These Population II stars are the oldest objects in the universe whose ages can be accurately determined. Age determinations for these stars allow us to set a firm lower limit to the age of the universe and to probe the early formation history of the Milky Way. The age of the universe determined from studies of Population II stars may be compared to the expansion age of the universe and used to constrain cosmological models. The largest uncertainty in estimates for the ages of stars in our halo is due to the uncertainty in the distance scale to Population II objects. We propose to obtain accurate parallaxes to a number of Population II objects (globular clusters and field stars in the halo), resulting in a significant improvement in the Population II distance scale and greatly reducing the uncertainty in the estimated ages of the oldest stars in our galaxy. At the present time, the oldest stars are estimated to be 12.8 Gyr old, with an uncertainty of approximately 15%. The SIM observations obtained by this key project, combined with the supporting theoretical research and ground-based observations outlined in this proposal, will reduce the uncertainty in the age estimates to 5%.

  12. A Pilot Study Examining the Test-Retest and Internal Consistency Reliability of the ABLLS-R

    ERIC Educational Resources Information Center

    Partington, James W.; Bailey, Autumn; Partington, Scott W.

    2018-01-01

    The literature contains a variety of assessment tools for measuring the skills of individuals with autism or other developmental delays, but most lack adequate empirical evidence supporting their reliability and validity. The current pilot study sought to examine the reliability of scores obtained from the Assessment of Basic Language and Learning…

  13. Probabilistic and structural reliability analysis of laminated composite structures based on the IPACS code

    NASA Technical Reports Server (NTRS)

    Sobel, Larry; Buttitta, Claudio; Suarez, James

    1993-01-01

    Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, 1M6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.

  14. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
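    The role of inter-version dependence in N-version reliability can be illustrated with a far simpler model than the paper's extended stochastic Petri nets: correlated version failures drawn through a Gaussian copula, with majority voting over three versions. All probabilities and the copula itself are illustrative assumptions, not the paper's model.

```python
import numpy as np
from statistics import NormalDist

def majority_vote_failure(p_fail=0.01, rho=0.5, n_versions=3,
                          trials=200_000, seed=1):
    """Failure probability of an N-version majority vote when individual
    version failures are equicorrelated (Gaussian-copula sketch)."""
    rng = np.random.default_rng(seed)
    # Latent equicorrelated Gaussians: z_i = sqrt(rho)*shared + sqrt(1-rho)*own
    # gives corr(z_i, z_j) = rho while keeping unit marginal variance.
    shared = rng.standard_normal((trials, 1))
    own = rng.standard_normal((trials, n_versions))
    z = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * own
    # A version fails when its latent variable falls below the p_fail
    # quantile, so each version still fails with marginal probability p_fail.
    fails = z < NormalDist().inv_cdf(p_fail)
    return float(np.mean(fails.sum(axis=1) > n_versions // 2))
```

    With rho = 0 this reproduces the independent-versions figure of roughly 3p^2 for three versions; any positive correlation inflates the system failure probability, which is why independence among N versions matters so much.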

  15. Reliable and fast quantitative analysis of active ingredient in pharmaceutical suspension using Raman spectroscopy.

    PubMed

    Park, Seok Chan; Kim, Minjung; Noh, Jaegeun; Chung, Hoeil; Woo, Youngah; Lee, Jonghwa; Kemper, Mark S

    2007-06-12

    The concentration of acetaminophen in a turbid pharmaceutical suspension has been measured successfully using Raman spectroscopy. The spectrometer was equipped with a large spot probe which enabled the coverage of a representative area during sampling. This wide area illumination (WAI) scheme (coverage area 28.3 mm²) for Raman data collection proved to be more reliable for the compositional determination of these pharmaceutical suspensions, especially when the samples were turbid. The reproducibility of measurement using the WAI scheme was compared to that of using a conventional small-spot scheme which employed a much smaller illumination area (about 100 µm spot size). A layer of isobutyric anhydride was placed in front of the sample vials to correct the variation in the Raman intensity due to the fluctuation of laser power. Corrections were accomplished using the isolated carbonyl band of isobutyric anhydride. The acetaminophen concentrations of prediction samples were accurately estimated using a partial least squares (PLS) calibration model. The prediction accuracy was maintained even with changes in laser power. It was noted that the prediction performance was somewhat degraded for turbid suspensions with high acetaminophen contents. When comparing the results of reproducibility obtained with the WAI scheme and those obtained using the conventional scheme, it was concluded that the quantitative determination of the active pharmaceutical ingredient (API) in turbid suspensions is much improved when employing a larger laser coverage area. This is presumably due to the improvement in representative sampling.
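    The intensity-correction step described here, dividing each spectrum by an internal-standard band before building the calibration, can be sketched as follows. The band positions are placeholder index ranges, and for brevity the paper's PLS model is replaced by a simple linear fit on one corrected band; both substitutions are assumptions of this sketch.

```python
import numpy as np

def correct_and_calibrate(spectra, istd_band, api_band, concentrations):
    """Scale each spectrum by its internal-standard band (the isobutyric
    anhydride carbonyl band in the paper), then fit a linear calibration
    on the corrected analyte band.

    spectra        : (n_samples, n_points) Raman intensities
    istd_band,
    api_band       : index slices of the reference and analyte bands
                     (placeholder positions, not real Raman shifts)
    concentrations : known API concentrations of the calibration set
    """
    istd = spectra[:, istd_band].sum(axis=1)       # internal-standard area
    corrected = spectra / istd[:, None]            # removes laser-power drift
    signal = corrected[:, api_band].sum(axis=1)    # corrected analyte area
    slope, intercept = np.polyfit(signal, concentrations, 1)
    return corrected, slope, intercept
```

    Because the laser-power factor multiplies both bands equally, it cancels in the ratio, which is the whole point of the internal standard.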

  16. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  17. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  18. Accurate age estimation in small-scale societies

    PubMed Central

    Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.

    2017-01-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
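    The conditional structure of this Gibbs sampler is simple enough to sketch: given the youngest-to-oldest ranking, each individual's age is uniform on its own age range intersected with the interval between its rank neighbours. The flat priors and the initialisation below are simplifying assumptions; the published model is richer than this sketch.

```python
import numpy as np

def gibbs_ages(lo, hi, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler sketch for individual ages under a known ranking.

    lo, hi : per-individual age bounds (years), listed in rank order and
             assumed mutually consistent with the ranking.
    Conditional on the other ages, age i is uniform on
    [max(lo[i], age[i-1]), min(hi[i], age[i+1])].
    """
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    n = lo.size
    rng = np.random.default_rng(seed)
    # Feasible, strictly increasing starting point inside the bounds.
    ages = np.maximum.accumulate(lo) + 1e-3 * np.arange(n)
    samples = []
    for it in range(n_iter):
        for i in range(n):
            a = max(lo[i], ages[i - 1] if i > 0 else -np.inf)
            b = min(hi[i], ages[i + 1] if i < n - 1 else np.inf)
            ages[i] = rng.uniform(a, b)
        if it >= burn:
            samples.append(ages.copy())
    return np.array(samples)
```

    The retained draws form per-individual posterior age distributions, from which demographic quantities such as age-specific fertility can then be estimated.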

  19. Accurate Emission Line Diagnostics at High Redshift

    NASA Astrophysics Data System (ADS)

    Jones, Tucker

    2017-08-01

    How do the physical conditions of high redshift galaxies differ from those seen locally? Spectroscopic surveys have invested hundreds of nights of 8- and 10-meter telescope time as well as hundreds of Hubble orbits to study evolution in the galaxy population at redshifts z ~ 0.5-4 using rest-frame optical strong emission line diagnostics. These surveys reveal evolution in the gas excitation with redshift but the physical cause is not yet understood. Consequently there are large systematic errors in derived quantities such as metallicity. We have used direct measurements of gas density, temperature, and metallicity in a unique sample at z=0.8 to determine reliable diagnostics for high redshift galaxies. Our measurements suggest that offsets in emission line ratios at high redshift are primarily caused by high N/O abundance ratios. However, our ground-based data cannot rule out other interpretations. Spatially resolved Hubble grism spectra are needed to distinguish between the remaining plausible causes such as active nuclei, shocks, diffuse ionized gas emission, and HII regions with escaping ionizing flux. Identifying the physical origin of evolving excitation will allow us to build the necessary foundation for accurate measurements of metallicity and other properties of high redshift galaxies. Only then can we exploit the wealth of data from current surveys and near-future JWST spectroscopy to understand how galaxies evolve over time.

  20. Accurate age estimation in small-scale societies.

    PubMed

    Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G

    2017-08-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire.