Sample records for accuracy studies tool

  1. Quality Assessment of Comparative Diagnostic Accuracy Studies: Our Experience Using a Modified Version of the QUADAS-2 Tool

    ERIC Educational Resources Information Center

    Wade, Ros; Corbett, Mark; Eastwood, Alison

    2013-01-01

    Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As…

  2. [Risk of bias assessment: (6) A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2)].

    PubMed

    Qu, Y J; Yang, Z R; Sun, F; Zhan, S Y

    2018-04-10

    This paper introduces the revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2), describing its development and its differences from the original QUADAS, and illustrates its application to a published diagnostic accuracy study included in a systematic review and meta-analysis. QUADAS-2 represents a considerable improvement over the original tool: confusing items from QUADAS were removed, and the original quality score was replaced by ratings of risk of bias and applicability. These ratings are made across four main domains with minimal overlap, by answering signalling questions within each domain. Rating risk of bias and applicability as 'high', 'low' or 'unclear' is consistent with the Cochrane risk-of-bias assessment for intervention studies and replaces the QUADAS total quality score. QUADAS-2 can also be applied to diagnostic accuracy studies in which follow-up, rather than prognosis, forms part of the reference ('gold') standard. It is useful for assessing the overall methodological quality of a study, although it is more time-consuming than the original QUADAS. However, QUADAS-2 still needs modification for comparative diagnostic accuracy studies; users are encouraged to follow updates and provide feedback online.

  3. Diagnostic Accuracy of Fall Risk Assessment Tools in People With Diabetic Peripheral Neuropathy

    PubMed Central

    Pohl, Patricia S.; Mahnken, Jonathan D.; Kluding, Patricia M.

    2012-01-01

    Background Diabetic peripheral neuropathy affects nearly half of individuals with diabetes and leads to increased fall risk. Evidence addressing fall risk assessment for these individuals is lacking. Objective The purpose of this study was to identify which of 4 functional mobility fall risk assessment tools best discriminates, in people with diabetic peripheral neuropathy, between recurrent “fallers” and those who are not recurrent fallers. Design A cross-sectional study was conducted. Setting The study was conducted in a medical research university setting. Participants The participants were a convenience sample of 36 individuals between 40 and 65 years of age with diabetic peripheral neuropathy. Measurements Fall history was assessed retrospectively and was the criterion standard. Fall risk was assessed using the Functional Reach Test, the Timed “Up & Go” Test, the Berg Balance Scale, and the Dynamic Gait Index. Sensitivity, specificity, positive and negative likelihood ratios, and overall diagnostic accuracy were calculated for each fall risk assessment tool. Receiver operating characteristic curves were used to estimate modified cutoff scores for each fall risk assessment tool; indexes then were recalculated. Results Ten of the 36 participants were classified as recurrent fallers. When traditional cutoff scores were used, the Dynamic Gait Index and Functional Reach Test demonstrated the highest sensitivity at only 30%; the Dynamic Gait Index also demonstrated the highest overall diagnostic accuracy. When modified cutoff scores were used, all tools demonstrated improved sensitivity (80% or 90%). Overall diagnostic accuracy improved for all tests except the Functional Reach Test; the Timed “Up & Go” Test demonstrated the highest diagnostic accuracy at 88.9%. Limitations The small sample size and retrospective fall history assessment were limitations of the study. Conclusions Modified cutoff scores improved diagnostic accuracy for 3 of 4 fall risk
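
The indexes reported above (sensitivity, specificity, likelihood ratios, overall accuracy) all derive from a single 2×2 table of test result versus criterion standard. A minimal sketch, using illustrative counts chosen only so the overall accuracy matches the 88.9% figure, not the study's actual data:

```python
# Diagnostic accuracy indexes from a 2x2 table (illustrative counts only).
tp, fn = 9, 1    # recurrent fallers flagged / missed by the tool
fp, tn = 3, 23   # non-fallers flagged / correctly cleared

sensitivity = tp / (tp + fn)                    # 0.90
specificity = tn / (tn + fp)                    # ~0.885
lr_positive = sensitivity / (1 - specificity)   # how much a positive result raises the odds
lr_negative = (1 - sensitivity) / specificity   # how much a negative result lowers the odds
accuracy = (tp + tn) / (tp + fn + fp + tn)      # 32/36 ~= 0.889

print(f"Sens {sensitivity:.1%}, Spec {specificity:.1%}, "
      f"LR+ {lr_positive:.1f}, LR- {lr_negative:.2f}, Accuracy {accuracy:.1%}")
```

Lowering a tool's cutoff score, as the study did, trades specificity for sensitivity by moving counts from the fn cell to the tp cell (and from tn to fp).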

  4. Testing a tool for the classification of study designs in systematic reviews of interventions and exposures showed moderate reliability and low accuracy.

    PubMed

    Hartling, Lisa; Bond, Kenneth; Santaguida, P Lina; Viswanathan, Meera; Dryden, Donna M

    2011-08-01

    To develop and test a study design classification tool. We contacted relevant organizations and individuals to identify tools used to classify study designs and ranked these using predefined criteria. The highest ranked tool was a design algorithm developed, but no longer advocated, by the Cochrane Non-Randomized Studies Methods Group; this was modified to include additional study designs and decision points. We developed a reference classification for 30 studies; 6 testers applied the tool to these studies. Interrater reliability (Fleiss' κ) and accuracy against the reference classification were assessed. The tool was further revised and retested. Initial reliability was fair among the testers (κ=0.26) and the reference standard raters (κ=0.33). Testing after revisions showed improved reliability (κ=0.45, moderate agreement) with improved, but still low, accuracy. The most common disagreements were whether the study design was experimental (5 of 15 studies), and whether there was a comparison of any kind (4 of 15 studies). Agreement was higher among testers who had completed graduate level training versus those who had not. The moderate reliability and low accuracy may be due to a lack of clarity and comprehensiveness of the tool, inadequate reporting of the studies, and variability in tester characteristics. The results may not be generalizable to all published studies, as the test studies were selected because they had posed challenges for previous reviewers with respect to their design classification. Application of such a tool should be accompanied by training, pilot testing, and context-specific decision rules. Copyright © 2011 Elsevier Inc. All rights reserved.
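
Fleiss' κ, the interrater statistic used above, generalizes Cohen's κ to more than two raters: it compares the observed proportion of agreeing rater pairs to the agreement expected from the marginal category frequencies. A stdlib-only sketch (the rating matrix is invented for illustration, not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects-by-categories count matrix.

    ratings[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters.
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])

    # Observed agreement: mean per-subject proportion of agreeing rater pairs.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_subjects

    # Expected agreement from the marginal category proportions.
    totals = [sum(row[j] for row in ratings) for j in range(n_categories)]
    p_e = sum((t / (n_subjects * n_raters)) ** 2 for t in totals)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 3 studies classified by 3 raters into 2 design categories.
kappa = fleiss_kappa([[3, 0], [2, 1], [0, 3]])
print(f"kappa = {kappa:.2f}")  # kappa = 0.55, moderate agreement on this toy matrix
```

Values near 0.26-0.33 ("fair") and 0.45 ("moderate"), as reported in the abstract, follow the conventional Landis-Koch interpretation bands.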

  5. Accuracy of Nutritional Screening Tools in Assessing the Risk of Undernutrition in Hospitalized Children.

    PubMed

    Huysentruyt, Koen; Devreker, Thierry; Dejonckheere, Joachim; De Schepper, Jean; Vandenplas, Yvan; Cools, Filip

    2015-08-01

    The aim of the present study was to evaluate the predictive accuracy of screening tools for assessing nutritional risk in hospitalized children in developed countries. The study involved a systematic review of literature (MEDLINE, EMBASE, and Cochrane Central databases up to January 17, 2014) of studies on the diagnostic performance of pediatric nutritional screening tools. Methodological quality was assessed using a modified QUADAS tool. Sensitivity and specificity were calculated for each screening tool per validation method. A meta-analysis was performed to estimate the risk ratio of different screening result categories of being truly at nutritional risk. A total of 11 studies were included on ≥1 of the following screening tools: Pediatric Nutritional Risk Score, Screening Tool for the Assessment of Malnutrition in Paediatrics, Paediatric Yorkhill Malnutrition Score, and Screening Tool for Risk on Nutritional Status and Growth. Because of variation in reference standards, a direct comparison of the predictive accuracy of the screening tools was not possible. A meta-analysis was performed on 1629 children from 7 different studies. The risk ratio of being truly at nutritional risk was 0.349 (95% confidence interval [CI] 0.16-0.78) for children in the low versus moderate screening category and 0.292 (95% CI 0.19-0.44) in the moderate versus high screening category. There is insufficient evidence to choose 1 nutritional screening tool over another based on their predictive accuracy. The estimated risk of being at "true nutritional risk" increases with each category of screening test result. Each screening category should be linked to a specific course of action, although further research is needed.

  6. Diagnostic test accuracy of nutritional tools used to identify undernutrition in patients with colorectal cancer: a systematic review.

    PubMed

    Håkonsen, Sasja Jul; Pedersen, Preben Ulrich; Bath-Hextall, Fiona; Kirkpatrick, Pamela

    2015-05-15

    Effective nutritional screening, nutritional care planning and nutritional support are essential in all settings, and there is no doubt that a health service seeking to increase safety and clinical effectiveness must take nutritional care seriously. Screening and early detection of malnutrition is crucial in identifying patients at nutritional risk. There is a high prevalence of malnutrition in hospitalized patients undergoing treatment for colorectal cancer. To synthesize the best available evidence regarding the diagnostic test accuracy of nutritional tools (sensitivity and specificity) used to identify malnutrition (specifically undernutrition) in patients with colorectal cancer (such as the Malnutrition Screening Tool and Nutritional Risk Index) compared to reference tests (such as the Subjective Global Assessment or Patient Generated Subjective Global Assessment). Patients with colorectal cancer requiring either (or all) surgery, chemotherapy and/or radiotherapy in secondary care. Focus of the review: The diagnostic test accuracy of validated assessment tools/instruments (such as the Malnutrition Screening Tool and Nutritional Risk Index) in the diagnosis of malnutrition (specifically undernutrition) in patients with colorectal cancer, relative to reference tests (Subjective Global Assessment or Patient Generated Subjective Global Assessment). Types of studies: Diagnostic test accuracy studies regardless of study design. Studies published in English, German, Danish, Swedish and Norwegian were considered for inclusion in this review. Databases were searched from their inception to April 2014. Methodological quality was determined using the Quality Assessment of Diagnostic Accuracy Studies checklist. Data were collected using the data extraction form: the Standards for Reporting Studies of Diagnostic Accuracy checklist for the reporting of studies of diagnostic accuracy. The accuracy of diagnostic tests is presented in terms of sensitivity, specificity, positive…

  7. Evaluation of accuracy of IHI Trigger Tool in identifying adverse drug events: a prospective observational study.

    PubMed

    das Dores Graciano Silva, Maria; Martins, Maria Auxiliadora Parreiras; de Gouvêa Viana, Luciana; Passaglia, Luiz Guilherme; de Menezes, Renata Rezende; de Queiroz Oliveira, João Antonio; da Silva, Jose Luiz Padilha; Ribeiro, Antonio Luiz Pinho

    2018-06-06

    Adverse drug events (ADEs) can seriously compromise the safety and quality of care provided to hospitalized patients, requiring the adoption of accurate methods to monitor them. We sought to prospectively evaluate the accuracy of the triggers proposed by the Institute for Healthcare Improvement (IHI) for identifying ADEs. A prospective study was conducted in a public university hospital, in 2015, with patients ≥18 years. Triggers proposed by IHI and clinical alterations suspected to be ADEs were searched daily. The number of days in which the patient was hospitalized was considered as the unit of measure to evaluate the accuracy of each trigger. Three hundred patients were included in this study. Mean age was 56.3 years (standard deviation (SD) 16.0), and 154 (51.3%) were female. The frequency of patients with ADEs was 24.7% and with at least one trigger was 53.3%. Among patients who had at least one trigger, the most frequent triggers were antiemetics (57.5%) and "abrupt medication stop" (31.8%). Triggers' sensitivity ranged from 0.3% to 11.8% and the positive predictive value ranged from 1.2% to 27.3%. Specificity and negative predictive value were greater than 86%. Most patients identified by the presence of triggers did not have ADEs (64.4%). No triggers were identified in 40 (38.5%) ADEs. The IHI Trigger Tool did not show good accuracy in detecting ADEs in this prospective study. The adoption of combined strategies could enhance effectiveness in identifying patient safety flaws. Further discussion might contribute to improve trigger usefulness in clinical practice. This article is protected by copyright. All rights reserved.
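
The pattern reported above (high specificity and negative predictive value, yet a very low positive predictive value) is exactly what Bayes' theorem predicts when the event is uncommon. A small sketch with made-up numbers, not the study's estimates:

```python
# Predictive values from sensitivity, specificity, and prevalence (Bayes'
# theorem). All numbers below are illustrative, not taken from the study.
def ppv(sens, spec, prev):
    """Positive predictive value: P(event | positive trigger)."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    """Negative predictive value: P(no event | negative trigger)."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

sens, spec, prev = 0.10, 0.90, 0.05   # low-sensitivity trigger, rare event
print(f"PPV = {ppv(sens, spec, prev):.1%}")   # most trigger-positives are false alarms
print(f"NPV = {npv(sens, spec, prev):.1%}")   # negatives remain reassuring
```

With these assumed inputs the PPV is only 5% despite 90% specificity, because false positives from the large event-free population swamp the few true positives.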

  8. A systematic review of the PTSD Checklist's diagnostic accuracy studies using QUADAS.

    PubMed

    McDonald, Scott D; Brown, Whitney L; Benesek, John P; Calhoun, Patrick S

    2015-09-01

    Despite the popularity of the PTSD Checklist (PCL) as a clinical screening test, there has been no comprehensive quality review of studies evaluating its diagnostic accuracy. A systematic quality assessment of 22 diagnostic accuracy studies of the English-language PCL using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) assessment tool was conducted to examine (a) the quality of diagnostic accuracy studies of the PCL, and (b) whether quality has improved since the 2003 STAndards for the Reporting of Diagnostic accuracy studies (STARD) initiative regarding reporting guidelines for diagnostic accuracy studies. Three raters independently applied the QUADAS tool to each study, and a consensus among the 4 authors is reported. Findings indicated that although studies generally met standards in several quality areas, there is still room for improvement. Areas for improvement include establishing representativeness, adequately describing clinical and demographic characteristics of the sample, and presenting better descriptions of important aspects of test and reference standard execution. Only 2 studies met each of the 14 quality criteria. In addition, study quality has not appreciably improved since the publication of the STARD Statement in 2003. Recommendations for the improvement of diagnostic accuracy studies of the PCL are discussed. (c) 2015 APA, all rights reserved.

  9. Evaluating radiographers' diagnostic accuracy in screen-reading mammograms: what constitutes a quality study?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au; Poulos, Ann E; Westmead Breast Cancer Institute, Westmead, New South Wales

    The aim of this study was to first evaluate the quality of studies investigating the diagnostic accuracy of radiographers as mammogram screen-readers and then to develop an adapted tool for determining the quality of screen-reading studies. A literature search was used to identify relevant studies and a quality evaluation tool constructed by combining the criteria for quality of Whiting, Rutjes, Dinnes et al. and Brealey and Westwood. This constructed tool was then applied to the studies and subsequently adapted specifically for use in evaluating quality in studies investigating diagnostic accuracy of screen-readers. Eleven studies were identified and the constructed tool applied to evaluate quality. This evaluation resulted in the identification of quality issues with the studies such as potential for bias, applicability of results, study conduct, reporting of the study and observer characteristics. An assessment of the applicability and relevance of the tool for this area of research resulted in adaptations to the criteria and the development of a tool specifically for evaluating diagnostic accuracy in screen-reading. This tool, with further refinement and rigorous validation, can make a significant contribution to promoting well-designed studies in this important area of research and practice.

  10. Reporting completeness and transparency of meta-analyses of depression screening tool accuracy: A comparison of meta-analyses published before and after the PRISMA statement.

    PubMed

    Rice, Danielle B; Kloda, Lorie A; Shrier, Ian; Thombs, Brett D

    2016-08-01

    Meta-analyses that are conducted rigorously and reported completely and transparently can provide accurate evidence to inform the best possible healthcare decisions. Guideline makers have raised concerns about the utility of existing evidence on the diagnostic accuracy of depression screening tools. The objective of our study was to evaluate the transparency and completeness of reporting in meta-analyses of the diagnostic accuracy of depression screening tools using the PRISMA tool adapted for diagnostic test accuracy meta-analyses. We searched MEDLINE and PsycINFO from January 1, 2005 through March 13, 2016 for recent meta-analyses in any language on the diagnostic accuracy of depression screening tools. Two reviewers independently assessed the transparency in reporting using the PRISMA tool with appropriate adaptations made for studies of diagnostic test accuracy. We identified 21 eligible meta-analyses. Twelve of 21 meta-analyses complied with at least 50% of adapted PRISMA items. Of 30 adapted PRISMA items, 11 were fulfilled by ≥80% of included meta-analyses, 3 by 50-79% of meta-analyses, 7 by 25-45% of meta-analyses, and 9 by <25%. On average, post-PRISMA meta-analyses complied with 17 of 30 items compared to 13 of 30 items pre-PRISMA. Deficiencies in the transparency of reporting in meta-analyses of the diagnostic test accuracy of depression screening tools were identified. Authors, reviewers, and editors should adhere to the PRISMA statement to improve the reporting of meta-analyses of the diagnostic accuracy of depression screening tools. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Patient-Reported Outcomes After Radiation Therapy in Men With Prostate Cancer: A Systematic Review of Prognostic Tool Accuracy and Validity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Callaghan, Michael E., E-mail: elspeth.raymond@health.sa.gov.au; Freemasons Foundation Centre for Men's Health, University of Adelaide; Urology Unit, Repatriation General Hospital, SA Health, Flinders Centre for Innovation in Cancer

    Purpose: To identify, through a systematic review, all validated tools used for the prediction of patient-reported outcome measures (PROMs) in patients being treated with radiation therapy for prostate cancer, and provide a comparative summary of accuracy and generalizability. Methods and Materials: PubMed and EMBASE were searched from July 2007. Title/abstract screening, full text review, and critical appraisal were undertaken by 2 reviewers, whereas data extraction was performed by a single reviewer. Eligible articles had to provide a summary measure of accuracy and undertake internal or external validation. Tools were recommended for clinical implementation if they had been externally validated and found to have accuracy ≥70%. Results: The search strategy identified 3839 potential studies, of which 236 progressed to full text review and 22 were included. From these studies, 50 tools predicted gastrointestinal/rectal symptoms, 29 tools predicted genitourinary symptoms, 4 tools predicted erectile dysfunction, and no tools predicted quality of life. For patients treated with external beam radiation therapy, 3 tools could be recommended for the prediction of rectal toxicity, gastrointestinal toxicity, and erectile dysfunction. For patients treated with brachytherapy, 2 tools could be recommended for the prediction of urinary retention and erectile dysfunction. Conclusions: A large number of tools for the prediction of PROMs in prostate cancer patients treated with radiation therapy have been developed. Only a small minority are accurate and have been shown to be generalizable through external validation. This review provides an accessible catalogue of tools that are ready for clinical implementation as well as which should be prioritized for validation.

  12. PREDICT: a diagnostic accuracy study of a tool for predicting mortality within one year: who should have an advance healthcare directive?

    PubMed

    Richardson, Philip; Greenslade, Jaimi; Shanmugathasan, Sulochana; Doucet, Katherine; Widdicombe, Neil; Chu, Kevin; Brown, Anthony

    2015-01-01

    CARING is a screening tool developed to identify patients who have a high likelihood of death in 1 year. This study sought to validate a modified CARING tool (termed PREDICT) using a population of patients presenting to the Emergency Department. In total, 1000 patients aged over 55 years who were admitted to hospital via the Emergency Department between January and June 2009 were eligible for inclusion in this study. Data on the six prognostic indicators comprising PREDICT were obtained retrospectively from patient records. One-year mortality data were obtained from the State Death Registry. Weights were applied to each PREDICT criterion, and its final score ranged from 0 to 44. Receiver operating characteristic analyses and diagnostic accuracy statistics were used to assess the accuracy of PREDICT in identifying 1-year mortality. The sample comprised 976 patients with a median (interquartile range) age of 71 years (62-81 years) and a 1-year mortality of 23.4%. In total, 50% had ≥1 PREDICT criteria with a 1-year mortality of 40.4%. Receiver operating characteristic analysis gave an area under the curve of 0.86 (95% confidence interval: 0.83-0.89). Using a cut-off of 13 points, PREDICT had a 95.3% (95% confidence interval: 93.6-96.6) specificity and 53.9% (95% confidence interval: 47.5-60.3) sensitivity for predicting 1-year mortality. PREDICT was simpler than the CARING criteria and identified 158 patients per 1000 admitted who could benefit from advance care planning. PREDICT was successfully applied to the Australian healthcare system with findings similar to the original CARING study conducted in the United States. This tool could improve end-of-life care by identifying who should have advance care planning or an advance healthcare directive. © The Author(s) 2014.
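
The area under the ROC curve reported above can be computed directly as the probability that a randomly chosen patient who died outranks a randomly chosen survivor on the score, counting ties as half (the Mann-Whitney formulation). A self-contained sketch with invented scores, not the study's data:

```python
def roc_auc(scores, labels):
    """AUC as the concordance probability P(score_pos > score_neg),
    counting ties as half. scores/labels are parallel lists; label 1 = event."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical PREDICT-style scores (0-44 scale) with 1-year mortality labels.
scores = [0, 5, 8, 13, 13, 20, 27, 35]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
print(f"AUC = {roc_auc(scores, labels):.3f}")
```

Sweeping a cutoff (such as the study's 13 points) along the score axis traces the ROC curve; each cutoff yields one sensitivity/specificity pair, and the AUC summarizes them all.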

  13. Molecular Tools for Diagnosis of Visceral Leishmaniasis: Systematic Review and Meta-Analysis of Diagnostic Test Accuracy

    PubMed Central

    de Ruiter, C. M.; van der Veer, C.; Leeflang, M. M. G.; Deborggraeve, S.; Lucas, C.

    2014-01-01

    Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. PMID:24829226
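
Pooled estimates like the 93.1% sensitivity above are commonly obtained by combining per-study proportions on the logit scale with inverse-variance weights. The sketch below is a simplified fixed-effect version with invented counts; diagnostic accuracy meta-analyses such as this one typically use bivariate random-effects models that pool sensitivity and specificity jointly:

```python
import math

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling of sensitivities on the logit
    scale. studies = [(tp, fn), ...]; returns the pooled sensitivity."""
    weighted_sum = total_weight = 0.0
    for tp, fn in studies:
        logit = math.log(tp / fn)     # log-odds of a true positive
        var = 1 / tp + 1 / fn         # approximate variance of the logit
        weighted_sum += logit / var
        total_weight += 1 / var
    pooled_logit = weighted_sum / total_weight
    return 1 / (1 + math.exp(-pooled_logit))   # back-transform to a proportion

# Hypothetical per-study true-positive / false-negative counts.
print(f"pooled sensitivity = {pooled_sensitivity([(90, 10), (45, 5), (180, 20)]):.3f}")
```

Working on the logit scale keeps the pooled value inside (0, 1) and gives larger studies proportionally more weight, which is why a pooled estimate can sit far from the extremes of the 29-100% per-study range.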

  14. Evaluation of Pictorial Dietary Assessment Tool for Hospitalized Patients with Diabetes: Cost, Accuracy, and User Satisfaction Analysis

    PubMed Central

    Shahar, Suzana; Abdul Manaf, Zahara; Mohd Nordin, Nor Azlin; Susetyowati, Susetyowati

    2017-01-01

    Although nutritional screening and dietary monitoring in clinical settings are important, studies on related user satisfaction and cost benefit are still lacking. This study aimed to: (1) elucidate the cost of implementing a newly developed dietary monitoring tool, the Pictorial Dietary Assessment Tool (PDAT); and (2) investigate the accuracy of estimation and satisfaction of healthcare staff after the use of the PDAT. A cross-over intervention study was conducted among 132 hospitalized patients with diabetes. Cost and time for the implementation of PDAT in comparison to modified Comstock was estimated using the activity-based costing approach. Accuracy was expressed as the percentages of energy and protein obtained by both methods, which were within 15% and 30%, respectively, of those obtained by the food weighing. Satisfaction of healthcare staff was measured using a standardized questionnaire. Time to complete the food intake recording of patients using PDAT (2.31 ± 0.70 min) was shorter than when modified Comstock (3.53 ± 1.27 min) was used (p < 0.001). Overall cost per patient was slightly higher for PDAT (United States Dollar 0.27 ± 0.02) than for modified Comstock (USD 0.26 ± 0.04 (p < 0.05)). The accuracy of energy intake estimated by modified Comstock was 10% lower than that of PDAT. There was poorer accuracy of protein intake estimated by modified Comstock (<40%) compared to that estimated by the PDAT (>71%) (p < 0.05). Mean user satisfaction of healthcare staff was significantly higher for PDAT than that for modified Comstock (p < 0.05). PDAT requires a shorter time to be completed and was rated better than modified Comstock. PMID:29283401

  15. Molecular tools for diagnosis of visceral leishmaniasis: systematic review and meta-analysis of diagnostic test accuracy.

    PubMed

    de Ruiter, C M; van der Veer, C; Leeflang, M M G; Deborggraeve, S; Lucas, C; Adams, E R

    2014-09-01

    Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  16. Accuracy of Brief Screening Tools for Identifying Postpartum Depression Among Adolescent Mothers

    PubMed Central

    Venkatesh, Kartik K.; Zlotnick, Caron; Triche, Elizabeth W.; Ware, Crystal

    2014-01-01

    OBJECTIVE: To evaluate the accuracy of the Edinburgh Postnatal Depression Scale (EPDS) and 3 subscales for identifying postpartum depression among primiparous adolescent mothers. METHODS: Mothers enrolled in a randomized controlled trial to prevent postpartum depression completed a psychiatric diagnostic interview and the 10-item EPDS at 6 weeks, 3 months, and 6 months postpartum. Three subscales of the EPDS were assessed as brief screening tools: a 3-item anxiety subscale (EPDS-3), a 7-item depressive symptoms subscale (EPDS-7), and a 2-item subscale (EPDS-2) that resembles the Patient Health Questionnaire-2. Receiver operating characteristic curves and the areas under the curves for each tool were compared to assess accuracy. The sensitivities and specificities of each screening tool were calculated in comparison with diagnostic criteria for a major depressive disorder. Repeated-measures longitudinal analytical techniques were used. RESULTS: A total of 106 women contributed 289 postpartum visits; 18% of the women met criteria for incident postpartum depression by psychiatric diagnostic interview. When used as continuous measures, the full EPDS, EPDS-7, and EPDS-2 performed equally well (area under the curve >0.9). Optimal cutoff scores for a positive depression screen for the EPDS and EPDS-7 were lower (≥9 and ≥7, respectively) than currently recommended cutoff scores (≥10). At optimal cutoff scores, the EPDS and EPDS-7 both had sensitivities of 90% and specificities of >85%. CONCLUSIONS: The EPDS, EPDS-7, and EPDS-2 are highly accurate at identifying postpartum depression among adolescent mothers. In primary care pediatric settings, the EPDS and its shorter subscales have potential for use as effective depression screening tools. PMID:24344102

  17. Diagnostic accuracy of an identification tool for localized neuropathic pain based on the IASP criteria.

    PubMed

    Mayoral, Víctor; Pérez-Hernández, Concepción; Muro, Inmaculada; Leal, Ana; Villoria, Jesús; Esquivias, Ana

    2018-04-27

    Based on the clear neuroanatomical delineation of many neuropathic pain (NP) symptoms, a simple tool for performing a short structured clinical encounter based on the IASP diagnostic criteria was developed to identify NP. This study evaluated its accuracy and usefulness. A case-control study was performed in 19 pain clinics within Spain. A pain clinician used the experimental screening tool (the index test, IT) to assign the descriptions of non-neuropathic (nNP), non-localized neuropathic (nLNP), and localized neuropathic (LNP) to the patients' pain conditions. The reference standard was a formal clinical diagnosis provided by another pain clinician. The accuracy of the IT was compared with that of the Douleur Neuropathique en 4 questions (DN4) and the Leeds Assessment of Neuropathic Signs and Symptoms (LANSS). Six hundred sixty-six patients were analyzed. There was good agreement between the IT and the reference standard (kappa = 0.722). The IT was accurate in distinguishing between LNP and nLNP (83.2% sensitivity, 88.2% specificity), between LNP and the other pain categories (nLNP + nNP) (80.0% sensitivity, 90.7% specificity), and between NP and nNP (95.5% sensitivity, 89.1% specificity). The accuracy in distinguishing between NP and nNP was comparable with that of the DN4 and the LANSS. The IT took a median of 10 min to complete. A novel instrument based on an operationalization of the IASP criteria can not only discern between LNP and nLNP, but also provide a high level of diagnostic certainty about the presence of NP after a short clinical encounter.

  18. Wind Prediction Accuracy for Air Traffic Management Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan

    2000-01-01

    The performance of Air Traffic Management and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also discovered relatively large errors (e.g., greater than 20 knots) existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes) that are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratory and the Forecast Systems Lab of the National Oceanic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focused on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state-of-the-art systems, and development/validation of system enhancements to improve skill. A 13-month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then-standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MITLL based on the Integrated Terminal Wind System (ITWS). AW is run at a local facility (Center) level, and updates RUC predictions based on an

  19. Continuous Glucose Monitoring and Trend Accuracy

    PubMed Central

    Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey

    2014-01-01

    Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy-to-interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple-to-use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and, if required, the performance table and TI metric. PMID:24876437
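A minimal sketch of the dot-product idea this record describes (not the paper's exact Trend Compass implementation; the function names and data are illustrative assumptions): each trend can be treated as a unit vector in the (time, glucose) plane, so the dot product is 1 for identical trends and shrinks as the directions diverge.

```python
import math

# Hedged sketch of dot-product trend agreement between a CGM sensor-glucose
# (SG) trend and a reference blood-glucose (BG) trend. Not the paper's code.

def trend_vector(dt, dglucose):
    """Unit vector for a trend of `dglucose` over time interval `dt`."""
    norm = math.hypot(dt, dglucose)
    return (dt / norm, dglucose / norm)

def trend_agreement(dt, d_sg, d_bg):
    """Dot product of unit SG and BG trend vectors over the same interval."""
    u, v = trend_vector(dt, d_sg), trend_vector(dt, d_bg)
    return u[0] * v[0] + u[1] * v[1]

# Over one hour, BG falls by 1.0 mmol/L while the sensor reports a 1.0 mmol/L rise:
opposite = trend_agreement(1.0, +1.0, -1.0)   # 0.0: the trend vectors are orthogonal
matched = trend_agreement(1.0, -1.0, -1.0)    # ≈ 1.0: identical trends
```

The low/falling-BG versus rising-SG case example in the abstract is exactly the `opposite` situation, where the agreement score collapses even if the point error is modest.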

  20. Reducing waste in evaluation studies on fall risk assessment tools for older people.

    PubMed

    Meyer, Gabriele; Möhler, Ralph; Köpke, Sascha

    2018-05-18

    To critically appraise the recognition of methodological challenges in evaluation studies on assessment tools and nurses' clinical judgement on fall risk in older people, and to suggest how to reduce the respective research waste. Opinion paper and narrative review covering systematic reviews of studies assessing the diagnostic accuracy and impact of assessment tools and/or nurses' clinical judgement. Eighteen reviews published in the last 15 years were analysed. Only one reflects on potentially important factors threatening the accuracy of assessments that use delayed verification with fall events as the reference standard after a certain period of time: the natural course of fall risk, preventive measures, and the treatment paradox, in which accurate assessment leads to the prevention of falls, thereby influencing the reference standard and falsely indicating low diagnostic accuracy. Also, only one review mentions randomised controlled trials as the appropriate study design for investigating the impact of fall risk assessment tools on patient-important outcomes. Until now, only one randomised controlled trial dealing with this question has been performed, showing no effect on falls and injuries. Instead of investigating the diagnostic accuracy of fall assessment tools, the focus of future research should be on the effectiveness of implementing fall assessment tools in reducing falls and injuries. Copyright © 2018. Published by Elsevier Inc.

  1. a Free and Open Source Tool to Assess the Accuracy of Land Cover Maps: Implementation and Application to Lombardy Region (italy)

    NASA Astrophysics Data System (ADS)

    Bratic, G.; Brovelli, M. A.; Molinari, M. E.

    2018-04-01

    The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed the computation and investigation of less well-known indexes such as the Ground Truth and the Classification Success Index. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to the researcher and user communities.
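The confusion-matrix indexes this record refers to can be illustrated with a hedged sketch: overall accuracy plus the per-class producer's and user's accuracy that land-cover validation typically reports. The class counts and the `map_accuracy` helper below are invented for illustration, not taken from the tool:

```python
# Illustrative computation (invented counts) of confusion-matrix-derived
# map accuracy indexes.

def map_accuracy(matrix):
    """matrix[i][j] = pixels of reference class i mapped as class j."""
    n = sum(sum(row) for row in matrix)
    # Overall accuracy: fraction of pixels on the matrix diagonal
    overall = sum(matrix[i][i] for i in range(len(matrix))) / n
    # Producer's accuracy: diagonal / row total (omission errors)
    producers = [matrix[i][i] / sum(matrix[i]) for i in range(len(matrix))]
    # User's accuracy: diagonal / column total (commission errors)
    users = [matrix[j][j] / sum(matrix[i][j] for i in range(len(matrix)))
             for j in range(len(matrix))]
    return overall, producers, users

# Classes: built-up, vegetation, water (hypothetical reference vs. classified counts)
m = [[90, 8, 2],
     [10, 85, 5],
     [0, 2, 98]]
overall, producers, users = map_accuracy(m)   # overall == 0.91 here
```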

  2. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867

  3. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.

  4. Assessment of the predictive accuracy of five in silico prediction tools, alone or in combination, and two metaservers to classify long QT syndrome gene mutations.

    PubMed

    Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R

    2015-05-13

    Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico
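A sketch of the performance measures this record reports, assuming the standard binary-classification definitions; the TP/TN/FP/FN counts below are hypothetical, chosen only to match the study's 283 pathogenic / 29 benign class imbalance, under which the Matthews Correlation Coefficient (MCC) stays informative when raw accuracy does not:

```python
import math

# Hedged sketch (hypothetical counts) of accuracy, sensitivity, specificity
# and MCC for a binary pathogenic/benign classifier.

def binary_metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # MCC: balanced correlation between predicted and true labels
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sens, spec, mcc

# Invented confusion counts summing to 283 pathogenic and 29 benign variants
acc, sens, spec, mcc = binary_metrics(tp=263, tn=20, fp=9, fn=20)
```

With such skewed classes, accuracy is dominated by the majority (pathogenic) class, which is why the study reports MCC alongside sensitivity and specificity.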

  5. Accuracy and Acceptability of a Screening Tool for Identifying Intimate Partner Violence Perpetration among Women Veterans: A Pre-Implementation Evaluation.

    PubMed

    Portnoy, Galina A; Haskell, Sally G; King, Matthew W; Maskin, Rachel; Gerber, Megan R; Iverson, Katherine M

    2018-06-06

    Veterans are at heightened risk for perpetrating intimate partner violence (IPV), yet there is limited evidence to inform practice and policy for the detection of IPV perpetration. The present study evaluated the accuracy and acceptability of a potential IPV perpetration screening tool for use with women veterans. A national sample of women veterans completed a 2016 web-based survey that included a modified 5-item Extended-Hurt/Insult/Threaten/Scream (Modified E-HITS) and the Revised Conflict Tactics Scales (CTS-2). Items also assessed women's perceptions of the acceptability and appropriateness of the modified E-HITS questions for use in healthcare settings. Accuracy statistics, including sensitivity and specificity, were calculated using the CTS-2 as the reference standard. Primary measures included the Modified E-HITS (index test), CTS-2 (reference standard), and items assessing acceptability. This study included 187 women, of whom 31 (16.6%) reported past-6-month IPV perpetration on the CTS-2. The Modified E-HITS demonstrated good overall accuracy (area under the curve, 0.86; 95% confidence interval, 0.78-0.94). In addition, the majority of women perceived the questions to be acceptable and appropriate. Findings demonstrate that the Modified E-HITS is promising as a low-burden tool for detecting IPV perpetration among women veterans. This tool may help the Veterans Health Administration and other health care providers detect IPV perpetration and offer appropriate referrals for comprehensive assessment and services. Published by Elsevier Inc.
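The area under the curve reported in this record can be computed non-parametrically as the Mann-Whitney probability that a randomly chosen screen-positive respondent outscores a randomly chosen screen-negative respondent. The scores below are invented and the `auc` helper is a hypothetical name, shown only to illustrate the statistic:

```python
# Hedged sketch: non-parametric AUC as the Mann-Whitney win probability.
# Scores are invented, not the study's data.

def auc(pos_scores, neg_scores):
    """P(random positive scores above random negative), ties count half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [9, 7, 8, 6, 10]   # screen scores of reference-positive respondents
negatives = [3, 5, 2, 6, 4]    # screen scores of reference-negative respondents
area = auc(positives, negatives)   # 0.98 for these toy scores
```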

  6. Methodological quality of diagnostic accuracy studies on non-invasive coronary CT angiography: influence of QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) items on sensitivity and specificity.

    PubMed

    Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc

    2013-06-01

    To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.

  7. Precision, accuracy, and efficiency of four tools for measuring soil bulk density or strength.

    Treesearch

    Richard E. Miller; John Hazard; Steven Howes

    2001-01-01

    Monitoring soil compaction is time consuming. A desire for speed and lower costs, however, must be balanced with the appropriate precision and accuracy required of the monitoring task. We compared three core samplers and a cone penetrometer for measuring soil compaction after clearcut harvest on a stone-free and a stony soil. Precision (i.e., consistency) of each tool...

  8. Ambulance smartphone tool for field triage of ruptured aortic aneurysms (FILTR): study protocol for a prospective observational validation of diagnostic accuracy.

    PubMed

    Lewis, Thomas L; Fothergill, Rachael T; Karthikesalingam, Alan

    2016-10-24

    Rupture of an abdominal aortic aneurysm (rAAA) carries a considerable mortality rate and is often fatal. rAAA can be treated through open or endovascular surgical intervention and it is possible that more rapid access to definitive intervention might be a key aspect of improving mortality for rAAA. Diagnosis is not always straightforward with up to 42% of rAAA initially misdiagnosed, introducing potentially harmful delay. There is a need for an effective clinical decision support tool for accurate prehospital diagnosis and triage to enable transfer to an appropriate centre. Prospective multicentre observational study assessing the diagnostic accuracy of a prehospital smartphone triage tool for detection of rAAA. The study will be conducted across London in conjunction with London Ambulance Service (LAS). A logistic score predicting the risk of rAAA by assessing ten key parameters was developed and retrospectively validated through logistic regression analysis of ambulance records and Hospital Episode Statistics data for 2200 patients from 2005 to 2010. The triage tool is integrated into a secure mobile app for major smartphone platforms. Key parameters collected from the app will be retrospectively matched with final hospital discharge diagnosis for each patient encounter. The primary outcome is to assess the sensitivity, specificity and positive predictive value of the rAAA triage tool logistic score in prospective use as a mobile app for prehospital ambulance clinicians. Data collection started in November 2014 and the study will recruit a minimum of 1150 non-consecutive patients over a time period of 2 years. Full ethical approval has been gained for this study. The results of this study will be disseminated in peer-reviewed publications and international/national presentations. CPMS 16459; pre-results. Published by the BMJ Publishing Group Limited.

  9. Dynamics of Complexity and Accuracy: A Longitudinal Case Study of Advanced Untutored Development

    ERIC Educational Resources Information Center

    Polat, Brittany; Kim, Youjin

    2014-01-01

    This longitudinal case study follows a dynamic systems approach to investigate an under-studied research area in second language acquisition, the development of complexity and accuracy for an advanced untutored learner of English. Using the analytical tools of dynamic systems theory (Verspoor et al. 2011) within the framework of complexity,…

  10. Demonstrating High-Accuracy Orbital Access Using Open-Source Tools

    NASA Technical Reports Server (NTRS)

    Gilbertson, Christian; Welch, Bryan

    2017-01-01

    Orbit propagation is fundamental to almost every space-based analysis. Currently, many system analysts use commercial software to predict the future positions of orbiting satellites. This is one of many capabilities that can be replicated, with great accuracy, without using expensive, proprietary software. NASA's SCaN (Space Communication and Navigation) Center for Engineering, Networks, Integration, and Communications (SCENIC) project plans to provide its analysis capabilities using a combination of internal and open-source software, allowing for a much greater measure of customization and flexibility, while reducing recurring software license costs. MATLAB and the open-source Orbit Determination Toolbox created by Goddard Space Flight Center (GSFC) were utilized to develop tools with the capability to propagate orbits, perform line-of-sight (LOS) availability analyses, and visualize the results. The developed programs are modular and can be applied for mission planning and viability analysis in a variety of Solar System applications. The tools can perform 2- and N-body orbit propagation, find inter-satellite and satellite-to-ground-station LOS access (accounting for intermediate oblate spheroid body blocking, geometric restrictions of the antenna field-of-view (FOV), and relativistic corrections), and create animations of planetary movement, satellite orbits, and LOS accesses. The code is the basis for SCENIC's broad analysis capabilities, including dynamic link analysis, dilution-of-precision navigation analysis, and orbital availability calculations.

  11. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the “real world”, and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to obtain these models. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise, hypocenters are perturbed to replicate a starting location away from the “true” location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of computers needed for the large arrays in the inversion and a

  12. The development of a quality appraisal tool for studies of diagnostic reliability (QAREL).

    PubMed

    Lucas, Nicholas P; Macaskill, Petra; Irwig, Les; Bogduk, Nikolai

    2010-08-01

    In systematic reviews of the reliability of diagnostic tests, no quality assessment tool has been used consistently. The aim of this study was to develop a specific quality appraisal tool for studies of diagnostic reliability. Key principles for the quality of studies of diagnostic reliability were identified with reference to epidemiologic principles, existing quality appraisal checklists, and the Standards for Reporting of Diagnostic Accuracy (STARD) and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) resources. Specific items that encompassed each of the principles were developed. Experts in diagnostic research provided feedback on the items that were to form the appraisal tool. This process was iterative and continued until consensus among experts was reached. The Quality Appraisal of Reliability Studies (QAREL) checklist includes 11 items that explore seven principles. Items cover the spectrum of subjects, spectrum of examiners, examiner blinding, order effects of examination, suitability of the time interval among repeated measurements, appropriate test application and interpretation, and appropriate statistical analysis. QAREL has been developed as a specific quality appraisal tool for studies of diagnostic reliability. The reliability of this tool in different contexts needs to be evaluated. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  13. Technology of machine tools. Volume 5. Machine tool accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hocken, R.J.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  14. New bone post-processing tools in forensic imaging: a multi-reader feasibility study to evaluate detection time and diagnostic accuracy in rib fracture assessment.

    PubMed

    Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David

    2017-03-01

    The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools for rib fracture assessment in forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different levels of clinical and forensic experience, blinded to the autopsy results, evaluated all cases in a random order of modality and case. Each reader's rib fracture assessment was evaluated against the autopsy and a CT consensus read as radiologic references. A detailed evaluation of the relevant test parameters revealed better accordance with the CT consensus read than with the autopsy. Modality C was significantly the quickest rib fracture detection modality, despite slightly reduced statistical test parameters compared with modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared with standard autopsy alone. The eagle tool, as an easy-to-use tool, is suited for initial rib fracture screening prior to autopsy and can therefore be beneficial for forensic pathologists.

  15. Quality and reporting of diagnostic accuracy studies in TB, HIV and malaria: evaluation using QUADAS and STARD standards.

    PubMed

    Fontela, Patricia Scolari; Pant Pai, Nitika; Schiller, Ian; Dendukuri, Nandini; Ramsay, Andrew; Pai, Madhukar

    2009-11-13

    Poor methodological quality and reporting are known concerns with diagnostic accuracy studies. In 2003, the QUADAS tool and the STARD standards were published for evaluating the quality and improving the reporting of diagnostic studies, respectively. However, it is unclear whether these tools have been applied to diagnostic studies of infectious diseases. We performed a systematic review on the methodological and reporting quality of diagnostic studies in TB, malaria and HIV. We identified diagnostic accuracy studies of commercial tests for TB, malaria and HIV through a systematic search of the literature using PubMed and EMBASE (2004-2006). Original studies that reported sensitivity and specificity data were included. Two reviewers independently extracted data on study characteristics and diagnostic accuracy, and used QUADAS and STARD to evaluate the quality of methods and reporting, respectively. Ninety (38%) of 238 articles met inclusion criteria. All studies had design deficiencies. Study quality indicators that were met in less than 25% of the studies included adequate description of withdrawals (6%) and reference test execution (10%), absence of index test review bias (19%) and reference test review bias (24%), and report of uninterpretable results (22%). In terms of quality of reporting, 9 STARD indicators were reported in less than 25% of the studies: methods for calculation and estimates of reproducibility (0%), adverse effects of the diagnostic tests (1%), estimates of diagnostic accuracy between subgroups (10%), distribution of severity of disease/other diagnoses (11%), number of eligible patients who did not participate in the study (14%), blinding of the test readers (16%), and description of the team executing the test and management of indeterminate/outlier results (both 17%). The use of STARD was not explicitly mentioned in any study. Only 22% of 46 journals that published the studies included in this review required authors to use STARD. 
Recently

  16. Quality and Reporting of Diagnostic Accuracy Studies in TB, HIV and Malaria: Evaluation Using QUADAS and STARD Standards

    PubMed Central

    Fontela, Patricia Scolari; Pant Pai, Nitika; Schiller, Ian; Dendukuri, Nandini; Ramsay, Andrew; Pai, Madhukar

    2009-01-01

    Background Poor methodological quality and reporting are known concerns with diagnostic accuracy studies. In 2003, the QUADAS tool and the STARD standards were published for evaluating the quality and improving the reporting of diagnostic studies, respectively. However, it is unclear whether these tools have been applied to diagnostic studies of infectious diseases. We performed a systematic review on the methodological and reporting quality of diagnostic studies in TB, malaria and HIV. Methods We identified diagnostic accuracy studies of commercial tests for TB, malaria and HIV through a systematic search of the literature using PubMed and EMBASE (2004–2006). Original studies that reported sensitivity and specificity data were included. Two reviewers independently extracted data on study characteristics and diagnostic accuracy, and used QUADAS and STARD to evaluate the quality of methods and reporting, respectively. Findings Ninety (38%) of 238 articles met inclusion criteria. All studies had design deficiencies. Study quality indicators that were met in less than 25% of the studies included adequate description of withdrawals (6%) and reference test execution (10%), absence of index test review bias (19%) and reference test review bias (24%), and report of uninterpretable results (22%). In terms of quality of reporting, 9 STARD indicators were reported in less than 25% of the studies: methods for calculation and estimates of reproducibility (0%), adverse effects of the diagnostic tests (1%), estimates of diagnostic accuracy between subgroups (10%), distribution of severity of disease/other diagnoses (11%), number of eligible patients who did not participate in the study (14%), blinding of the test readers (16%), and description of the team executing the test and management of indeterminate/outlier results (both 17%). The use of STARD was not explicitly mentioned in any study. Only 22% of 46 journals that published the studies included in this review required

  17. Serial combination of non-invasive tools improves the diagnostic accuracy of severe liver fibrosis in patients with NAFLD.

    PubMed

    Petta, S; Wong, V W-S; Cammà, C; Hiriart, J-B; Wong, G L-H; Vergniol, J; Chan, A W-H; Di Marco, V; Merrouche, W; Chan, H L-Y; Marra, F; Le-Bail, B; Arena, U; Craxì, A; de Ledinghen, V

    2017-09-01

The accuracy of available non-invasive tools for staging severe fibrosis in patients with nonalcoholic fatty liver disease (NAFLD) is still limited. The aim was to assess the diagnostic performance of paired or serial combinations of non-invasive tools in NAFLD patients. We analysed data from 741 patients with a histological diagnosis of NAFLD. The GGT/PLT, APRI, AST/ALT, BARD, FIB-4, and NAFLD Fibrosis Score (NFS) scores were calculated according to published algorithms. Liver stiffness measurement (LSM) was performed by FibroScan. LSM, NFS and FIB-4 were the best non-invasive tools for staging F3-F4 fibrosis (AUC 0.863, 0.774, and 0.792, respectively), with LSM having the highest sensitivity (90%) and NPV (94%), and NFS and FIB-4 the highest specificity (97% and 93%, respectively) and PPV (73% and 79%, respectively). The paired combination of LSM or NFS with FIB-4 strongly reduced the likelihood of wrongly classified patients (2.7% and 2.6%, respectively), at the price of a large uncertainty area (54.1% and 58.2%) and a low overall accuracy (43% and 39.1%). The serial combination, with the second test used in patients in the grey area of the first test and in those with high LSM values (>9.6 kPa) or low NFS or FIB-4 values (<-1.455 and <1.30, respectively), increased the overall diagnostic performance, yielding an accuracy of 69.8% to 70.1%, an uncertainty area of 18.9% to 20.4% and a rate of wrong classification of 9.2% to 11.3%. The serial combination of LSM with FIB-4/NFS has a good diagnostic accuracy for the non-invasive diagnosis of severe fibrosis in NAFLD. © 2017 John Wiley & Sons Ltd.
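The serial strategy the abstract describes (a second test consulted only for cases the first test leaves indeterminate) can be sketched generically; the grey-zone bounds and decision logic below are illustrative assumptions, not the study's exact algorithm:

```python
def serial_rule(first_value, grey_zone, second_test):
    """Generic serial combination of two diagnostic tests.

    The second test is run only when the first result falls inside its
    grey (indeterminate) zone; outside the zone, the first test's answer
    stands. Returns True when severe fibrosis is suspected.
    """
    lo, hi = grey_zone
    if lo <= first_value <= hi:
        return second_test()        # resolve indeterminate cases
    return first_value > hi         # clear-cut first-test answer

# Illustrative call: LSM rule-in threshold of 9.6 kPa from the abstract;
# the grey-zone lower bound (6.0 kPa) is a hypothetical value.
suspected = serial_rule(8.0, (6.0, 9.6), lambda: True)
```

Running the second test only inside the grey zone is what shrinks the uncertainty area (from over 50% to about 20% in the study) without re-testing patients the first test already classifies confidently.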

  18. Diagnostic accuracy research in glaucoma is still incompletely reported: An application of Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015.

    PubMed

    Michelessi, Manuele; Lucenteforte, Ersilia; Miele, Alba; Oddone, Francesco; Crescioli, Giada; Fameli, Valeria; Korevaar, Daniël A; Virgili, Gianni

    2017-01-01

Research has shown a modest adherence of diagnostic test accuracy (DTA) studies in glaucoma to the Standards for Reporting of Diagnostic Accuracy Studies (STARD). We applied the updated 30-item STARD 2015 checklist to a set of studies included in a Cochrane DTA systematic review of imaging tools for diagnosing manifest glaucoma. Three pairs of reviewers, including one senior reviewer who assessed all studies, independently checked the adherence of each study to STARD 2015. Adherence was analyzed on an individual-item basis. Logistic regression was used to evaluate the effect of publication year and impact factor on adherence. We included 106 DTA studies, published between 2003 and 2014 in journals with a median impact factor of 2.6. Overall adherence was 54.1% for 3,286 individual ratings across 31 items, with a mean of 16.8 (SD: 3.1; range 8-23) items per study. Large variability in adherence to reporting standards was detected across individual STARD 2015 items, ranging from 0 to 100%. Nine items (1: identification as diagnostic accuracy study in title/abstract; 6: eligibility criteria; 10: index test (a) and reference standard (b) definition; 12: cut-off definitions for index test (a) and reference standard (b); 14: estimation of diagnostic accuracy measures; 21a: severity spectrum of diseased; 23: cross-tabulation of the index and reference standard results) were adequately reported in more than 90% of the studies. Conversely, 10 items (3: scientific and clinical background of the index test; 11: rationale for the reference standard; 13b: blinding of index test results; 17: analyses of variability; 18: sample size calculation; 19: study flow diagram; 20: baseline characteristics of participants; 28: registration number and registry; 29: availability of study protocol; 30: sources of funding) were adequately reported in less than 30% of the studies. Only four items showed a statistically significant improvement over time: missing data (16), baseline

  19. Accuracy of digital images in the detection of marginal microleakage: an in vitro study.

    PubMed

    Alvarenga, Fábio Augusto; Andrade, Marcelo Ferrarezi; Pinelli, Camila; Rastelli, Alessanda Nara; Victorino, Keli Regina; Loffredo, Leonor de

    2012-08-01

To evaluate the accuracy of Image Tool Software 3.0 (ITS 3.0) in detecting marginal microleakage, using the stereomicroscope as the validation criterion and ITS 3.0 as the tool under study. Class V cavities were prepared at the cementoenamel junction of 61 bovine incisors, and 53 halves of them were used. Using the stereomicroscope, microleakage was classified dichotomously: presence or absence. Next, ITS 3.0 was used to obtain measurements of the microleakage, with 0.75 taken as the cut-off point: values equal to or greater than 0.75 indicated its presence, while values between 0.00 and 0.75 indicated its absence. Sensitivity and specificity were calculated as point estimates with 95% confidence intervals (95% CI). The accuracy of ITS 3.0 was verified, with a sensitivity of 0.95 (95% CI: 0.89 to 1.00) and a specificity of 0.92 (95% CI: 0.84 to 0.99). Digital diagnosis of marginal microleakage using ITS 3.0 was sensitive and specific.
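The point estimates and 95% CIs reported above are standard binomial calculations from a 2x2 table; a minimal sketch using Wald intervals, with hypothetical counts rather than the study's raw data:

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Point estimates and Wald 95% CIs for sensitivity and specificity
    from 2x2 counts (tp/fn against the reference's positives,
    tn/fp against its negatives)."""
    def prop_ci(successes, n):
        p = successes / n
        half = z * math.sqrt(p * (1 - p) / n)   # Wald half-width
        return p, max(0.0, p - half), min(1.0, p + half)
    sensitivity = prop_ci(tp, tp + fn)
    specificity = prop_ci(tn, tn + fp)
    return sensitivity, specificity

# Hypothetical 2x2 counts, chosen only for illustration
(sens, s_lo, s_hi), (spec, c_lo, c_hi) = sens_spec_ci(tp=19, fn=1, tn=22, fp=2)
```

Wald intervals are the simplest choice; exact (Clopper-Pearson) or Wilson intervals are often preferred for small samples or proportions near 0 or 1.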

  20. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    PubMed

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.
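The 300-point H score used above for estrogen receptor assessment is a weighted sum of staining percentages by intensity; a minimal sketch (the input percentages are illustrative):

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """300-point H score: the percentage of cells staining at each
    intensity (1+, 2+, 3+) is weighted by that intensity and summed.
    Range 0 (no staining) to 300 (all cells staining strongly)."""
    total = pct_weak + pct_moderate + pct_strong
    if not 0 <= total <= 100:
        raise ValueError("staining percentages must sum to at most 100")
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# Illustrative case: 10% weak, 20% moderate, 30% strong staining
score = h_score(10, 20, 30)  # 10 + 40 + 90 = 140
```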

  1. Development of a novel empathy-related video-feedback intervention to improve empathic accuracy of nursing students: A pilot study.

    PubMed

    Lobchuk, Michelle; Halas, Gayle; West, Christina; Harder, Nicole; Tursunova, Zulfiya; Ramraj, Chantal

    2016-11-01

    Stressed family carers engage in health-risk behaviours that can lead to chronic illness. Innovative strategies are required to bolster empathic dialogue skills that impact nursing student confidence and sensitivity in meeting carers' wellness needs. To report on the development and evaluation of a promising empathy-related video-feedback intervention and its impact on student empathic accuracy on carer health risk behaviours. A pilot quasi-experimental design study with eight pairs of 3rd year undergraduate nursing students and carers. Students participated in perspective-taking instructional and practice sessions, and a 10-minute video-recorded dialogue with carers followed by a video-tagging task. Quantitative and qualitative approaches helped us to evaluate the recruitment protocol, capture participant responses to the intervention and study tools, and develop a tool to assess student empathic accuracy. The instructional and practice sessions increased student self-awareness of biases and interest in learning empathy by video-tagging feedback. Carers felt that students were 'non-judgmental', inquisitive, and helped them to 'gain new insights' that fostered ownership to change their health-risk behaviour. There was substantial Fleiss Kappa agreement among four raters across five dyads and 67 tagged instances. In general, students and carers evaluated the intervention favourably. The results suggest areas of improvement to the recruitment protocol, perspective-taking instructions, video-tagging task, and empathic accuracy tool. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Methodology and reporting of diagnostic accuracy studies of automated perimetry in glaucoma: evaluation using a standardised approach.

    PubMed

    Fidalgo, Bruno M R; Crabb, David P; Lawrenson, John G

    2015-05-01

    To evaluate methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. A systematic review of English language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality assessment tool for diagnostic accuracy studies (QUADAS) and evaluated for quality of reporting by applying the STARD checklist. Fifty-eight articles were appraised. Overall methodological quality of these studies was moderate with a median number of QUADAS items rated as 'yes' equal to nine (out of a maximum of 14) (IQR 7-10). The studies were often poorly reported; median score of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in 10-year periods before and after the publication of the STARD checklist in 2003 found quality of reporting had not substantially improved. Methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  3. Screening for bipolar spectrum disorders: A comprehensive meta-analysis of accuracy studies.

    PubMed

    Carvalho, André F; Takwoingi, Yemisi; Sales, Paulo Marcelo G; Soczynska, Joanna K; Köhler, Cristiano A; Freitas, Thiago H; Quevedo, João; Hyphantis, Thomas N; McIntyre, Roger S; Vieta, Eduard

    2015-02-01

Bipolar spectrum disorders are frequently under-recognized and/or misdiagnosed in various settings. Several influential publications recommend routine screening for bipolar disorder. A systematic review and meta-analysis of accuracy studies for the bipolar spectrum diagnostic scale (BSDS), the hypomania checklist (HCL-32) and the mood disorder questionnaire (MDQ) were performed. The Pubmed, EMBASE, Cochrane, PsycINFO and SCOPUS databases were searched. Studies were included if the accuracy properties of the screening measures were determined against a DSM or ICD-10 structured diagnostic interview. The QUADAS-2 tool was used to rate bias. Fifty-three original studies met inclusion criteria (N=21,542). At recommended cutoffs, summary sensitivities were 81%, 66% and 69%, while specificities were 67%, 79% and 86% for the HCL-32, MDQ, and BSDS in psychiatric services, respectively. The HCL-32 was more accurate than the MDQ for the detection of type II bipolar disorder in mental health care centers (P=0.018). At a cutoff of 7, the MDQ had a summary sensitivity of 43% and a summary specificity of 95% for detection of bipolar disorder in primary care or general population settings. Most studies were performed in mental health care settings. Several included studies had a high risk of bias. Although accuracy properties of the three screening instruments did not consistently differ in mental health care services, the HCL-32 was more accurate than the MDQ for the detection of type II BD. More studies in other settings (for example, in primary care) are necessary. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. An evaluation of the accuracy and speed of metagenome analysis tools

    PubMed Central

    Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510

  5. Interactive visualisation for interpreting diagnostic test accuracy study results.

    PubMed

    Fanshawe, Thomas R; Power, Michael; Graziadio, Sara; Ordóñez-Mena, José M; Simpson, John; Allen, Joy

    2018-02-01

    Information about the performance of diagnostic tests is typically presented in the form of measures of test accuracy such as sensitivity and specificity. These measures may be difficult to translate directly into decisions about patient treatment, for which information presented in the form of probabilities of disease after a positive or a negative test result may be more useful. These probabilities depend on the prevalence of the disease, which is likely to vary between populations. This article aims to clarify the relationship between pre-test (prevalence) and post-test probabilities of disease, and presents two free, online interactive tools to illustrate this relationship. These tools allow probabilities of disease to be compared with decision thresholds above and below which different treatment decisions may be indicated. They are intended to help those involved in communicating information about diagnostic test performance and are likely to be of benefit when teaching these concepts. A substantive example is presented using C reactive protein as a diagnostic marker for bacterial infection in the older adult population. The tools may also be useful for manufacturers of clinical tests in planning product development, for authors of test evaluation studies to improve reporting and for users of test evaluations to facilitate interpretation and application of the results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
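The pre-test/post-test relationship these interactive tools visualize follows directly from Bayes' theorem; a minimal sketch with illustrative accuracy and prevalence values:

```python
def post_test_probabilities(sensitivity, specificity, prevalence):
    """Convert test accuracy plus pre-test probability (prevalence)
    into post-test probabilities of disease."""
    # P(disease | positive result), via Bayes' theorem
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    p_disease_given_pos = tp / (tp + fp)
    # P(disease | negative result)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    p_disease_given_neg = fn / (fn + tn)
    return p_disease_given_pos, p_disease_given_neg

# Illustrative values only: the same 90%-sensitive, 80%-specific test
# at 10% vs. 40% prevalence
low_prev = post_test_probabilities(0.90, 0.80, 0.10)
high_prev = post_test_probabilities(0.90, 0.80, 0.40)
```

The two calls show the point the article makes: identical sensitivity and specificity yield very different post-test probabilities once prevalence changes, which is why decision thresholds must be interpreted per population.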

  6. Classification accuracy for stratification with remotely sensed data

    Treesearch

    Raymond L. Czaplewski; Paul L. Patterson

    2003-01-01

    Tools are developed that help specify the classification accuracy required from remotely sensed data. These tools are applied during the planning stage of a sample survey that will use poststratification, prestratification with proportional allocation, or double sampling for stratification. Accuracy standards are developed in terms of an “error matrix,” which is...

  7. Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel

    NASA Astrophysics Data System (ADS)

    Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie

    2011-05-01

Mechanical grinding of artificial diamond grinding wheels is a traditional wheel-dressing process, whose main process parameters are the rotational speed and infeed depth of the tool wheel. Suitable process parameters for high-accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through extensive experiments on a super-hard-material wheel-dressing grinding machine and through analysis of the grinding force. At the same time, the effects of machine sharpening and sprinkle-granule sharpening were compared. These analyses and experiments provide practical guidance for high-accuracy crush dressing of artificial diamond grinding wheels.

  8. TAxonomy of Self-reported Sedentary behaviour Tools (TASST) framework for development, comparison and evaluation of self-report tools: content analysis and systematic review

    PubMed Central

    Dall, PM; Coulter, EH; Fitzsimons, CF; Skelton, DA; Chastin, SFM

    2017-01-01

    Objective Sedentary behaviour (SB) has distinct deleterious health outcomes, yet there is no consensus on best practice for measurement. This study aimed to identify the optimal self-report tool for population surveillance of SB, using a systematic framework. Design A framework, TAxonomy of Self-reported Sedentary behaviour Tools (TASST), consisting of four domains (type of assessment, recall period, temporal unit and assessment period), was developed based on a systematic inventory of existing tools. The inventory was achieved through a systematic review of studies reporting SB and tracing back to the original description. A systematic review of the accuracy and sensitivity to change of these tools was then mapped against TASST domains. Data sources Systematic searches were conducted via EBSCO, reference lists and expert opinion. Eligibility criteria for selecting studies The inventory included tools measuring SB in adults that could be self-completed at one sitting, and excluded tools measuring SB in specific populations or contexts. The systematic review included studies reporting on the accuracy against an objective measure of SB and/or sensitivity to change of a tool in the inventory. Results The systematic review initially identified 32 distinct tools (141 questions), which were used to develop the TASST framework. Twenty-two studies evaluated accuracy and/or sensitivity to change representing only eight taxa. Assessing SB as a sum of behaviours and using a previous day recall were the most promising features of existing tools. Accuracy was poor for all existing tools, with underestimation and overestimation of SB. There was a lack of evidence about sensitivity to change. 
Conclusions Despite the limited evidence, mapping existing SB tools onto the TASST framework has enabled informed recommendations to be made about the most promising features for a surveillance tool, identified aspects on which future research and development of SB surveillance tools should focus.

  9. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol(-1)) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol(-1) to 0.15 and 0.18 kcal·mol(-1), respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol(-1). This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.
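The mean absolute deviation (MAD) used as the error metric above is a simple average of unsigned errors; a minimal sketch with hypothetical BDE values:

```python
def mean_absolute_deviation(calculated, experimental):
    """MAD between calculated and experimental values (here, BDEs in
    kcal/mol): the average of the absolute per-molecule errors."""
    pairs = list(zip(calculated, experimental))
    return sum(abs(c - e) for c, e in pairs) / len(pairs)

# Hypothetical BDE values in kcal/mol, for illustration only
mad = mean_absolute_deviation([40.2, 35.1, 50.6], [40.0, 35.0, 50.0])
```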

  10. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

Optical alignment machining is an effective method to ensure the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each optical unit, which in turn depends on the rotating accuracy of the lathe and the accuracy of optical-centering judgment. When a rotating accuracy of 0.2 um is achieved, the resulting error can be ignored. An axis-determination tool based on the principle of auto-collimation was designed to determine the unique position of the centerscope, namely the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system combining the axis-determination tool with the new optical-centering judgment method can improve the optical-centering accuracy to 0.003 mm.

  11. Evaluation of the accuracy of GPS as a method of locating traffic collisions.

    DOT National Transportation Integrated Search

    2004-06-01

The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. : The analysis s...

  12. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and on real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
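The network-based idea described above can be sketched in a toy form: a rare UMI one mismatch away from an abundant one is treated as a likely PCR or sequencing error and merged into it. The 2n-1 count threshold follows the "directional" method described for UMI-tools, but this simplified implementation is an illustration, not the package's actual code:

```python
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between two equal-length UMIs."""
    return sum(x != y for x, y in zip(a, b))

def directional_clusters(counts):
    """Toy 'directional' UMI network: UMI b is absorbed into a more
    abundant UMI a when they differ at exactly one position and
    count(a) >= 2 * count(b) - 1, so probable error UMIs collapse
    into their parent. Returns a UMI -> cluster-representative map."""
    umis = sorted(counts, key=counts.get, reverse=True)
    parent = {u: u for u in umis}
    for a, b in combinations(umis, 2):  # a is always the more abundant
        if parent[b] == b and hamming(a, b) == 1 \
                and counts[a] >= 2 * counts[b] - 1:
            parent[b] = parent[a]
    return parent

# ACGA (3 reads) is one mismatch from ACGT (100 reads) and is absorbed;
# TTTT is unrelated and stays its own cluster.
clusters = directional_clusters({"ACGT": 100, "ACGA": 3, "TTTT": 50})
```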

  13. TAxonomy of Self-reported Sedentary behaviour Tools (TASST) framework for development, comparison and evaluation of self-report tools: content analysis and systematic review.

    PubMed

    Dall, P M; Coulter, E H; Fitzsimons, C F; Skelton, D A; Chastin, Sfm

    2017-04-08

Sedentary behaviour (SB) has distinct deleterious health outcomes, yet there is no consensus on best practice for measurement. This study aimed to identify the optimal self-report tool for population surveillance of SB, using a systematic framework. A framework, TAxonomy of Self-reported Sedentary behaviour Tools (TASST), consisting of four domains (type of assessment, recall period, temporal unit and assessment period), was developed based on a systematic inventory of existing tools. The inventory was achieved through a systematic review of studies reporting SB and tracing back to the original description. A systematic review of the accuracy and sensitivity to change of these tools was then mapped against TASST domains. Systematic searches were conducted via EBSCO, reference lists and expert opinion. The inventory included tools measuring SB in adults that could be self-completed at one sitting, and excluded tools measuring SB in specific populations or contexts. The systematic review included studies reporting on the accuracy against an objective measure of SB and/or sensitivity to change of a tool in the inventory. The systematic review initially identified 32 distinct tools (141 questions), which were used to develop the TASST framework. Twenty-two studies evaluated accuracy and/or sensitivity to change representing only eight taxa. Assessing SB as a sum of behaviours and using a previous day recall were the most promising features of existing tools. Accuracy was poor for all existing tools, with underestimation and overestimation of SB. There was a lack of evidence about sensitivity to change. Despite the limited evidence, mapping existing SB tools onto the TASST framework has enabled informed recommendations to be made about the most promising features for a surveillance tool, identified aspects on which future research and development of SB surveillance tools should focus. International prospective register of systematic reviews (PROSPERO)/CRD42014009851

  14. Accuracy Evaluation of 19 Blood Glucose Monitoring Systems Manufactured in the Asia-Pacific Region: A Multicenter Study.

    PubMed

    Yu-Fei, Wang; Wei-Ping, Jia; Ming-Hsun, Wu; Miao-O, Chien; Ming-Chang, Hsieh; Chi-Pin, Wang; Ming-Shih, Lee

    2017-09-01

System accuracy of current blood glucose monitors (BGMs) on the market has already been evaluated extensively, yet these evaluations have mostly focused on European and North American manufacturers. Data on BGMs manufactured in the Asia-Pacific region remain to be established. In this study, we sought to assess the accuracy performance of 19 BGMs manufactured in the Asia-Pacific region. A total of 19 BGMs were obtained from local pharmacies in China. The study was conducted at three hospitals located in the Asia-Pacific region. Measurement results of each system were compared with results of the reference instrument (YSI 2300 PLUS Glucose Analyzer), and accuracy evaluation was performed in accordance with the ISO 15197:2003 and updated 2015 guidelines. Radar plots, a new method, are described herein to visualize the analytical performance of the 19 BGMs evaluated. The consensus error grid is a tool for evaluating the clinical significance of the results. The 19 BGMs showed conformity rates between 83.5% and 100.0% within the ISO 15197:2003 error limits, and between 71.3% and 100.0% within the EN ISO 15197:2015 (ISO 15197:2013) error limits. Of the 19 BGMs evaluated, 12 met the minimal accuracy requirement of the ISO 15197:2003 standard, whereas only 4 met the tighter EN ISO 15197:2015 (ISO 15197:2013) requirements. Accuracy evaluation of BGMs should be performed regularly to maximize patient safety.
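The per-measurement check behind such conformity rates can be sketched in mg/dL; the thresholds below follow the EN ISO 15197:2015 (ISO 15197:2013) accuracy limits as commonly summarized, and the example data are illustrative, not the study's:

```python
def within_iso_2013(meter, reference):
    """One result's conformity to the EN ISO 15197:2015 (ISO 15197:2013)
    accuracy limits, in mg/dL: within +/-15 mg/dL when the reference is
    below 100 mg/dL, otherwise within +/-15% of the reference."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def conformity_rate(pairs, criterion=within_iso_2013):
    """Fraction of (meter, reference) pairs inside the error limits;
    the standard requires at least 95% of results to conform."""
    hits = sum(criterion(m, r) for m, r in pairs)
    return hits / len(pairs)

# Hypothetical meter/reference pairs in mg/dL
rate = conformity_rate([(110, 100), (120, 100), (80, 90), (200, 180)])
```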

  15. Temporal bone borehole accuracy for cochlear implantation influenced by drilling strategy: an in vitro study.

    PubMed

    Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias

    2014-11-01

    Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.

  16. Air traffic control surveillance accuracy and update rate study

    NASA Technical Reports Server (NTRS)

    Craigie, J. H.; Morrison, D. D.; Zipper, I.

    1973-01-01

    The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.

  17. The Science of and Advanced Technology for Cost-Effective Manufacture of High Precision Engineering Products. Volume 4. Thermal Effects on the Accuracy of Numerically Controlled Machine Tools.

    DTIC Science & Technology

    1985-10-01

Report 83K0385, Final Report, Vol. 4: Thermal Effects on the Accuracy of Numerically Controlled Machine Tools. Prepared by Raghunath Venugopal and M. M. Barash, October 1985.

  18. Validating the Accuracy of Sighten's Automated Shading Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solar companies - including installers, financiers, and distributors - leverage Sighten software to deliver accurate shading calculations and solar proposals. Sighten recently partnered with Google Project Sunroof to provide automated remote shading analysis directly within the Sighten platform. The National Renewable Energy Laboratory (NREL), in partnership with Sighten, independently verified the accuracy of Sighten's remote-shading solar access values (SAVs) on an annual basis for locations in Los Angeles, California, and Denver, Colorado.

  19. Spinal intra-operative three-dimensional navigation with infra-red tool tracking: correlation between clinical and absolute engineering accuracy

    NASA Astrophysics Data System (ADS)

    Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2mm grade vs. the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, and significantly greater among radiologists than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if performed by multiple radiologist raters.

  20. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs must be unaltered and authentic, capture context-relevant images, and meet certain minimum requirements for image sharpness and information density; beyond this, color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person; as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that, of the approaches tested in this study, true image colors can be captured best and most realistically with the SpyderCheckr technical calibration tool for digital cameras. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).

  1. Pedicle Screw Insertion Accuracy Using O-Arm, Robotic Guidance, or Freehand Technique: A Comparative Study.

    PubMed

    Laudato, Pietro Aniello; Pierzchala, Katarzyna; Schizas, Constantin

    2018-03-15

    A retrospective radiological study. The aim of this study was to evaluate the accuracy of pedicle screw insertion using O-arm navigation, robotic assistance, or a freehand fluoroscopic technique. Pedicle screw insertion using either O-arm navigation or robotic devices is gaining popularity. Although several studies are available evaluating each of those techniques separately, no direct comparison has been attempted. Eighty-four patients undergoing implantation of 569 lumbar and thoracic screws were divided into three groups. Eleven patients (64 screws) had screws inserted using robotic assistance, 25 patients (191 screws) using the O-arm, while 48 patients (314 screws) had screws inserted using lateral fluoroscopy in a freehand technique. A single experienced spine surgeon assisted by a spinal fellow performed all procedures. Screw placement accuracy was assessed by two independent observers on postoperative computed tomography (CT) according to the A to D Rampersaud criteria. No statistically significant difference was noted between the three groups. Overall, 70.4% of screws in the freehand group, 69.6% in the O-arm group, and 78.8% in the robotic group were placed completely within the pedicle margins (grade A) (P > 0.05). Misplaced screws (grades C and D) accounted for 6.4% in the freehand group, 4.2% in the O-arm group, and 4.7% in the robotic group (P > 0.05). The spinal fellow inserted screws with the same accuracy as the senior surgeon (P > 0.05). The advent of new technologies does not appear to alter the accuracy of screw placement in our setting. Under supervision, spinal fellows might perform as well as experienced surgeons using new tools. The lack of difference in accuracy does not imply that the above-mentioned techniques have no added advantages. Other issues, such as surgeon/patient radiation, fiddle factor, and teaching suitability, outside the scope of our present study, need further assessment. Level of Evidence: 3.

  2. A Promising Tool to Achieve Chemical Accuracy for Density Functional Theory Calculations on Y-NO Homolysis Bond Dissociation Energies

    PubMed Central

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended for minimizing the computational cost and expanding the applications to large molecular systems. Further extrapolation tests were performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules. PMID:22942689
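Chemical accuracy in this record is judged by the mean absolute deviation (MAD) from reference values. A minimal illustration of the metric itself (the numeric values below are made up, not data from the study):

```python
def mean_absolute_deviation(predicted, reference):
    """Mean absolute deviation (MAD) between predicted and reference
    values; chemical accuracy corresponds to MAD <= 1 kcal/mol."""
    if len(predicted) != len(reference):
        raise ValueError("length mismatch")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Illustrative BDE values in kcal/mol (hypothetical numbers)
dft = [35.2, 28.9, 41.7]
expt = [34.8, 29.3, 41.5]
mad = mean_absolute_deviation(dft, expt)
```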

  3. Assessment of neuropsychiatric symptoms in dementia: toward improving accuracy

    PubMed Central

    Stella, Florindo

    2013-01-01

    This article discusses tools frequently used for assessing neuropsychiatric symptoms of patients with dementia, particularly Alzheimer's disease. The aims were to discuss the main tools for evaluating behavioral disturbances, and particularly the accuracy of the Neuropsychiatric Inventory – Clinician Rating Scale (NPI-C). The clinical approach to and diagnosis of neuropsychiatric syndromes in dementia require suitable accuracy. Advances in the recognition and early accurate diagnosis of psychopathological symptoms help guide appropriate pharmacological and non-pharmacological interventions. In addition, recommended standardized and validated measurements contribute to both scientific research and clinical practice. Emotional distress, caregiver burden, and cognitive impairment, often experienced by elderly caregivers, may affect the quality of caregiver reports. The clinician rating approach helps attenuate these misinterpretations. In this scenario, the NPI-C is a promising and versatile tool for assessing neuropsychiatric syndromes in dementia, offering good accuracy and high reliability, based mainly on the diagnostic impression of the clinician. This tool supports two strategies: a comprehensive assessment of neuropsychiatric symptoms in dementia, or the investigation of specific psychopathological syndromes such as agitation, depression, anxiety, apathy, sleep disorders, and aberrant motor disorders, among others. PMID:29213846

  4. Improving mass measurement accuracy in mass spectrometry based proteomics by combining open source tools for chromatographic alignment and internal calibration.

    PubMed

    Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M

    2009-05-02

    Accurate mass determination enhances peptide identification in mass spectrometry based proteomics. We here describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second software, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.

  5. Test Accuracy of Informant-Based Cognitive Screening Tests for Diagnosis of Dementia and Multidomain Cognitive Impairment in Stroke.

    PubMed

    McGovern, Aine; Pendlebury, Sarah T; Mishra, Nishant K; Fan, Yuhua; Quinn, Terence J

    2016-02-01

    Poststroke cognitive assessment can be performed using standardized questionnaires designed for family or caregivers. We sought to describe the test accuracy of such informant-based assessments for diagnosis of dementia/multidomain cognitive impairment in stroke. We performed a systematic review using a sensitive search strategy across multidisciplinary electronic databases. We created summary test accuracy metrics and described reporting and quality using STARDdem and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tools, respectively. From 1432 titles, we included 11 studies. Ten papers used the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Four studies described IQCODE for diagnosis of poststroke dementia (n=1197); summary sensitivity: 0.81 (95% confidence interval, 0.60-0.93); summary specificity: 0.83 (95% confidence interval, 0.64-0.93). Five studies described IQCODE as a tool for predicting future dementia (n=837); summary sensitivity: 0.60 (95% confidence interval, 0.32-0.83); summary specificity: 0.97 (95% confidence interval, 0.70-1.00). All papers had issues with at least 1 aspect of study reporting or quality. There is a limited literature on informant cognitive assessments in stroke. IQCODE as a diagnostic tool has test properties similar to other screening tools; IQCODE as a prognostic tool is specific but insensitive. We found no papers describing test accuracy of informant tests for diagnosis of prestroke cognitive decline, few papers on poststroke dementia, and all included papers had issues with potential bias. © 2015 American Heart Association, Inc.

  6. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  7. Accuracy of a Screening Tool for Early Identification of Language Impairment

    ERIC Educational Resources Information Center

    Uilenburg, Noëlle; Wiefferink, Karin; Verkerk, Paul; van Denderen, Margot; van Schie, Carla; Oudesluys-Murphy, Ann-Marie

    2018-01-01

    Purpose: A screening tool called the "VTO Language Screening Instrument" (VTO-LSI) was developed to enable more uniform and earlier detection of language impairment. This report, consisting of 2 retrospective studies, focuses on the effects of using the VTO-LSI compared to regular detection procedures. Method: Study 1 retrospectively…

  8. Diagnostic accuracy of self-administered urine glucose test strips as a diabetes screening tool in a low-resource setting in Cambodia.

    PubMed

    Storey, Helen L; van Pelt, Maurits H; Bun, Socheath; Daily, Frances; Neogi, Tina; Thompson, Matthew; McGuire, Helen; Weigl, Bernhard H

    2018-03-22

    Screening for diabetes in low-resource countries is a growing challenge, necessitating tests that are resource and context appropriate. The aim of this study was to determine the diagnostic accuracy of a self-administered urine glucose test strip compared with alternative diabetes screening tools in a low-resource setting of Cambodia. Prospective cross-sectional study. Members of the Borey Santepheap Community in Cambodia (Phnom Penh Municipality, District Dangkao, Commune Chom Chao). All households on randomly selected streets were invited to participate, and adults at least 18 years of age living in the study area were eligible for inclusion. The accuracy of self-administered urine glucose test strip positivity, hemoglobin A1c (HbA1c) >6.5% and capillary fasting blood glucose (cFBG) measurement ≥126 mg/dL were assessed against a composite reference standard of cFBG measurement ≥200 mg/dL or venous blood glucose 2 hours after oral glucose tolerance test (OGTT) ≥200 mg/dL. Of the 1289 participants, 234 (18%) had diabetes based on either cFBG measurement (74, 32%) or the OGTT (160, 68%). The urine glucose test strip was 14% sensitive and 99% specific and failed to identify 201 individuals with diabetes while falsely identifying 7 without diabetes. Those missed by the urine glucose test strip had lower venous fasting blood glucose, lower venous blood glucose 2 hours after OGTT and lower HbA1c compared with those correctly diagnosed. Low-cost, easy-to-use diabetes tools are essential for low-resource communities with minimal infrastructure. While the urine glucose test strip may identify persons with diabetes who might otherwise go undiagnosed in these settings, its poor sensitivity cannot be ignored. The massive burden of diabetes in low-resource settings demands improvements in test technologies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted.
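The reported sensitivity and specificity follow directly from a 2x2 table: of the 234 participants with diabetes, the strip detected 234 − 201 = 33, and it falsely flagged 7 of the 1055 participants without diabetes. A minimal sketch using the standard definitions (counts reconstructed from the abstract, not the study's analysis code):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard 2x2-table definitions:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 33 true positives, 201 false negatives, 7 false positives,
# 1048 true negatives (1289 - 234 - 7)
sens, spec = sensitivity_specificity(tp=33, fn=201, tn=1048, fp=7)
```

Rounded to whole percentages, this reproduces the 14% sensitivity and 99% specificity quoted in the abstract.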

  9. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracy on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
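Convergence orders of the kind discussed in this record are conventionally estimated from the errors on two grids with a known refinement ratio. A small sketch of that standard estimate (a generic formula, not the paper's DS-test implementation):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence order p, assuming error ~ C * h**p on two
    grids whose characteristic mesh sizes differ by the refinement ratio."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Second-order behaviour: halving h cuts the error by a factor of 4
p = observed_order(4.0e-3, 1.0e-3)
```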

  10. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration

    PubMed Central

    Cohen, Jérémie F; Korevaar, Daniël A; Altman, Douglas G; Bruns, David E; Gatsonis, Constantine A; Hooft, Lotty; Irwig, Les; Levine, Deborah; Reitsma, Johannes B; de Vet, Henrica C W; Bossuyt, Patrick M M

    2016-01-01

    Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports. PMID:28137831

  11. Amplitude Integrated Electroencephalography Compared With Conventional Video EEG for Neonatal Seizure Detection: A Diagnostic Accuracy Study.

    PubMed

    Rakshasbhuvankar, Abhijeet; Rao, Shripada; Palumbo, Linda; Ghosh, Soumya; Nagarajan, Lakshmi

    2017-08-01

    This diagnostic accuracy study compared the accuracy of seizure detection by amplitude-integrated electroencephalography with the criterion standard conventional video EEG in term and near-term infants at risk of seizures. Simultaneous recording of amplitude-integrated EEG (2-channel amplitude-integrated EEG with raw trace) and video EEG was done for 24 hours for each infant. Amplitude-integrated EEG was interpreted by a neonatologist; video EEG was interpreted by a neurologist independently. Thirty-five infants were included in the analysis. In the 7 infants with seizures on video EEG, there were 169 seizure episodes on video EEG, of which only 57 were identified by amplitude-integrated EEG. Amplitude-integrated EEG had a sensitivity of 33.7% for individual seizure detection. Amplitude-integrated EEG had an 86% sensitivity for detection of babies with seizures; however, it was nonspecific, in that 50% of infants with seizures detected by amplitude-integrated EEG did not have true seizures by video EEG. In conclusion, our study suggests that amplitude-integrated EEG is a poor screening tool for neonatal seizures.

  12. Evaluating the accuracy of the XVI dual registration tool compared with manual soft tissue matching to localise tumour volumes for post-prostatectomy patients receiving radiotherapy.

    PubMed

    Campbell, Amelia; Owen, Rebecca; Brown, Elizabeth; Pryor, David; Bernard, Anne; Lehman, Margot

    2015-08-01

    Cone beam computerised tomography (CBCT) enables soft tissue visualisation to optimise matching in the post-prostatectomy setting, but is associated with inter-observer variability. This study assessed the accuracy and consistency of automated soft tissue localisation using XVI's dual registration tool (DRT). Sixty CBCT images from ten post-prostatectomy patients were matched using: (i) the DRT and (ii) manual soft tissue registration by six radiation therapists (RTs). Shifts in the three Cartesian planes were recorded. The accuracy of the match was determined by comparing shifts to matches performed by two genitourinary radiation oncologists (ROs). A Bland-Altman method was used to assess the 95% limits of agreement (LoA). A clinical threshold of 3 mm was used to define equivalence between methods of matching. The 95% LoA between DRT-ROs in the superior/inferior, left/right and anterior/posterior directions were -2.21 to +3.18 mm, -0.77 to +0.84 mm, and -1.52 to +4.12 mm, respectively. The 95% LoA between RTs-ROs in the superior/inferior, left/right and anterior/posterior directions were -1.89 to +1.86 mm, -0.71 to +0.62 mm and -2.8 to +3.43 mm, respectively. Five DRT CBCT matches (8.33%) were outside the 3-mm threshold, all in the setting of bladder underfilling or rectal gas. The mean time for manual matching was 82 s versus 65 s for the DRT. XVI's DRT is comparable with RTs manually matching soft tissue on CBCT. The DRT can minimise RT inter-observer variability; however, involuntary bladder and rectal filling can influence the tool's accuracy, highlighting the need for RT evaluation of the DRT match. © 2015 The Royal Australian and New Zealand College of Radiologists.
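The 95% limits of agreement quoted above follow the standard Bland-Altman construction: the mean difference between methods plus or minus 1.96 times the standard deviation of the paired differences. A minimal sketch with made-up paired shifts (not the study's data):

```python
import statistics

def bland_altman_loa(method_a, method_b):
    """Bland-Altman 95% limits of agreement:
    mean difference +/- 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)
    return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Hypothetical paired couch shifts (mm) from two matching methods
drt_shifts = [1.2, -0.5, 0.8, 2.1, 0.0]
manual_shifts = [1.0, -0.3, 0.5, 1.8, 0.4]
lo, hi = bland_altman_loa(drt_shifts, manual_shifts)
```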

  13. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background: Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods: This study used an instrumented gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings: Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation: The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions: Mean accuracies obtained under the gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  14. Overinterpretation and misreporting of diagnostic accuracy studies: evidence of "spin".

    PubMed

    Ochodo, Eleanor A; de Haan, Margriet C; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M; Leeflang, Mariska M G

    2013-05-01

    To estimate the frequency of distorted presentation and overinterpretation of results in diagnostic accuracy studies. MEDLINE was searched for diagnostic accuracy studies published between January and June 2010 in journals with an impact factor of 4 or higher. Articles included were primary studies of the accuracy of one or more tests in which the results were compared with a clinical reference standard. Two authors scored each article independently by using a pretested data-extraction form to identify actual overinterpretation and practices that facilitate overinterpretation, such as incomplete reporting of study methods or the use of inappropriate methods (potential overinterpretation). The frequency of overinterpretation was estimated in all studies and in a subgroup of imaging studies. Of the 126 articles, 39 (31%; 95% confidence interval [CI]: 23, 39) contained a form of actual overinterpretation, including 29 (23%; 95% CI: 16, 30) with an overly optimistic abstract, 10 (8%; 95% CI: 3%, 13%) with a discrepancy between the study aim and conclusion, and eight with conclusions based on selected subgroups. In our analysis of potential overinterpretation, authors of 89% (95% CI: 83%, 94%) of the studies did not include a sample size calculation, 88% (95% CI: 82%, 94%) did not state a test hypothesis, and 57% (95% CI: 48%, 66%) did not report CIs of accuracy measurements. In 43% (95% CI: 34%, 52%) of studies, authors were unclear about the intended role of the test, and in 3% (95% CI: 0%, 6%) they used inappropriate statistical tests. A subgroup analysis of imaging studies showed that 16 (30%; 95% CI: 17%, 43%) and 53 (100%; 95% CI: 92%, 100%) contained forms of actual and potential overinterpretation, respectively. Overinterpretation and misreporting of results in diagnostic accuracy studies is frequent in journals with high impact factors. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12120527/-/DC1. © RSNA, 2013.
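Intervals such as "39 of 126 (31%; 95% CI: 23, 39)" are consistent with the standard normal-approximation (Wald) interval for a proportion. A short sketch reproduces the first one (the standard formula, not necessarily the authors' exact method):

```python
import math

def wald_ci(events, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval
    for a proportion events / n."""
    p = events / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

# 39 of 126 articles contained a form of actual overinterpretation
lo, hi = wald_ci(39, 126)
```

Rounded to whole percentages, the result matches the reported interval of 23% to 39%.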

  15. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    PubMed

    Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F

    2015-12-01

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.

  16. The microcomputer scientific software series 4: testing prediction accuracy.

    Treesearch

    H. Michael Rauscher

    1986-01-01

    A computer program, ATEST, is described in this combination user's guide / programmer's manual. ATEST provides users with an efficient and convenient tool to test the accuracy of predictors. As input ATEST requires observed-predicted data pairs. The output reports the two components of accuracy, bias and precision.
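The two accuracy components ATEST reports can be computed from observed-predicted pairs as the mean error (bias) and the spread of the errors (precision). A minimal sketch of that decomposition (one common convention; ATEST's exact definitions may differ):

```python
def bias_and_precision(observed, predicted):
    """Split prediction error into bias (mean error) and precision
    (sample standard deviation of the errors)."""
    errors = [o - p for o, p in zip(observed, predicted)]
    n = len(errors)
    bias = sum(errors) / n
    var = sum((e - bias) ** 2 for e in errors) / (n - 1)
    return bias, var ** 0.5

# Illustrative observed/predicted pairs (not data from the paper)
bias, prec = bias_and_precision([10.0, 12.0, 14.0], [9.0, 12.0, 15.0])
```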

  17. Students as Toolmakers: Refining the Results in the Accuracy and Precision of a Trigonometric Activity

    ERIC Educational Resources Information Center

    Igoe, D. P.; Parisi, A. V.; Wagner, S.

    2017-01-01

    Smartphones used as tools provide opportunities for the teaching of the concepts of accuracy and precision and the mathematical concept of arctan. The accuracy and precision of a trigonometric experiment using entirely mechanical tools is compared to one using electronic tools, such as a smartphone clinometer application and a laser pointer. This…

  18. Tool wear compensation scheme for DTM

    NASA Astrophysics Data System (ADS)

    Sandeep, K.; Rao, U. S.; Balasubramaniam, R.

    2018-04-01

    This paper aims to monitor tool wear in diamond turn machining (DTM), assess the effects of tool wear on the accuracy of the machined component, and develop a compensation methodology to enhance the size and shape accuracy of a hemispherical cup. A MATLAB program is used to find the change in the centre and radius of the tool with increasing tool wear. In practice, x-offsets are readjusted by the DTM operator to obtain the desired accuracy in the cup, and the results of the theoretical model show that the changes in radius and z-offset are insignificant, whereas the x-offset is proportional to the tool wear, which is the assumption made when resetting the tool offset. Since the tool profile could not be measured, the program was modelled on the cup profile data: if no error is introduced by the slides and spindle of the DTM, any wear in the tool will be reflected in the cup profile. As the cup data contain surface roughness, random noise similar to surface waviness is added. It is observed that surface roughness affects the estimated centre and radius, but the pattern of centre shift with increasing tool wear remains similar to the ideal condition, i.e. without surface roughness.
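Tracking the shift of the tool-nose centre and radius from cup profile data amounts to fitting a circle to measured points. A hedged sketch using the standard Kasa least-squares circle fit (an illustration of the general technique, not the paper's MATLAB program):

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: estimate centre (a, b) and
    radius r by solving the normal equations of the linearised model
    x^2 + y^2 = u*x + v*y + c, with u = 2a, v = 2b."""
    n = float(len(points))
    sx = sy = sz = sxx = syy = sxy = sxz = syz = 0.0
    for x, y in points:
        z = x * x + y * y
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z
    # 3x3 normal equations, solved by Gaussian elimination
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            rhs[j] -= f * rhs[i]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (rhs[i] - sum(A[i][k] * sol[k] for k in range(i + 1, 3))) / A[i][i]
    a, b = sol[0] / 2.0, sol[1] / 2.0
    return a, b, math.sqrt(sol[2] + a * a + b * b)

# Synthetic check: noise-free points on a circle, centre (1, 2), radius 3
pts = [(1 + 3 * math.cos(t), 2 + 3 * math.sin(t)) for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
a, b, r = fit_circle(pts)
```

With noisy profile data, the fitted centre drifts as the tool wears, which is the quantity the compensation scheme monitors.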

  19. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review.

    PubMed

    Zeng, Xiantao; Zhang, Yonggang; Kwong, Joey S W; Zhang, Chao; Li, Sheng; Sun, Feng; Niu, Yuming; Du, Liang

    2015-02-01

    To systematically review the methodological assessment tools for pre-clinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline. We searched PubMed, the Cochrane Handbook for Systematic Reviews of Interventions, Joanna Briggs Institute (JBI) Reviewers Manual, Centre for Reviews and Dissemination, Critical Appraisal Skills Programme (CASP), Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Clinical Excellence (NICE) up to May 20th, 2014. Two authors selected studies and extracted data; quantitative analysis was performed to summarize the characteristics of included tools. We included a total of 21 assessment tools for analysis. A number of tools were developed by academic organizations, and some were developed by only a small group of researchers. The JBI developed the highest number of methodological assessment tools, with CASP coming second. Tools for assessing the methodological quality of randomized controlled studies were most abundant. The Cochrane Collaboration's tool for assessing risk of bias is the best available tool for assessing RCTs. For cohort and case-control studies, we recommend the use of the Newcastle-Ottawa Scale. The Methodological Index for Non-Randomized Studies (MINORS) is an excellent tool for assessing non-randomized interventional studies, and the Agency for Healthcare Research and Quality (AHRQ) methodology checklist is applicable for cross-sectional studies. For diagnostic accuracy test studies, the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool is recommended; the SYstematic Review Centre for Laboratory animal Experimentation (SYRCLE) risk of bias tool is available for assessing animal studies; Assessment of Multiple Systematic Reviews (AMSTAR) is a measurement tool for systematic reviews/meta-analyses; an 18-item tool has been developed for appraising case series studies, and the Appraisal of Guidelines, Research and Evaluation (AGREE)

  20. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  1. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    PubMed

    Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F

    2015-12-01

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies. © 2015 American Association for Clinical Chemistry.

  2. Indexing of Diagnostic Accuracy Studies in MEDLINE and EMBASE

    PubMed Central

    Wilczynski, Nancy L.; Haynes, R. Brian

    2007-01-01

    Background: STAndards for Reporting of Diagnostic Accuracy (STARD) were published in 2003 and endorsed by some journals but not others. Objective: To determine whether the quality of indexing of diagnostic accuracy studies in MEDLINE and EMBASE has improved since the STARD statement was published. Design: Evaluate the change in the mean number of “accurate index terms” assigned to diagnostic accuracy studies, comparing STARD (endorsing) and non-STARD (non-endorsing) journals, for 2 years before and after STARD publication. Results: In MEDLINE, no differences in indexing quality were found for STARD and non-STARD journals before or after the STARD statement was published in 2003. In EMBASE, indexing in STARD journals improved compared with non-STARD journals (p = 0.02). However, articles in STARD journals had half the number of accurate indexing terms as articles in non-STARD journals, both before and after STARD statement publication (p < 0.001). PMID:18693947

  3. Indexing of diagnostic accuracy studies in MEDLINE and EMBASE.

    PubMed

    Wilczynski, Nancy L; Haynes, R Brian

    2007-10-11

    STAndards for Reporting of Diagnostic Accuracy (STARD) were published in 2003 and endorsed by some journals but not others. To determine whether the quality of indexing of diagnostic accuracy studies in MEDLINE and EMBASE has improved since the STARD statement was published. Evaluate the change in the mean number of "accurate index terms" assigned to diagnostic accuracy studies, comparing STARD (endorsing) and non-STARD (non-endorsing) journals, for 2 years before and after STARD publication. In MEDLINE, no differences in indexing quality were found for STARD and non-STARD journals before or after the STARD statement was published in 2003. In EMBASE, indexing in STARD journals improved compared with non-STARD journals (p = 0.02). However, articles in STARD journals had half the number of accurate indexing terms as articles in non-STARD journals, both before and after STARD statement publication (p < 0.001).

  4. Signal Detection Theory as a Tool for Successful Student Selection

    ERIC Educational Resources Information Center

    van Ooijen-van der Linden, Linda; van der Smagt, Maarten J.; Woertman, Liesbeth; te Pas, Susan F.

    2017-01-01

    Prediction accuracy of academic achievement for admission purposes requires adequate "sensitivity" and "specificity" of admission tools, yet the available information on the validity and predictive power of admission tools is largely based on studies using correlational and regression statistics. The goal of this study was to…

  5. High-accuracy mass spectrometry for fundamental studies.

    PubMed

    Kluge, H-Jürgen

    2010-01-01

    Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.

  6. An observational study of the accuracy and completeness of an anesthesia information management system: recommendations for documentation system changes.

    PubMed

    Wilbanks, Bryan A; Moss, Jacqueline A; Berner, Eta S

    2013-08-01

    Anesthesia information management systems must often be tailored to fit the environment in which they are implemented. Extensive customization necessitates that systems be analyzed for both accuracy and completeness of documentation design to ensure that the final record is a true representation of practice. The purpose of this study was to determine the accuracy of a recently installed system in the capture of key perianesthesia data. This study used an observational design and was conducted using a convenience sample of nurse anesthetists. Observational data on the nurse anesthetists' delivery of anesthesia care were collected on a touch-screen tablet computer using a customized observational data-collection tool built in an Access database. A questionnaire was also administered to these nurse anesthetists to assess perceived accuracy, completeness, and satisfaction with the electronic documentation system. The major sources of data not documented in the system were anesthesiologist presence (20%) and placement of intravenous lines (20%). The major sources of inaccuracies in documentation were gas flow rates (45%), medication administration times (30%), and documentation of neuromuscular function testing (20%); all of these inaccuracies were related to the use of charting templates that had not been altered to reflect the actual interventions performed.

  7. Meta-analysis diagnostic accuracy of SNP-based pathogenicity detection tools: a case of UGT1A1 gene mutations.

    PubMed

    Galehdari, Hamid; Saki, Najmaldin; Mohammadi-Asl, Javad; Rahim, Fakher

    2013-01-01

    Crigler-Najjar syndrome (CNS) types I and II are usually inherited as autosomal recessive conditions resulting from mutations in the UGT1A1 gene. The main objective of the present review is to summarize all available evidence on the accuracy of SNP-based pathogenicity detection tools, compared against published clinical results, for predicting disease-causing nsSNPs, using a prediction performance method. A comprehensive search was performed to find all mutations related to CNS. Database searches included dbSNP, SNPdbe, HGMD, Swissvar, Ensembl, and OMIM. All mutations related to CNS were extracted. Pathogenicity prediction was done using SNP-based pathogenicity detection tools, including SIFT, PHD-SNP, PolyPhen2, fathmm, Provean, and Mutpred. Overall, 59 different SNPs corresponding to missense mutations in the UGT1A1 gene were reviewed. Comparing diagnostic odds ratios (ORs), PolyPhen2 and Mutpred both had the highest value, 4.983 (95% CI: 1.24 - 20.02), followed by SIFT (diagnostic OR: 3.25, 95% CI: 1.07 - 9.83). The highest MCC among the tools belonged to SIFT (34.19%), followed by Provean, PolyPhen2, and Mutpred (29.99%, 29.89%, and 29.89%, respectively). Likewise, the highest accuracy (ACC) belonged to SIFT (62.71%), followed by PolyPhen2 and Mutpred (61.02% for both). Our results suggest that some of the well-established SNP-based pathogenicity detection tools can appropriately reflect the role of a disease-associated SNP in both local and global structures.
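
The summary metrics this review compares (diagnostic odds ratio, MCC, accuracy) all derive from a 2x2 confusion matrix of predicted pathogenicity against the clinical result. A minimal sketch with hypothetical counts, not the review's data:

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic odds ratio (DOR), Matthews correlation coefficient (MCC),
    and accuracy (ACC) from a 2x2 confusion matrix: rows = predicted
    pathogenic/benign, columns = clinically pathogenic/benign."""
    dor = (tp * tn) / (fp * fn)  # odds of a positive test in disease vs. no disease
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    acc = (tp + tn) / (tp + fp + fn + tn)
    return dor, mcc, acc

# Hypothetical counts for one prediction tool:
dor, mcc, acc = diagnostic_metrics(tp=30, fp=10, fn=15, tn=20)
# dor = 4.0; mcc ~ 0.33; acc ~ 0.67
```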

  8. Identification of facilitators and barriers to residents' use of a clinical reasoning tool.

    PubMed

    DiNardo, Deborah; Tilstra, Sarah; McNeil, Melissa; Follansbee, William; Zimmer, Shanta; Farris, Coreen; Barnato, Amber E

    2018-03-28

    While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among post-graduate 3 (PGY-3) residents at a single internal medical residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy of the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators and barriers to use of the tool. Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. The average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable but did not recognize its utility for use with simple cases, suggesting the presence of overconfidence bias. A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.

  9. A Flexure-Based Tool Holder for Sub-µm Positioning of a Single Point Cutting Tool on a Four-axis Lathe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bono, M J; Hibbard, R L

    2005-12-05

    A tool holder was designed to facilitate the machining of precision meso-scale components with complex three-dimensional shapes with sub-µm accuracy on a four-axis lathe. A four-axis lathe incorporates a rotary table that allows the cutting tool to swivel with respect to the workpiece to enable the machining of complex workpiece forms, and accurately machining complex meso-scale parts often requires that the cutting tool be aligned precisely along the axis of rotation of the rotary table. The tool holder designed in this study has greatly simplified the process of setting the tool in the correct location with sub-µm precision. The tool holder adjusts the tool position using flexures that were designed using finite element analyses. Two flexures adjust the lateral position of the tool to align the center of the nose of the tool with the axis of rotation of the B-axis, and another flexure adjusts the height of the tool. The flexures are driven by manual micrometer adjusters, each of which provides a minimum increment of motion of 20 nm. This tool holder has simplified the process of setting a tool with sub-µm accuracy, and it has significantly reduced the time required to set a tool.

  10. The Effect of Study Design Biases on the Diagnostic Accuracy of Magnetic Resonance Imaging to Detect Silicone Breast Implant Ruptures: A Meta-Analysis

    PubMed Central

    Song, Jae W.; Kim, Hyungjin Myra; Bellfi, Lillian T.; Chung, Kevin C.

    2010-01-01

    Background All silicone breast implant recipients are recommended by the US Food and Drug Administration to undergo serial screening to detect implant rupture with magnetic resonance imaging (MRI). We performed a systematic review of the literature to assess the quality of diagnostic accuracy studies utilizing MRI or ultrasound to detect silicone breast implant rupture and conducted a meta-analysis to examine the effect of study design biases on the estimation of MRI diagnostic accuracy measures. Method Studies investigating the diagnostic accuracy of MRI and ultrasound in evaluating ruptured silicone breast implants were identified using MEDLINE, EMBASE, ISI Web of Science, and Cochrane library databases. Two reviewers independently screened potential studies for inclusion and extracted data. Study design biases were assessed using the QUADAS tool and the STARD checklist. Meta-analyses estimated the influence of biases on diagnostic odds ratios. Results Among 1175 identified articles, 21 met the inclusion criteria. Most studies using MRI (n= 10 of 16) and ultrasound (n=10 of 13) examined symptomatic subjects. Meta-analyses revealed that MRI studies evaluating symptomatic subjects had 14-fold higher diagnostic accuracy estimates compared to studies using an asymptomatic sample (RDOR 13.8; 95% CI 1.83–104.6) and 2-fold higher diagnostic accuracy estimates compared to studies using a screening sample (RDOR 1.89; 95% CI 0.05–75.7). Conclusion Many of the published studies utilizing MRI or ultrasound to detect silicone breast implant rupture are flawed with methodological biases. These methodological shortcomings may result in overestimated MRI diagnostic accuracy measures and should be interpreted with caution when applying the data to a screening population. PMID:21364405

  11. Left centro-parieto-temporal response to tool-gesture incongruity: an ERP study.

    PubMed

    Chang, Yi-Tzu; Chen, Hsiang-Yu; Huang, Yuan-Chieh; Shih, Wan-Yu; Chan, Hsiao-Lung; Wu, Ping-Yi; Meng, Ling-Fu; Chen, Chen-Chi; Wang, Ching-I

    2018-03-13

    Action semantics have been investigated in relation to context violation but remain less examined in relation to the meaning of gestures. In the present study, we examined tool-gesture incongruity by event-related potentials (ERPs) and hypothesized that the component N400, a neural index which has been widely used in both linguistic and action semantic congruence, is significant for conditions of incongruence. Twenty participants performed a tool-gesture judgment task, in which they were asked to judge whether the tool-gesture pairs were correct or incorrect, for the purpose of conveying functional expression of the tools. Online electroencephalograms and behavioral performances (the accuracy rate and reaction time) were recorded. The ERP analysis showed a left centro-parieto-temporal N300 effect (220-360 ms) for the correct condition. However, the expected N400 (400-550 ms) could not be differentiated between correct/incorrect conditions. After 700 ms, a prominent late negative complex for the correct condition was also found in the left centro-parieto-temporal area. The neurophysiological findings indicated that the left centro-parieto-temporal area is the predominant region contributing to neural processing for tool-gesture incongruity in right-handers. The temporal dynamics of tool-gesture incongruity are: (1) firstly enhanced for recognizable tool-gesture using patterns, (2) and require a secondary reanalysis for further examination of the highly complicated visual structures of gestures and tools. The evidence from the tool-gesture incongruity indicated altered brain activities attributable to the N400 in relation to lexical and action semantics. The online interaction between gesture and tool processing provided minimal context violation or anticipation effect, which may explain the missing N400.

  12. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

    Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights into the issues and solutions toward achieving high-accuracy passive RFID localization.
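
The multilateration step described above, locating the reader from reader-tag distance estimates, can be sketched as a linearized least-squares solve. The tag positions and distances below are invented; real RSSI-derived distances would carry the channel impairments the paper studies.

```python
def multilaterate(tags, dists):
    """Estimate a 2D reader position from distances to 3+ reference tags.
    Linearize by subtracting the first range equation from the others,
    then solve the resulting 2x2 normal equations (least squares)."""
    x0, y0 = tags[0]
    d0 = dists[0]
    # Each remaining tag i gives a linear equation:
    #   2*(xi-x0)*x + 2*(yi-y0)*y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    a, b = [], []
    for (xi, yi), di in zip(tags[1:], dists[1:]):
        a.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0 * d0 - di * di + xi * xi - x0 * x0 + yi * yi - y0 * y0)
    # Normal equations A^T A p = A^T b for the 2 unknowns (x, y).
    s11 = sum(r[0] * r[0] for r in a)
    s12 = sum(r[0] * r[1] for r in a)
    s22 = sum(r[1] * r[1] for r in a)
    t1 = sum(r[0] * v for r, v in zip(a, b))
    t2 = sum(r[1] * v for r, v in zip(a, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Three reference tags and exact (noise-free) distances to a reader at (3, 4):
pos = multilaterate([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                    [5.0, 65 ** 0.5, 45 ** 0.5])
# recovers the reader position ~ (3, 4); distorted RSSI-derived
# distances would bias this estimate, motivating channel compensation
```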

  13. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  14. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
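
The time-trend test described above, a weighted linear regression of the summary accuracy estimate against publication rank, can be sketched as follows; the data are illustrative, not from the review.

```python
def weighted_slope(x, y, w):
    """Slope of a weighted least-squares line y = a + b*x, e.g. summary
    sensitivity (y) against publication rank (x), with weights w
    (in practice, inverse-variance weights from each cumulative step)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

# Cumulative summary sensitivity drifting down as studies accumulate:
ranks = [1, 2, 3, 4]
sens = [0.90, 0.88, 0.86, 0.84]
weights = [1.0, 1.0, 1.0, 1.0]  # equal weights for illustration
slope = weighted_slope(ranks, sens, weights)
# slope of -0.02 per publication rank, a downward time trend
```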

  15. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency with which sample size calculations were reported, and the sample sizes themselves, were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  16. The automatic back-check mechanism of mask tooling database and automatic transmission of mask tooling data

    NASA Astrophysics Data System (ADS)

    Xu, Zhe; Peng, M. G.; Tu, Lin Hsin; Lee, Cedric; Lin, J. K.; Jan, Jian Feng; Yin, Alb; Wang, Pei

    2006-10-01

    Nowadays, most foundries pay more and more attention to reducing CD width. Although lithography technologies have developed drastically, mask data accuracy is a bigger challenge than before. Moreover, mask (reticle) prices have also risen drastically, so data accuracy requires more special treatment. We have developed a system called eFDMS to guarantee mask data accuracy. eFDMS performs automatic back-checks of the mask tooling database and automatic transmission of mask tooling data. We integrate our own eFDMS system with the standard mask tooling system K2 so that the upstream and downstream processes of the K2 mask tooling main body proceed smoothly and correctly, as anticipated. Competition in the IC marketplace is gradually shifting from high-tech processes to lower prices, so controlling product cost plays an increasingly significant role for foundries; before that competition intensifies, cost-reduction work should be prepared ahead of time.

  17. Efficient strategies to find diagnostic test accuracy studies in kidney journals.

    PubMed

    Rogerson, Thomas E; Ladhani, Maleeka; Mitchell, Ruth; Craig, Jonathan C; Webster, Angela C

    2015-08-01

    Nephrologists looking for quick answers to diagnostic clinical questions in MEDLINE can use a range of published search strategies or Clinical Query limits to improve the precision of their searches. We aimed to evaluate existing search strategies for finding diagnostic test accuracy studies in nephrology journals. We assessed the accuracy of 14 search strategies for retrieving diagnostic test accuracy studies from three nephrology journals indexed in MEDLINE. Two investigators hand searched the same journals to create a reference set of diagnostic test accuracy studies to compare search strategy results against. We identified 103 diagnostic test accuracy studies, accounting for 2.1% of all studies published. The most specific search strategy was the Narrow Clinical Queries limit (sensitivity: 0.20, 95% CI 0.13-0.29; specificity: 0.99, 95% CI 0.99-0.99). Using the Narrow Clinical Queries limit, a searcher would need to screen three (95% CI 2-6) articles to find one diagnostic study. The most sensitive search strategy was van der Weijden 1999 Extended (sensitivity: 0.95; 95% CI 0.89-0.98; specificity 0.55, 95% CI 0.53-0.56) but required a searcher to screen 24 (95% CI 23-26) articles to find one diagnostic study. Bachmann 2002 was the best balanced search strategy, which was sensitive (0.88, 95% CI 0.81-0.94), but also specific (0.74, 95% CI 0.73-0.75), with a number needed to screen of 15 (95% CI 14-17). Diagnostic studies are infrequently published in nephrology journals. The addition of a strategy for diagnostic studies to a subject search strategy in MEDLINE may reduce the records needed to screen while preserving adequate search sensitivity for routine clinical use. © 2015 Asian Pacific Society of Nephrology.
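
The retrieval metrics reported above (sensitivity, specificity, and the number needed to screen to find one relevant study) follow directly from a 2x2 table of retrieved vs. relevant records. A sketch with hypothetical counts chosen to mirror the magnitude of the Narrow Clinical Queries figures:

```python
def search_performance(tp, fp, fn, tn):
    """Performance of a bibliographic search filter:
    tp = relevant records retrieved, fp = irrelevant records retrieved,
    fn = relevant records missed, tn = irrelevant records excluded."""
    sensitivity = tp / (tp + fn)          # fraction of relevant studies found
    specificity = tn / (tn + fp)          # fraction of irrelevant studies excluded
    number_needed_to_screen = (tp + fp) / tp  # records screened per relevant hit
    return sensitivity, specificity, number_needed_to_screen

# Hypothetical counts: 100 relevant studies in a 5000-record journal set,
# filter retrieves 20 of them plus 40 irrelevant records.
sens, spec, nns = search_performance(tp=20, fp=40, fn=80, tn=4860)
# sens = 0.20, spec ~ 0.99, nns = 3.0 -- a highly specific, low-yield filter
```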

  18. Accuracy of Focused Assessment with Sonography for Trauma (FAST) in Blunt Trauma Abdomen-A Prospective Study.

    PubMed

    Kumar, Subodh; Bansal, Virinder Kumar; Muduly, Dillip Kumar; Sharma, Pawan; Misra, Mahesh C; Chumber, Sunil; Singh, Saraman; Bhardwaj, D N

    2015-12-01

    Focused assessment with sonography for trauma (FAST) is a limited ultrasound examination, primarily aimed at the identification of the presence of free intraperitoneal or pericardial fluid. In the context of blunt trauma abdomen (BTA), free fluid is usually due to hemorrhage, bowel contents, or both; its detection contributes to the timely diagnosis of potentially life-threatening hemorrhage and serves as a decision-making tool to help determine the need for further evaluation or operative intervention. Fifty patients with blunt trauma abdomen were evaluated prospectively with FAST. The findings of FAST were compared with contrast-enhanced computed tomography (CECT), laparotomy, and autopsy. Any free fluid in the abdomen was presumed to be hemoperitoneum. Sonographic findings of intra-abdominal free fluid were confirmed by CECT, laparotomy, or autopsy wherever indicated. In comparison with CECT scan, FAST had a sensitivity, specificity, and accuracy of 77.27, 100, and 79.16 %, respectively, in the detection of free fluid. When compared with surgical findings, it had a sensitivity, specificity, and accuracy of 94.44, 50, and 90 %, respectively. The sensitivity of FAST was 75 % in determining free fluid in patients who died when compared with autopsy findings. Overall sensitivity, specificity, and accuracy of FAST were 80.43, 75, and 80 %, respectively, for the detection of free fluid in the abdomen. From this study, we can safely conclude that FAST is a rapid, reliable, and feasible investigation in patients with BTA, and it can be performed easily, safely, and quickly in the emergency room with a reasonable sensitivity, specificity, and accuracy. It helps in the initial triage of patients for assessing the need for urgent surgery.

  19. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  20. Resident accuracy of joint line palpation using ultrasound verification.

    PubMed

    Rho, Monica E; Chu, Samuel K; Yang, Aaron; Hameed, Farah; Lin, Cindy Yuchin; Hurh, Peter J

    2014-10-01

    To determine the accuracy of knee and acromioclavicular (AC) joint line palpation in Physical Medicine and Rehabilitation (PM&R) residents using ultrasound (US) verification. Cohort study. PM&R residency program at an academic institution. Twenty-four PM&R residents participating in a musculoskeletal US course (7 PGY-2, 8 PGY-3, and 9 PGY-4 residents). Twenty-four PM&R residents participating in a US course were asked to palpate the AC joint and lateral joint line of the knee in a female and male model before the start of the course. Once the presumed joint line was localized, the residents were asked to tape an 18-gauge, 1.5-inch, blunt-tip needle parallel to the joint line on the overlying skin. The accuracy of needle placement over the joint line was verified using US. US verification of correct needle placement over the joint line. Overall AC joint palpation accuracy was 16.7%, and knee lateral joint line palpation accuracy was 58.3%. Based on the resident level of education, using a value of P < .05, there were no statistically significant differences in the accuracy of joint line palpation. Residents in this study demonstrate poor accuracy of AC joint and lateral knee joint line identification by palpation, using US as the criterion standard for verification. There were no statistically significant differences in the accuracy rates of joint line palpation based on resident level of education. US may be a useful tool for advancing current methods of teaching the physical examination in medical education. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  1. Empirical evidence of the importance of comparative studies of diagnostic test accuracy.

    PubMed

    Takwoingi, Yemisi; Leeflang, Mariska M G; Deeks, Jonathan J

    2013-04-02

    Systematic reviews that "compare" the accuracy of 2 or more tests often include different sets of studies for each test. To investigate the availability of direct comparative studies of test accuracy and to assess whether summary estimates of accuracy differ between meta-analyses of noncomparative and comparative studies. Systematic reviews in any language from the Database of Abstracts of Reviews of Effects and the Cochrane Database of Systematic Reviews from 1994 to October 2012. 1 of 2 assessors selected reviews that evaluated at least 2 tests and identified meta-analyses that included both noncomparative studies and comparative studies. 1 of 3 assessors extracted data about review and study characteristics and test performance. 248 reviews compared test accuracy; of the 6915 studies, 2113 (31%) were comparative. Thirty-six reviews (with 52 meta-analyses) had adequate studies to compare results of noncomparative and comparative studies by using a hierarchical summary receiver-operating characteristic meta-regression model for each test comparison. In 10 meta-analyses, noncomparative studies ranked tests in the opposite order of comparative studies. A total of 25 meta-analyses showed more than a 2-fold discrepancy in the relative diagnostic odds ratio between noncomparative and comparative studies. Differences in accuracy estimates between noncomparative and comparative studies were greater than expected by chance (P < 0.001). A paucity of comparative studies limited exploration of the direction of bias. Evidence derived from noncomparative studies often differs from that derived from comparative studies. Robustly designed studies in which all patients receive all tests or are randomly assigned to receive one or the other of the tests should be more routinely undertaken and are preferred for evidence to guide test selection. National Institute for Health Research (United Kingdom).
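
    The "relative diagnostic odds ratio" used in this comparison puts two sets of studies on a single scale. A minimal sketch with illustrative numbers (the review's actual summary estimates are not reproduced here):

```python
def dor(sens, spec):
    """Diagnostic odds ratio: (sens/(1-sens)) / ((1-spec)/spec); higher means better discrimination."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Illustrative summary estimates for the same test from two study designs.
dor_comparative = dor(0.80, 0.90)       # from comparative (direct) studies
dor_noncomparative = dor(0.90, 0.95)    # from noncomparative studies

relative_dor = dor_noncomparative / dor_comparative
print(f"relative DOR = {relative_dor:.2f}")   # > 2 would flag a 2-fold discrepancy
```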

  2. Usefulness and accuracy of MALDI-TOF mass spectrometry as a supplementary tool to identify mosquito vector species and to invest in development of international database.

    PubMed

    Raharimalala, F N; Andrianinarivomanana, T M; Rakotondrasoa, A; Collard, J M; Boyer, S

    2017-09-01

    Arthropod-borne diseases are important causes of morbidity and mortality. The identification of vector species relies mainly on morphological features and/or molecular biology tools. The first method requires specific technical skills and may result in misidentifications, and the second method is time-consuming and expensive. The aim of the present study is to assess the usefulness and accuracy of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) as a supplementary tool with which to identify mosquito vector species and to invest in the creation of an international database. A total of 89 specimens belonging to 10 mosquito species were selected for the extraction of proteins from legs and for the establishment of a reference database. A blind test with 123 mosquitoes was performed to validate the MS method. Results showed that: (a) the spectra obtained in the study with a given species differed from the spectra of the same species collected in another country, which highlights the need for an international database; (b) MALDI-TOF MS is an accurate method for the rapid identification of mosquito species that are referenced in a database; (c) MALDI-TOF MS allows the separation of groups or complex species, and (d) laboratory specimens undergo a loss of proteins compared with those isolated in the field. In conclusion, MALDI-TOF MS is a useful supplementary tool for mosquito identification and can help inform vector control. © 2017 The Royal Entomological Society.

  3. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
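
    "Factoring" as described above simply replaces an ordered variable with a set of dichotomous indicators. A minimal sketch (the variable names and dose categories are illustrative, not the simulation's setup):

```python
import numpy as np

# An ordered exposure with four dose categories, 0..3; category 0 is the reference.
exposure = np.array([0, 1, 2, 3, 2, 1, 0, 3])

def factor(x, reference=0):
    """Expand an ordered variable into one 0/1 indicator column per non-reference level."""
    levels = [lv for lv in np.unique(x) if lv != reference]
    return np.column_stack([(x == lv).astype(int) for lv in levels])

indicators = factor(exposure)
print(indicators.shape)   # one row per observation, one column per non-reference level
```

    These indicator columns would replace the single linear exposure term in the Poisson regression model.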

  4. Macular versus Retinal Nerve Fiber Layer Parameters for Diagnosing Manifest Glaucoma: A Systematic Review of Diagnostic Accuracy Studies.

    PubMed

    Oddone, Francesco; Lucenteforte, Ersilia; Michelessi, Manuele; Rizzo, Stanislao; Donati, Simone; Parravano, Mariacristina; Virgili, Gianni

    2016-05-01

    Macular parameters have been proposed as an alternative to retinal nerve fiber layer (RNFL) parameters to diagnose glaucoma. Comparing the diagnostic accuracy of macular parameters, specifically the ganglion cell complex (GCC) and ganglion cell inner plexiform layer (GCIPL), with the accuracy of RNFL parameters for detecting manifest glaucoma is important to guide clinical practice and future research. Studies using spectral domain optical coherence tomography (SD OCT) and reporting macular parameters were included if they allowed the extraction of accuracy data for diagnosing manifest glaucoma, as confirmed with automated perimetry or a clinician's optic nerve head (ONH) assessment. Cross-sectional cohort studies and case-control studies were included. The QUADAS 2 tool was used to assess methodological quality. Only direct comparisons of macular versus RNFL parameters (i.e., in the same study) were conducted. Summary sensitivity and specificity of each macular or RNFL parameter were reported, and the relative diagnostic odds ratio (DOR) was calculated in hierarchical summary receiver operating characteristic (HSROC) models to compare them. Thirty-four studies investigated macular parameters using RTVue OCT (Optovue Inc., Fremont, CA) (19 studies, 3094 subjects), Cirrus OCT (Carl Zeiss Meditec Inc., Dublin, CA) (14 studies, 2164 subjects), or 3D Topcon OCT (Topcon, Inc., Tokyo, Japan) (4 studies, 522 subjects). Thirty-two of these studies allowed comparisons between macular and RNFL parameters. Studies generally reported sensitivities at fixed specificities, more commonly 0.90 or 0.95, with sensitivities of most best-performing parameters between 0.65 and 0.75. For all OCT devices, compared with RNFL parameters, macular parameters were similarly or slightly less accurate for detecting glaucoma at the highest reported specificity, which was confirmed in analyses at the lowest specificity. 
Included studies suffered from limitations, especially the case-control study

  5. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
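
    The threshold effect described above can be made concrete with simulated questionnaire scores (these are not data from the bipolar-disorder review):

```python
import numpy as np

rng = np.random.default_rng(0)
scores_cases = rng.normal(7, 2, 1000)      # scores of those with the condition
scores_controls = rng.normal(4, 2, 1000)   # scores of those without

# Raising the positivity threshold trades sensitivity for specificity.
for threshold in (4, 6, 8):
    sens = np.mean(scores_cases >= threshold)
    spec = np.mean(scores_controls < threshold)
    print(f"threshold={threshold}: sensitivity={sens:.2f} specificity={spec:.2f}")
```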

  6. Clinical acceptance and accuracy assessment of spinal implants guided with SpineAssist surgical robot: retrospective study.

    PubMed

    Devito, Dennis P; Kaplan, Leon; Dietl, Rupert; Pfeiffer, Michael; Horne, Dale; Silberstein, Boris; Hardenbrook, Mitchell; Kiriyanthan, George; Barzilay, Yair; Bruskin, Alexander; Sackerer, Dieter; Alexandrovsky, Vitali; Stüer, Carsten; Burger, Ralf; Maeurer, Johannes; Donald, Gordon D; Gordon, Donald G; Schoenmayr, Robert; Friedlander, Alon; Knoller, Nachshon; Schmieder, Kirsten; Pechlivanis, Ioannis; Kim, In-Se; Meyer, Bernhard; Shoham, Moshe

    2010-11-15

    Retrospective, multicenter study of robotically-guided spinal implant insertions. Clinical acceptance of the implants was assessed by intraoperative radiograph, and when available, postoperative computed tomography (CT) scans were used to determine placement accuracy. To verify the clinical acceptance and accuracy of robotically-guided spinal implants and compare to those of unguided free-hand procedures. The SpineAssist surgical robot has been used to guide implants and guide-wires to predefined locations in the spine. SpineAssist, which, to the best of the authors' knowledge, is currently the sole robot providing surgical assistance in positioning tools in the spine, guided over 840 cases in 14 hospitals between June 2005 and June 2009. Clinical acceptance of 3271 pedicle screws and guide-wires inserted in 635 reported cases was assessed by intraoperative fluoroscopy, while placement accuracy of 646 pedicle screws inserted in 139 patients was measured using postoperative CT scans. Screw placements were found to be clinically acceptable in 98% of the cases when intraoperatively assessed by fluoroscopic images. Measurements derived from postoperative CT scans demonstrated that 98.3% of the screws fell within the safe zone, where 89.3% were completely within the pedicle and 9% breached the pedicle by up to 2 mm. The remaining 1.4% of the screws breached between 2 and 4 mm, while only 2 screws (0.3%) deviated by more than 4 mm from the pedicle wall. Neurologic deficits were observed in 4 cases; however, following revisions, no permanent nerve damage was encountered, in contrast to the 0.6% to 5% rate of neurologic damage reported in the literature. SpineAssist offers enhanced performance in spinal surgery when compared to free-hand surgeries, by increasing placement accuracy and reducing neurologic risks. In addition, 49% of the cases reported herein used a percutaneous approach, highlighting the contribution of SpineAssist in procedures without anatomic landmarks.

  7. Guidance for deriving and presenting percentage study weights in meta-analysis of test accuracy studies.

    PubMed

    Burke, Danielle L; Ensor, Joie; Snell, Kym I E; van der Windt, Danielle; Riley, Richard D

    2018-06-01

    Percentage study weights in meta-analysis reveal the contribution of each study toward the overall summary results and are especially important when some studies are considered outliers or at high risk of bias. In meta-analyses of test accuracy reviews, such as a bivariate meta-analysis of sensitivity and specificity, the percentage study weights are not currently derived. Rather, the focus is on representing the precision of study estimates on receiver operating characteristic plots by scaling the points relative to the study sample size or to their standard error. In this article, we recommend that researchers should also provide the percentage study weights directly, and we propose a method to derive them based on a decomposition of the Fisher information matrix. This method also generalises to a bivariate meta-regression so that percentage study weights can also be derived for estimates of study-level modifiers of test accuracy. Application is made to two meta-analyses examining test accuracy: one of ear temperature for diagnosis of fever in children and the other of positron emission tomography for diagnosis of Alzheimer's disease. These highlight that the percentage study weights provide important information that is otherwise hidden if the presentation only focuses on precision based on sample size or standard errors. Software code is provided for Stata, and we suggest that our proposed percentage weights should be routinely added on forest and receiver operating characteristic plots for sensitivity and specificity, to provide transparency of the contribution of each study toward the results. This has implications for the PRISMA-diagnostic test accuracy guidelines that are currently being produced. Copyright © 2017 John Wiley & Sons, Ltd.
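
    The paper's weights come from decomposing the Fisher information matrix of the bivariate model. As a much simpler univariate analogue (an assumption for illustration, not the paper's method), percentage weights in a fixed-effect inverse-variance meta-analysis are each study's share of the total information:

```python
import numpy as np

# Squared standard errors of four hypothetical studies.
variances = np.array([0.04, 0.10, 0.02, 0.08])

weights = 1.0 / variances                       # information contributed by each study
percent_weights = 100 * weights / weights.sum()
print(np.round(percent_weights, 1))             # sums to 100
```

    The most precise study (smallest variance) carries the largest percentage weight, which is exactly the information that sample-size scaling on a ROC plot can obscure.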

  8. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.

    PubMed

    Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P

    2011-09-01

    A new pulsed neutron gamma density (NGD) logging method has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes studies of the near and far density measurement accuracy of NGD logging at two spacings, and of borehole influences, using Monte Carlo simulation. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects using conventional methods because both the near and far density measurements are significantly sensitive to standoff and mud properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.
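
    The benchmark quantities can be sketched for a single hybrid-proteome species: accuracy as the deviation of measured log2 ratios from the known spike-in ratio, and precision as their spread. This is a simplified illustration with simulated values, not LFQbench's exact metric definitions:

```python
import numpy as np

rng = np.random.default_rng(1)
expected_log2_ratio = 1.0                                  # species spiked at a 2:1 ratio
measured = expected_log2_ratio + rng.normal(0, 0.2, 500)   # simulated per-protein ratios

accuracy_bias = measured.mean() - expected_log2_ratio   # closer to 0 is more accurate
precision_sd = measured.std(ddof=1)                     # smaller spread is more precise
print(f"bias={accuracy_bias:.3f} sd={precision_sd:.3f}")
```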

  10. Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset

    PubMed Central

    Lipps, David; Devineni, Sree

    2016-01-01

    MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict if RNA transcripts contain miRNAs or not. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors in large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may introduce serious limitations in applications. Moreover, to meet continually rising expectations of these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on a newly designed large dataset improved by 7%, to 93%. The meta-predictor also proved less dependent on the dataset and achieved a more refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors. 
Second, a new miRNA predictor with significantly improved prediction accuracy
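
    The meta-prediction scheme described above (a non-linear transformation of the base predictors' outputs, then a neural combiner) can be sketched as follows; the weights are hand-set for illustration and are not the trained mirMeta model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

base_outputs = np.array([0.92, 0.15, 0.88, 0.70, 0.60])   # five base predictors' scores
transformed = np.log(base_outputs / (1 - base_outputs))   # logit as the non-linear step

weights = np.array([0.4, 0.1, 0.3, 0.1, 0.1])             # hypothetical combiner weights
meta_score = sigmoid(transformed @ weights)               # single logistic output unit
is_mirna = bool(meta_score >= 0.5)
print(f"meta score = {meta_score:.3f}, predicted miRNA: {is_mirna}")
```

    In the actual meta-predictor the combiner is a trained artificial neural network; the single logistic unit here only illustrates the data flow.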

  11. Is early detection of abused children possible?: a systematic review of the diagnostic accuracy of the identification of abused children

    PubMed Central

    2013-01-01

    Background Early detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on the accuracy of tools proposed to identify abused children before their death and to assess if any were adapted to screening. Methods We searched in PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and the accuracy parameters. Study quality was assessed using QUADAS criteria. Results A total of 2,280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even when not considering the lack of gold standard for detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% only for three tests: the absence of scalp swelling to identify child victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%. Conclusions In 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children. 
Identified tools were not adapted to screening because of

  12. Modeling of Geometric Error in Linear Guide Way to Improve the vertical three-axis CNC Milling machine’s accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. Inaccuracy in CNC machines can be caused by geometric errors, which are an important factor during both the manufacturing process and the assembly phase, and which must be understood in order to build high-accuracy machines. The approach improves the accuracy of the three-axis vertical milling machine by identifying the geometric errors and their position parameters in the machine tool through mathematical modeling. The geometric error of the machine tool comprises twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three perpendicularity error parameters. The model calculates the alignment and angular errors in the components supporting machine motion, namely the linear guide way and linear motion elements. The aim of this modeling approach is to identify geometric errors so that they can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools illustrates the relationship between alignment error, position, and angle on the linear guide way of three-axis vertical milling machines.
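
    Twenty-one-parameter models of this kind are conventionally built from homogeneous transformation matrices, one per axis, each carrying three small-angle rotational and three translational error terms. A minimal sketch with illustrative error values (rad and mm), not the paper's identified parameters:

```python
import numpy as np

def error_htm(eps_x, eps_y, eps_z, dx, dy, dz):
    """Small-angle homogeneous transform for one axis's six error parameters."""
    return np.array([
        [1.0,    -eps_z,  eps_y, dx],
        [eps_z,   1.0,   -eps_x, dy],
        [-eps_y,  eps_x,  1.0,   dz],
        [0.0,     0.0,    0.0,   1.0],
    ])

# Compose the X- and Y-carriage errors and apply them to a nominal tool point.
T = error_htm(1e-5, 2e-5, -1e-5, 0.003, -0.001, 0.002) @ \
    error_htm(-2e-5, 1e-5, 3e-5, -0.002, 0.004, 0.001)
point = np.array([100.0, 50.0, 200.0, 1.0])
actual = T @ point
print(actual[:3] - point[:3])   # net positional deviation at the tool point
```

    Chaining one such matrix per moving element, plus squareness terms, yields the full volumetric error map of the machine.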

  13. Research on effect of rough surface on FMCW laser radar range accuracy

    NASA Astrophysics Data System (ADS)

    Tao, Huirong

    2018-03-01

    Large-scale measurement of non-cooperative targets based on frequency-modulated continuous-wave (FMCW) laser detection and ranging technology has broad application prospects, since measurement without cooperative targets is easy to automate. However, the complexity and diversity of the measured surface's characteristics directly affect measurement accuracy. First, a theoretical analysis of range accuracy for an FMCW laser radar was carried out, and the relationship between surface reflectivity and accuracy was obtained. Then, to verify the effect of surface reflectance on ranging accuracy, a standard tool ball and three standard roughness samples were measured at ranges from 7 m to 24 m, and the uncertainty for each target was obtained. The results show that measurement accuracy increases with surface reflectivity, and good agreement was obtained between the theoretical analysis and measurements from rough surfaces. Furthermore, when the laser spot diameter is smaller than the surface correlation length, multi-point averaged measurement can reduce the measurement uncertainty. The experimental results show that this method is feasible.
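
    The multi-point averaging remark can be quantified: for N independent range samples with per-sample standard deviation sigma, the mean has standard error sigma/sqrt(N). A sketch with simulated, illustrative noise values (not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
true_range_m = 12.0
sigma_m = 0.0005   # hypothetical 0.5 mm single-shot range uncertainty

single = rng.normal(true_range_m, sigma_m, 10000)                       # one sample per point
averaged = rng.normal(true_range_m, sigma_m, (10000, 25)).mean(axis=1)  # 25-point averages

print(f"single-shot sd : {single.std(ddof=1):.6f} m")
print(f"25-point avg sd: {averaged.std(ddof=1):.6f} m")   # roughly sigma / 5
```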

  14. Implementation of a standardized electronic tool improves compliance, accuracy, and efficiency of trainee-to-trainee patient care handoffs after complex general surgical oncology procedures.

    PubMed

    Clarke, Callisia N; Patel, Sameer H; Day, Ryan W; George, Sobha; Sweeney, Colin; Monetes De Oca, Georgina Avaloa; Aiss, Mohamed Ait; Grubbs, Elizabeth G; Bednarski, Brian K; Lee, Jeffery E; Bodurka, Diane C; Skibber, John M; Aloia, Thomas A

    2017-03-01

    Duty-hour regulations have increased the frequency of trainee-trainee patient handoffs. Each handoff creates a potential source for communication errors that can lead to near-miss and patient-harm events. We investigated the utility, efficacy, and trainee experience associated with implementation of a novel, standardized, electronic handoff system. We conducted a prospective intervention study of trainee-trainee handoffs of inpatients undergoing complex general surgical oncology procedures at a large tertiary institution. Preimplementation data were measured using trainee surveys and direct observation and by tracking delinquencies in charting. A standardized electronic handoff tool was created in a research electronic data capture (REDCap) database using the previously validated I-PASS methodology (illness severity, patient summary, action list, situational awareness and contingency planning, and synthesis). Electronic handoff was augmented by direct communication via phone or face-to-face interaction for inpatients deemed "watcher" or "unstable." Postimplementation handoff compliance, communication errors, and trainee work flow were measured and compared to preimplementation values using standard statistical analysis. A total of 474 handoffs (203 preintervention and 271 postintervention) were observed over the study period; 86 handoffs involved patients admitted to the surgical intensive care unit, 344 patients admitted to the surgical stepdown unit, and 44 patients on the surgery ward. Implementation of the structured electronic tool resulted in an increase in trainee handoff compliance from 73% to 96% (P < .001) and decreased errors in communication by 50% (P = .044) while improving trainee efficiency and workflow. A standardized electronic tool augmented by direct communication for higher acuity patients can improve compliance, accuracy, and efficiency of handoff communication between surgery trainees. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Recommended aerobic fitness level for metabolic health in children and adolescents: a study of diagnostic accuracy.

    PubMed

    Adegboye, Amanda R A; Anderssen, Sigmund A; Froberg, Karsten; Sardinha, Luis B; Heitmann, Berit L; Steene-Johannessen, Jostein; Kolle, Elin; Andersen, Lars B

    2011-07-01

    To define the optimal cut-off for low aerobic fitness and to evaluate its accuracy in predicting clustering of risk factors for cardiovascular disease in children and adolescents. Study of diagnostic accuracy using a cross-sectional database. European Youth Heart Study including Denmark, Portugal, Estonia and Norway. 4500 schoolchildren aged 9 or 15 years. Aerobic fitness was expressed as peak oxygen consumption relative to bodyweight (mlO2/min/kg). Risk factors included in the composite risk score (mean of z-scores) were systolic blood pressure, triglyceride, total cholesterol/HDL-cholesterol ratio, insulin resistance and sum of four skinfolds. 14.5% of the sample, with a risk score above one SD, were defined as being at risk. Receiver operating characteristic analysis was used to define the optimal cut-off for the sex- and age-specific distributions. In girls, the optimal cut-offs for identifying individuals at risk were: 37.4 mlO2/min/kg (9-year-olds) and 33.0 mlO2/min/kg (15-year-olds). In boys, the optimal cut-offs were 43.6 mlO2/min/kg (9-year-olds) and 46.0 mlO2/min/kg (15-year-olds). Specificity (range 79.3-86.4%) was markedly higher than sensitivity (range 29.7-55.6%) for all cut-offs. Positive predictive values ranged from 19% to 41% and negative predictive values ranged from 88% to 90%. The diagnostic accuracy for identifying children at risk, measured by the area under the curve (AUC), was significantly higher than what would be expected by chance (AUC >0.5) for all cut-offs. Aerobic fitness is easy to measure, and is an accurate tool for screening children with clustering of cardiovascular risk factors. Promoting physical activity in children with aerobic fitness level lower than the suggested cut-points might improve their health.
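
    One common way to pick such a cut-off from a ROC analysis is the Youden index, J = sensitivity + specificity - 1. The abstract does not state which criterion was used, so this is an illustrative assumption, with simulated fitness values rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)
fitness_at_risk = rng.normal(38, 5, 300)    # simulated mlO2/min/kg, children at risk
fitness_healthy = rng.normal(46, 5, 1700)

best_j, best_cut = -1.0, None
for cut in np.linspace(30, 55, 251):        # candidate cut-offs in 0.1 steps
    sens = np.mean(fitness_at_risk < cut)   # low fitness flags risk
    spec = np.mean(fitness_healthy >= cut)
    j = sens + spec - 1
    if j > best_j:
        best_j, best_cut = j, cut

print(f"optimal cut-off ~ {best_cut:.1f} mlO2/min/kg (J = {best_j:.2f})")
```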

  16. Low clinical diagnostic accuracy of early vs advanced Parkinson disease: clinicopathologic study.

    PubMed

    Adler, Charles H; Beach, Thomas G; Hentz, Joseph G; Shill, Holly A; Caviness, John N; Driver-Dunckley, Erika; Sabbagh, Marwan N; Sue, Lucia I; Jacobson, Sandra A; Belden, Christine M; Dugger, Brittany N

    2014-07-29

    Determine diagnostic accuracy of a clinical diagnosis of Parkinson disease (PD) using neuropathologic diagnosis as the gold standard. Data from the Arizona Study of Aging and Neurodegenerative Disorders were used to determine the predictive value of a clinical PD diagnosis, using 2 clinical diagnostic confidence levels, PossPD (never treated or not clearly responsive) and ProbPD (responsive to medications). Neuropathologic diagnosis was the gold standard. Based on first visit, 9 of 34 (26%) PossPD cases had neuropathologically confirmed PD while 80 of 97 (82%) ProbPD cases had confirmed PD. PD was confirmed in 8 of 15 (53%) ProbPD cases with <5 years of disease duration and 72 of 82 (88%) with ≥5 years of disease duration. Using final diagnosis at time of death, 91 of 107 (85%) ProbPD cases had confirmed PD. Clinical variables that improved diagnostic accuracy were medication response, motor fluctuations, dyskinesias, and hyposmia. Using neuropathologic findings of PD as the gold standard, this study establishes the novel findings of only 26% accuracy for a clinical diagnosis of PD in untreated or not clearly responsive subjects, 53% accuracy in early PD responsive to medication (<5 years' duration), and >85% diagnostic accuracy of longer duration, medication-responsive PD. Caution is needed when interpreting clinical studies of PD, especially studies of early disease that do not have autopsy confirmation. The need for a tissue or other diagnostic biomarker is reinforced. This study provides Class II evidence that a clinical diagnosis of PD identifies patients who will have pathologically confirmed PD with a sensitivity of 88% and specificity of 68%. © 2014 American Academy of Neurology.
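
    The abstract's sensitivity (0.88) and specificity (0.68) translate into predictive values only once a prevalence is assumed, via Bayes' theorem; the prevalence below is hypothetical, not a figure from the study:

```python
def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values from Bayes' theorem."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Reported sensitivity/specificity, with an illustrative 75% prevalence of true PD
# among clinically diagnosed cases.
ppv, npv = predictive_values(0.88, 0.68, 0.75)
print(f"PPV={ppv:.2f} NPV={npv:.2f}")
```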

  17. Smart tool holder

    DOEpatents

    Day, Robert Dean; Foreman, Larry R.; Hatch, Douglas J.; Meadows, Mark S.

    1998-01-01

    There is provided an apparatus for machining surfaces to accuracies within the nanometer range by use of electrical current flow through the contact of the cutting tool with the workpiece as a feedback signal to control depth of cut.
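    The patent's idea of using contact current as a feedback signal for depth of cut can be caricatured as a simple proportional control loop. Everything below (the gain, the linear current model, the setpoint) is invented for illustration and is not from the patent:

```python
def run_feedback(target_current, steps=100, gain=0.005, k_current=50.0):
    """Toy loop: measured contact current is assumed proportional to
    engagement depth; the controller nudges depth toward the setpoint."""
    depth = 0.0                       # cutting depth (mm), starts disengaged
    for _ in range(steps):
        current = k_current * depth   # invented sensor model (mA)
        error = target_current - current
        depth += gain * error         # proportional correction
    return depth

final_depth = run_feedback(target_current=2.0)  # aims for a 2 mA contact current
```

    With these made-up constants the loop converges geometrically to depth = 2.0 / 50.0 = 0.04 mm; the real apparatus would of course use measured current, not a model.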

  18. Comparative Study With New Accuracy Metrics for Target Volume Contouring in PET Image Guided Radiation Therapy

    PubMed Central

    Shepherd, T; Teras, M; Beichel, RR; Boellaard, R; Bruynooghe, M; Dicken, V; Gooding, MJ; Julyan, PJ; Lee, JA; Lefèvre, S; Mix, M; Naranjo, V; Wu, X; Zaidi, H; Zeng, Z; Minn, H

    2017-01-01

    The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes, but the different approaches are not adequately compared and their accuracy is poorly evaluated because ground truth is not well defined. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal benefits of high levels of user interaction, as well as simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm. PMID:22692898
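    Spatial accuracy of a contour against a reference is commonly summarized by overlap measures such as the Dice coefficient. The sketch below is a generic illustration of that standard metric (the masks are toy data; it is not the paper's new ground-truth-free metric):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1 = inside the contour)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4 squares offset by one voxel on an 8x8 grid
auto   = np.zeros((8, 8), dtype=int); auto[2:6, 2:6] = 1
manual = np.zeros((8, 8), dtype=int); manual[3:7, 3:7] = 1
overlap = dice(auto, manual)   # 2*9 / (16+16) = 0.5625
```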

  19. Using Meta-Analysis to Inform the Design of Subsequent Studies of Diagnostic Test Accuracy

    ERIC Educational Resources Information Center

    Hinchliffe, Sally R.; Crowther, Michael J.; Phillips, Robert S.; Sutton, Alex J.

    2013-01-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial…

  20. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  1. Investigation of influence of errors of cutting machines with CNC on displacement trajectory accuracy of their actuating devices

    NASA Astrophysics Data System (ADS)

    Fedonin, O. N.; Petreshin, D. I.; Ageenko, A. V.

    2018-03-01

    In the article, the issue of increasing a CNC lathe accuracy by compensating for the static and dynamic errors of the machine is investigated. An algorithm and a diagnostic system for a CNC machine tool are considered, which allows determining the errors of the machine for their compensation. The results of experimental studies on diagnosing and improving the accuracy of a CNC lathe are presented.

  2. Smart tool holder

    DOEpatents

    Day, R.D.; Foreman, L.R.; Hatch, D.J.; Meadows, M.S.

    1998-09-08

    There is provided an apparatus for machining surfaces to accuracies within the nanometer range by use of electrical current flow through the contact of the cutting tool with the workpiece as a feedback signal to control depth of cut. 3 figs.

  3. Ontario multidetector computed tomographic coronary angiography study: field evaluation of diagnostic accuracy.

    PubMed

    Chow, Benjamin J W; Freeman, Michael R; Bowen, James M; Levin, Leslie; Hopkins, Robert B; Provost, Yves; Tarride, Jean-Eric; Dennie, Carole; Cohen, Eric A; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan R; Paul, Narinder; Parker, John D; O'Reilly, Daria J; Xie, Feng; Goeree, Ron

    2011-06-13

    Computed tomographic coronary angiography (CTCA) has gained clinical acceptance for the detection of obstructive coronary artery disease. Although single-center studies have demonstrated excellent accuracy, multicenter studies have yielded variable results. The true diagnostic accuracy of CTCA in the "real world" remains uncertain. We conducted a field evaluation comparing multidetector CTCA with invasive CA (ICA) to understand CTCA's diagnostic accuracy in a real-world setting. A multicenter cohort study of patients awaiting ICA was conducted between September 2006 and June 2009. All patients had either a low or an intermediate pretest probability for coronary artery disease and underwent CTCA and ICA within 10 days. The results of CTCA and ICA were interpreted visually by local expert observers who were blinded to all clinical data and imaging results. Using a patient-based analysis (diameter stenosis ≥50%) of 169 patients, the sensitivity, specificity, positive predictive value, and negative predictive value were 81.3% (95% confidence interval [CI], 71.0%-89.1%), 93.3% (95% CI, 85.9%-97.5%), 91.6% (95% CI, 82.5%-96.8%), and 84.7% (95% CI, 76.0%-91.2%), respectively; the area under the receiver operating characteristic curve was 0.873. The diagnostic accuracy varied across centers (P < .001), with a sensitivity, specificity, positive predictive value, and negative predictive value ranging from 50.0% to 93.2%, 92.0% to 100%, 84.6% to 100%, and 42.9% to 94.7%, respectively. Compared with ICA, CTCA appears to have good accuracy; however, there was variability in diagnostic accuracy across centers. Factors affecting institutional variability need to be better understood before CTCA is universally adopted. Additional real-world evaluations are needed to fully understand the impact of CTCA on clinical care. clinicaltrials.gov Identifier: NCT00371891.
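    The patient-based metrics reported above are all derived from a 2×2 contingency table of index-test results against the reference standard (here, ICA). A minimal sketch of those calculations; the counts below are hypothetical, chosen only to be consistent with a 169-patient analysis:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sens": sensitivity, "spec": specificity,
            "ppv": ppv, "npv": npv, "acc": accuracy}

# Hypothetical counts for illustration only (tp+fp+fn+tn = 169)
m = diagnostic_metrics(tp=65, fp=6, fn=15, tn=83)
```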

  4. The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.

    PubMed

    Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E

    2009-11-01

    Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, accuracy of the prediction of genomewide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by using cross-validation. The accuracies of the GW-EBV prediction were found to vary widely between 0.12 and 0.62. G-BLUP gave overall the highest accuracy. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and also lower bias than health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.
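    The cross-validation scheme above estimates accuracy as the agreement between predicted and realized values in held-out animals. A ridge-regression sketch in the spirit of G-BLUP, on fully simulated genotypes and phenotypes (not the Norwegian Red data; sample sizes, the ridge penalty, and the trait model are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 100
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p); beta[:20] = rng.normal(0, 0.5, 20)  # 20 causal markers
g = X @ beta                                         # true breeding values
y = g + rng.normal(0, g.std(), n)                    # heritability ~ 0.5

def ridge_predict(Xtr, ytr, Xte, lam=50.0):
    """Shrink marker effects with a ridge penalty, then predict test animals."""
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    b = np.linalg.solve(A, Xtr.T @ (ytr - ytr.mean()))
    return ytr.mean() + Xte @ b

# 5-fold cross-validation; accuracy = correlation of prediction with phenotype
folds = np.array_split(rng.permutation(n), 5)
preds = np.empty(n)
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    preds[test_idx] = ridge_predict(X[train_idx], y[train_idx], X[test_idx])
accuracy = np.corrcoef(preds, y)[0, 1]
```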

  5. An augmented reality tool for learning spatial anatomy on mobile devices.

    PubMed

    Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti

    2017-09-01

    Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) data derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.

  6. Assessment of the accuracy and stability of frameless gamma knife radiosurgery.

    PubMed

    Chung, Hyun-Tai; Park, Woo-Yoon; Kim, Tae Hoon; Kim, Yong Kyun; Chun, Kook Jin

    2018-06-03

    The aim of this study was to assess the accuracy and stability of frameless gamma knife radiosurgery (GKRS). The accuracies of the radiation isocenter and patient couch movement were evaluated by film dosimetry with a half-year cycle. Radiation isocenter assessment with a diode detector and cone-beam computed tomography (CBCT) image accuracy tests were performed daily with a vendor-provided tool for one and a half years after installation. CBCT image quality was examined twice a month with a phantom. The accuracy of image coregistration using CBCT images was studied using magnetic resonance (MR) and computed tomography (CT) images of another phantom. The overall positional accuracy was measured in whole procedure tests using film dosimetry with an anthropomorphic phantom. The positional errors of the radiation isocenter at the center and at an extreme position were both less than 0.1 mm. The three-dimensional deviation of the CBCT coordinate system was stable for one and a half years (mean 0.04 ± 0.02 mm). Image coregistration revealed a difference of 0.2 ± 0.1 mm between CT and CBCT images and a deviation of 0.4 ± 0.2 mm between MR and CBCT images. The whole procedure test of the positional accuracy of the mask-based irradiation revealed an accuracy of 0.5 ± 0.6 mm. The radiation isocenter accuracy, patient couch movement accuracy, and Gamma Knife Icon CBCT accuracy were all approximately 0.1 mm and were stable for one and a half years. The coordinate system assigned to MR images through coregistration was more accurate than the system defined by fiducial markers. Possible patient motion during irradiation should be considered when evaluating the overall accuracy of frameless GKRS. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
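    The sub-millimeter deviations quoted above are Euclidean distances between measured and nominal 3D coordinates. A trivial sketch (the coordinates are made up, not the study's measurements):

```python
import math

def deviation_mm(measured, nominal):
    """3D positional error as the Euclidean distance (mm)."""
    return math.dist(measured, nominal)

# Hypothetical isocenter check: measured vs nominal position in mm
err = deviation_mm((100.03, 99.98, 100.02), (100.0, 100.0, 100.0))
```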

  7. Accuracy and Reliability of Emergency Department Triage Using the Emergency Severity Index: An International Multicenter Assessment.

    PubMed

    Mistry, Binoy; Stewart De Ramirez, Sarah; Kelen, Gabor; Schmitz, Paulo S K; Balhara, Kamna S; Levin, Scott; Martinez, Diego; Psoter, Kevin; Anton, Xavier; Hinson, Jeremiah S

    2018-05-01

    We assess accuracy and variability of triage score assignment by emergency department (ED) nurses using the Emergency Severity Index (ESI) in 3 countries. In accordance with previous reports and clinical observation, we hypothesize low accuracy and high variability across all sites. This cross-sectional multicenter study enrolled 87 ESI-trained nurses from EDs in Brazil, the United Arab Emirates, and the United States. Standardized triage scenarios published by the Agency for Healthcare Research and Quality (AHRQ) were used. Accuracy was defined by concordance with the AHRQ key and calculated as percentages. Accuracy comparisons were made with one-way ANOVA and paired t test. Interrater reliability was measured with Krippendorff's α. Subanalyses based on nursing experience and triage scenario type were also performed. Mean accuracy pooled across all sites and scenarios was 59.2% (95% confidence interval [CI] 56.4% to 62.0%) and interrater reliability was modest (α=.730; 95% CI .692 to .767). There was no difference in overall accuracy between sites or according to nurse experience. Medium-acuity scenarios were scored with greater accuracy (76.4%; 95% CI 72.6% to 80.3%) than high- or low-acuity cases (44.1%, 95% CI 39.3% to 49.0% and 54%, 95% CI 49.9% to 58.2%), and adult scenarios were scored with greater accuracy than pediatric ones (66.2%, 95% CI 62.9% to 69.7% versus 46.9%, 95% CI 43.4% to 50.3%). In this multinational study, concordance of nurse-assigned ESI score with reference standard was universally poor and variability was high. Although the ESI is the most popular ED triage tool in the United States and is increasingly used worldwide, our findings point to a need for more reliable ED triage tools. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
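    Krippendorff's α, used above to quantify interrater reliability, compares observed to expected disagreement over a coincidence matrix built from all rater pairs within each case. A minimal sketch for nominal data (the toy ratings are invented, not the study's triage scores):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of lists of nominal ratings (one list per rated case)."""
    coincidence = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # a single rating carries no pairing information
        for c, k in permutations(ratings, 2):
            coincidence[(c, k)] += 1 / (m - 1)
    n = sum(coincidence.values())
    totals = Counter()
    for (c, _), v in coincidence.items():
        totals[c] += v
    observed = sum(v for (c, k), v in coincidence.items() if c != k)
    expected = sum(totals[c] * totals[k]
                   for c in totals for k in totals if c != k) / (n - 1)
    return 1.0 if expected == 0 else 1 - observed / expected

# Four toy cases, two raters each
alpha = krippendorff_alpha_nominal([["a", "a"], ["a", "b"], ["b", "b"], ["a", "a"]])
```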

  8. Accuracy of biochemical markers for predicting nasogastric tube placement in adults--a systematic review of diagnostic studies.

    PubMed

    Fernandez, Ritin S; Chau, Janita Pak-Chun; Thompson, David R; Griffiths, Rhonda; Lo, Hoi-Shan

    2010-08-01

    The objective of this study was to investigate the diagnostic performance of biochemical tests used to determine placement of nasogastric (NG) tubes after insertion in adults. A systematic review of diagnostic studies was undertaken. A literature search of the bibliographic databases and the World Wide Web was performed to locate original diagnostic studies in English or Chinese on biochemical markers for detecting NG tube location. Studies in which one or more different tests were evaluated with a reference standard, and diagnostic values were reported or could be calculated, were included. Two reviewers independently checked all abstracts and full text studies for inclusion criteria. Included studies were assessed for their quality using the QUADAS tool. Study features and diagnostic values were extracted from the included studies. Of the 10 studies included in this review, seven investigated the diagnostic accuracy of pH, one investigated the diagnostic accuracy of pH and bilirubin respectively, two a combination of pH and bilirubin and one a combination of pH, pepsin and trypsin levels in identifying NG tube location. All studies used X-rays as the reference standard for comparison. Pooled results were reported for pH testing; however, owing to the limited number of studies and small sample sizes, conclusions about the diagnostic performance of the different tests cannot be drawn. Better designed studies exploring the accuracy of diagnostic tests are needed.

  9. A GIS Tool for evaluating and improving NEXRAD and its application in distributed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Srinivasan, R.

    2008-12-01

    In this study, a user-friendly GIS tool was developed for evaluating and improving NEXRAD using raingauge data. This GIS tool can automatically read in raingauge and NEXRAD data, evaluate the accuracy of NEXRAD for each time unit, implement several geostatistical methods to improve the accuracy of NEXRAD through raingauge data, and output spatial precipitation maps for distributed hydrologic models. The geostatistical methods incorporated in this tool include Simple Kriging with varying local means, Kriging with External Drift, Regression Kriging, Co-Kriging, and a geostatistical method recently developed by Li et al. (2008). This tool was applied in two test watersheds at hourly and daily temporal scales. The preliminary cross-validation results show that incorporating raingauge data to calibrate NEXRAD can markedly change the spatial pattern of NEXRAD and improve its accuracy. Using different geostatistical methods, the GIS tool was applied to produce long-term precipitation input for a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT). Animated video was generated to vividly illustrate the effect of using different precipitation input data on distributed hydrologic modeling. Currently, this GIS tool is developed as an extension of SWAT, which is used as a water quantity and quality modeling tool by USDA and EPA. The flexible module-based design of this tool also makes it easy to adapt for other hydrologic models and for water resources management.
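    The Kriging variants named above are too involved for a short sketch, but the simplest gauge-based adjustment of a radar field, mean-field bias correction, illustrates the general idea of calibrating NEXRAD with raingauges. This is a deliberately simpler stand-in, not one of the tool's geostatistical methods, and all arrays below are synthetic:

```python
import numpy as np

def mean_field_bias_correct(radar, gauge_vals, gauge_rows, gauge_cols):
    """Scale a radar precipitation field so its total at gauge pixels
    matches the gauge total (single multiplicative bias factor)."""
    radar_at_gauges = radar[gauge_rows, gauge_cols]
    if radar_at_gauges.sum() == 0:
        return radar.copy()
    factor = gauge_vals.sum() / radar_at_gauges.sum()
    return radar * factor

rng = np.random.default_rng(0)
radar = rng.gamma(2.0, 1.5, size=(50, 50))       # synthetic hourly field (mm)
rows = np.array([5, 20, 40]); cols = np.array([10, 30, 45])
gauges = radar[rows, cols] * 1.3                 # gauges read 30% higher
adjusted = mean_field_bias_correct(radar, gauges, rows, cols)
```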

  10. The use of tools for learning science in small groups

    NASA Astrophysics Data System (ADS)

    Valdes, Rosa Maria

    2000-10-01

    "Hands-on" learning through the use of tools or manipulatives representative of science concepts has long been an important component of the middle school science curriculum. However, scarce research exists on the impact of tool use on learning of science concepts, particularly on the processes involved in such learning. This study investigated how the use of tools by students engaged in small group discussion about the concept of electrical resistance and the explanations that accompany such use leads to improved understandings of the concept. Specifically, the main hypothesis of the study was that students who observe explanations by their high-ability peers accompanied by accurate tool use and who are highly engaged in these explanations would show learning gains. Videotaped interactions of students working in small groups to solve tasks on electricity were coded using scales that measured the accuracy of the tool use, the accuracy of the explanations presented, and the level of engagement of target students. Data from 48 students, whose knowledge of the concept of resistance was initially low and who were also determined to be low achievers by their scores on a set of pretests, were analyzed. Quantitative and qualitative analyses showed that students who observed their peers give explanations using tools and who were engaged at least moderately made gains in their understandings of resistance. Specifically, the results of regression analyses showed that both the level of accuracy of a high-ability peer's explanation and the target student's level of engagement in the explanation significantly predicted target students' outcome scores. The number of presentations offered by a high-ability peer also significantly predicted outcome scores.
Case study analyses of six students found that students who improved their scores the most from pretest to posttest had high-ability peers who tended to be verbal and who gave numerous explanations, whereas students who

  11. Early-Onset Neonatal Sepsis: Still Room for Improvement in Procalcitonin Diagnostic Accuracy Studies

    PubMed Central

    Chiesa, Claudio; Pacifico, Lucia; Osborn, John F.; Bonci, Enea; Hofer, Nora; Resch, Bernhard

    2015-01-01

    To perform a systematic review assessing accuracy and completeness of diagnostic studies of procalcitonin (PCT) for early-onset neonatal sepsis (EONS) using the Standards for Reporting of Diagnostic Accuracy (STARD) initiative. EONS, diagnosed during the first 3 days of life, remains a common and serious problem. Increased PCT is a potentially useful diagnostic marker of EONS, but reports in the literature are contradictory. There are several possible explanations for the divergent results including the quality of studies reporting the clinical usefulness of PCT in ruling in or ruling out EONS. We systematically reviewed PubMed, Scopus, and the Cochrane Library databases up to October 1, 2014. Studies were eligible for inclusion in our review if they provided measures of PCT accuracy for diagnosing EONS. A data extraction form based on the STARD checklist and adapted for neonates with EONS was used to appraise the quality of the reporting of included studies. We found 18 articles (1998–2014) fulfilling our eligibility criteria which were included in the final analysis. Overall, the results of our analysis showed that the quality of studies reporting diagnostic accuracy of PCT for EONS was suboptimal leaving ample room for improvement. Information on key elements of design, analysis, and interpretation of test accuracy were frequently missing. Authors should be aware of the STARD criteria before starting a study in this field. We welcome stricter adherence to this guideline. Well-reported studies with appropriate designs will provide more reliable information to guide decisions on the use and interpretations of PCT test results in the management of neonates with EONS. PMID:26222858

  12. Visual Impairment Screening Assessment (VISA) tool: pilot validation.

    PubMed

    Rowe, Fiona J; Hepworth, Lauren R; Hanna, Kerry L; Howard, Claire

    2018-03-06

    To report and evaluate a new Vision Impairment Screening Assessment (VISA) tool intended for use by the stroke team to improve identification of visual impairment in stroke survivors. Prospective case cohort comparative study. Stroke units at two secondary care hospitals and one tertiary centre. 116 stroke survivors were screened, 62 by naïve and 54 by non-naïve screeners. Both the VISA screening tool and the comprehensive specialist vision assessment measured case history, visual acuity, eye alignment, eye movements, visual field and visual inattention. Full completion of VISA tool and specialist vision assessment was achieved for 89 stroke survivors. Missing data for one or more sections typically related to patient's inability to complete the assessment. Sensitivity and specificity of the VISA screening tool were 90.24% and 85.29%, respectively; the positive and negative predictive values were 93.67% and 78.36%, respectively. Overall agreement was significant; k=0.736. Lowest agreement was found for screening of eye movement and visual inattention deficits. This early validation of the VISA screening tool shows promise in improving detection accuracy for clinicians involved in stroke care who are not specialists in vision problems and lack formal eye training, with potential to lead to more prompt referral with fewer false positives and negatives. Pilot validation indicates acceptability of the VISA tool for screening of visual impairment in stroke survivors. Sensitivity and specificity were high indicating the potential accuracy of the VISA tool for screening purposes. Results of this study have guided the revision of the VISA screening tool ahead of full clinical validation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Accuracy of Pediatric Trauma Field Triage: A Systematic Review.

    PubMed

    van der Sluijs, Rogier; van Rein, Eveline A J; Wijnand, Joep G J; Leenen, Luke P H; van Heijl, Mark

    2018-05-16

    Field triage of pediatric patients with trauma is critical for transporting the right patient to the right hospital. Mortality and lifelong disabilities are potentially attributable to erroneously transporting a patient in need of specialized care to a lower-level trauma center. To quantify the accuracy of field triage and associated diagnostic protocols used to identify children in need of specialized trauma care. MEDLINE, Embase, PsycINFO, and Cochrane Register of Controlled Trials were searched from database inception to November 6, 2017, for studies describing the accuracy of diagnostic tests to identify children in need of specialized trauma care in a prehospital setting. Identified articles with a study population including patients not transported by emergency medical services were excluded. Quality assessment was performed using a modified version of the Quality Assessment of Diagnostic Accuracy Studies-2. After deduplication, 1430 relevant articles were assessed, a full-text review of 38 articles was conducted, and 5 of those articles were included. All studies were observational, published between 1996 and 2017, and conducted in the United States, and data collection was prospective in 1 study. Three different protocols were studied that analyzed a combined total of 1222 children in need of specialized trauma care. One protocol was specifically developed for a pediatric out-of-hospital cohort. The percentage of pediatric patients requiring specialized trauma care in each study varied between 2.6% (110 of 4197) and 54.7% (58 of 106). The sensitivity of the prehospital triage tools ranged from 49.1% to 87.3%, and the specificity ranged from 41.7% to 84.8%. No prehospital triage protocol alone complied with the international standard of 95% or greater sensitivity. Undertriage and overtriage rates, representative of the quality of the full diagnostic strategy to transport a patient to the right hospital, were not reported for inclusive trauma systems or

  14. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd. Copyright © 2012 John Wiley & Sons, Ltd.
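    The approach described above, simulating candidate new studies and checking the power of the updated meta-analysis at each sample size, can be sketched with a fixed-effect inverse-variance pooling of logit sensitivity. The existing studies, the true and threshold sensitivities, and the decision rule (pooled lower CI bound above the threshold) are all invented for illustration; the paper itself also considers bivariate random-effects models:

```python
import math
import random

def pooled_lower_ci(effects_vars, z=1.96):
    """Fixed-effect inverse-variance pooled estimate; returns lower CI bound."""
    weights = [1 / v for _, v in effects_vars]
    est = sum(w * e for (e, _), w in zip(effects_vars, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est - z * se

def power_of_updated_meta(existing, n_new, true_sens, threshold_sens,
                          sims=500, rng=random.Random(42)):
    """Fraction of simulated new studies whose addition makes the pooled
    logit-sensitivity CI lie entirely above the threshold."""
    target = math.log(threshold_sens / (1 - threshold_sens))
    hits = 0
    for _ in range(sims):
        tp = sum(rng.random() < true_sens for _ in range(n_new))
        tp_adj, fn_adj = tp + 0.5, (n_new - tp) + 0.5   # continuity correction
        effect = math.log(tp_adj / fn_adj)
        var = 1 / tp_adj + 1 / fn_adj
        if pooled_lower_ci(existing + [(effect, var)]) > target:
            hits += 1
    return hits / sims

# Hypothetical existing studies, summarized as (logit sensitivity, variance)
def study(tp, n): return (math.log(tp / (n - tp)), 1 / tp + 1 / (n - tp))
existing = [study(34, 40), study(53, 60)]
powers = {n: power_of_updated_meta(existing, n, 0.90, 0.80) for n in (25, 100, 400)}
```

    Plotting `powers` against the candidate sample sizes gives the power curve the authors describe, from which an adequate size for the new study can be read off.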

  15. MODSNOW-Tool: an operational tool for daily snow cover monitoring using MODIS data

    NASA Astrophysics Data System (ADS)

    Gafurov, Abror; Lüdtke, Stefan; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Schöne, Tilo; Schmidt, Sebastian; Kalashnikova, Olga; Merz, Bruno

    2017-04-01

    Spatially distributed snow cover information in mountain areas is extremely important for water storage estimations, seasonal water availability forecasting, or the assessment of snow-related hazards (e.g. enhanced snow-melt following intensive rains, or avalanche events). Moreover, spatially distributed snow cover information can be used to calibrate and/or validate hydrological models. We present the MODSNOW-Tool, an operational monitoring tool that offers a user-friendly application for catchment-based operational snow cover monitoring. The application automatically downloads and processes freely available daily Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data. The MODSNOW-Tool uses a step-wise approach for cloud removal and delivers cloud-free snow cover maps for the selected river basins, including basin-specific snow cover extent statistics. The accuracy of cloud-eliminated MODSNOW snow cover maps was validated for 84 almost cloud-free days in the Karadarya river basin in Central Asia, and an average accuracy of 94 % was achieved. The MODSNOW-Tool can be used in operational and non-operational mode. In the operational mode, the tool is set up as a scheduled task on a local computer, allowing automatic execution without user interaction, and delivers snow cover maps on a daily basis. In the non-operational mode, the tool can be used to process historical time series of snow cover maps. The MODSNOW-Tool is currently implemented and in use at the national hydrometeorological services of four Central Asian states (Kazakhstan, Kyrgyzstan, Uzbekistan and Turkmenistan) and is used for seasonal water availability forecasting.
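    The step-wise cloud removal mentioned above can be illustrated by its simplest conceivable step, temporal filling: a pixel that is cloudy today inherits the most recent cloud-free observation. The arrays and class codes below are synthetic, and the MODSNOW-Tool's actual procedure combines several such steps (temporal, spatial, elevation-based):

```python
import numpy as np

CLOUD, LAND, SNOW = -1, 0, 1

def temporal_fill(days):
    """Replace cloudy pixels with the most recent cloud-free value.
    days: array of shape (t, rows, cols) with CLOUD/LAND/SNOW codes."""
    filled = days.copy()
    for t in range(1, filled.shape[0]):
        mask = filled[t] == CLOUD
        filled[t][mask] = filled[t - 1][mask]   # inherit previous day's value
    return filled

days = np.array([
    [[SNOW, LAND], [SNOW, SNOW]],
    [[CLOUD, LAND], [CLOUD, CLOUD]],   # fully recoverable from day 0
])
filled = temporal_fill(days)
```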

  16. Diagnostic accuracy of touch imprint cytology for head and neck malignancies: a useful intra-operative tool in resource limited countries.

    PubMed

    Naveed, Hania; Abid, Mariam; Hashmi, Atif Ali; Edhi, Muhammad Muzammamil; Sheikh, Ahmareen Khalid; Mudassir, Ghazala; Khan, Amir

    2017-01-01

    Intraoperative consultation is an important tool for the evaluation of upper aerodigestive tract (UAT) malignancies. Although frozen section analysis is the preferred method of intra-operative consultation, in resource limited countries like Pakistan this facility is not available in most institutes; therefore, we aimed to evaluate the diagnostic accuracy of touch imprint cytology for UAT malignancies using histopathology of the same tissue as the gold standard. The study involved 70 cases of UAT lesions operated during the study period. Intraoperatively, after obtaining the fresh biopsy specimen and prior to placing it in fixative, each specimen was imprinted on 4-6 glass slides, fixed immediately in 95% alcohol and stained with Hematoxylin and Eosin stain. After completion of the cytological procedure, the surgical biopsy specimen was processed. The slides of both touch imprint cytology and histopathology were examined by two consultant histopathologists. Touch imprint cytology was diagnostic in 68 cases (97.1%), 55 (78.6%) being malignant; 2 cases (2.9%) were suspicious for malignancy, 11 cases (15.7%) were negative for malignancy, and 2 cases (2.9%) were false negative. Amongst the 70 cases, 55 (78.6%) were malignant, showing squamous cell carcinoma in 49 cases (70%), adenoid cystic carcinoma in 2 cases (2.9%), non-Hodgkin lymphoma in 2 cases (2.9%), mucoepidermoid carcinoma in 1 case (1.4%), and spindle cell sarcoma in 1 case (1.4%). Two cases (2.9%) were suspicious of malignancy, showing atypical squamoid cells on touch imprint cytology, while 13 cases (18.6%) were negative for malignancy, which also included the 2 false negative cases. The overall diagnostic accuracy of touch imprint cytology came out to be 96.7%, with a sensitivity and specificity of 96 and 100%, respectively, while the PPV and NPV of touch imprint cytology were found to be 100 and 84%, respectively. Our experience in this study has demonstrated

  17. Computational assessment of hemodynamics-based diagnostic tools using a database of virtual subjects: Application to three case studies.

    PubMed

    Willemet, Marie; Vennin, Samuel; Alastruey, Jordi

    2016-12-08

    Many physiological indexes and algorithms based on pulse wave analysis have been suggested in order to better assess cardiovascular function. Because these tools are often computed from in-vivo hemodynamic measurements, their validation is time-consuming, challenging, and biased by measurement errors. Recently, a new methodology has been suggested to assess these computed tools theoretically: a database of virtual subjects generated using numerical 1D-0D modeling of arterial hemodynamics. The generated set of simulations encompasses a wide selection of healthy cases that could be encountered in a clinical study. We applied this new methodology to three different case studies that demonstrate the potential of our new tool, and illustrated each of them with a clinically relevant example: (i) we assessed the accuracy of indexes estimating pulse wave velocity; (ii) we validated and refined an algorithm that computes central blood pressure; and (iii) we investigated theoretical mechanisms behind the augmentation index. Our database of virtual subjects is a new tool to assist the clinician: it provides insight into the physical mechanisms underlying the correlations observed in clinical practice. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Triage tools for detecting cervical spine injury in pediatric trauma patients.

    PubMed

    Slaar, Annelie; Fockens, M M; Wang, Junfeng; Maas, Mario; Wilson, David J; Goslings, J Carel; Schep, Niels Wl; van Rijn, Rick R

    2017-12-07

    Pediatric cervical spine injury (CSI) after blunt trauma is rare. Nonetheless, missing these injuries can have severe consequences. To prevent the overuse of radiographic imaging, two clinical decision tools have been developed: the National Emergency X-Radiography Utilization Study (NEXUS) criteria and the Canadian C-spine Rule (CCR). Both tools have been shown to be accurate in deciding whether or not diagnostic imaging is needed in adults presenting for blunt trauma screening at the emergency department. However, little is known about the accuracy of these triage tools in a pediatric population. To determine the diagnostic accuracy of the NEXUS criteria and the Canadian C-spine Rule in a pediatric population evaluated for CSI following blunt trauma. We searched the following databases to 24 February 2015: CENTRAL, MEDLINE, MEDLINE Non-Indexed and In-Process Citations, PubMed, Embase, Science Citation Index, ProQuest Dissertations & Theses Database, OpenGrey, ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects, the Health Technology Assessment, and the Aggressive Research Intelligence Facility. We included all retrospective and prospective studies involving children following blunt trauma that evaluated the accuracy of the NEXUS criteria, the Canadian C-spine Rule, or both. Plain radiography, computed tomography (CT) or magnetic resonance imaging (MRI) of the cervical spine, and follow-up were considered as adequate reference standards. Two review authors independently assessed the quality of included studies using the QUADAS-2 checklists. They extracted data on study design, patient characteristics, inclusion and exclusion criteria, clinical parameters, target condition, reference standard, and the diagnostic two-by-two table. We calculated and plotted sensitivity, specificity and negative predictive value in

  19. Diagnostic accuracy of scapular physical examination tests for shoulder disorders: a systematic review.

    PubMed

    Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J

    2013-09-01

    To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMed, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.

  20. The STARD statement for reporting diagnostic accuracy studies: application to the history and physical examination.

    PubMed

    Simel, David L; Rennie, Drummond; Bossuyt, Patrick M M

    2008-06-01

    The Standards for Reporting of Diagnostic Accuracy (STARD) statement provided guidelines for investigators conducting diagnostic accuracy studies. We reviewed each item in the statement for its applicability to clinical examination diagnostic accuracy research, viewing each discrete aspect of the history and physical examination as a diagnostic test. Nonsystematic review of the STARD statement. Two former STARD Group participants and 1 editor of a journal series on clinical examination research reviewed each STARD item. Suggested interpretations and comments were shared to develop consensus. The STARD Statement applies generally well to clinical examination diagnostic accuracy studies. Three items are the most important for clinical examination diagnostic accuracy studies, and investigators should pay particular attention to their requirements: describe carefully the patient recruitment process, describe participant sampling and address if patients were from a consecutive series, and describe whether the clinicians were masked to the reference standard tests and whether the interpretation of the reference standard test was masked to the clinical examination components or overall clinical impression. The consideration of these and the other STARD items in clinical examination diagnostic research studies would improve the quality of investigations and strengthen conclusions reached by practicing clinicians. The STARD statement provides a very useful framework for diagnostic accuracy studies. The group correctly anticipated that there would be nuances applicable to studies of the clinical examination. We offer guidance that should enhance their usefulness to investigators embarking on original studies of a patient's history and physical examination.

  1. Study on Ultra-deep Azimuthal Electromagnetic Resistivity LWD Tool by Influence Quantification on Azimuthal Depth of Investigation and Real Signal

    NASA Astrophysics Data System (ADS)

    Li, Kesai; Gao, Jie; Ju, Xiaodong; Zhu, Jun; Xiong, Yanchun; Liu, Shuai

    2018-05-01

    This paper proposes a new tool design of ultra-deep azimuthal electromagnetic (EM) resistivity logging while drilling (LWD) for deeper geosteering and formation evaluation, which can benefit hydrocarbon exploration and development. First, a forward numerical simulation of azimuthal EM resistivity LWD is created based on the fast Hankel transform (FHT) method, and its accuracy is confirmed under classic formation conditions. Then, a reasonable range of tool parameters is designed by analyzing the logging response. However, modern technological limitations pose challenges to selecting appropriate tool parameters for ultra-deep azimuthal detection under detectable signal conditions. Therefore, this paper uses grey relational analysis (GRA) to quantify the influence of tool parameters on voltage and azimuthal investigation depth. After analyzing thousands of simulation data points under different environmental conditions, a random forest is used to fit the data and identify an optimal combination of tool parameters, owing to its high efficiency and accuracy. Finally, the structure of the ultra-deep azimuthal EM resistivity LWD tool is designed with a theoretical azimuthal investigation depth of 27.42-29.89 m in different classic isotropic and anisotropic formations. This design serves as a reliable theoretical foundation for efficient geosteering and formation evaluation in high-angle and horizontal (HA/HZ) wells in the future.
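
Grey relational analysis, mentioned above for quantifying parameter influence, ranks candidate series by their closeness to a reference series. A minimal sketch; the data, the "voltage" reference and the 0.5 distinguishing coefficient are illustrative assumptions, not values from the paper:

```python
import numpy as np

def grey_relational_grades(X, y, rho=0.5):
    """Grey relational analysis: score each candidate column of X by its
    relational grade against reference series y. rho is the conventional
    distinguishing coefficient (0.5)."""
    def norm(a):  # min-max normalize each series to [0, 1]
        return (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
    delta = np.abs(norm(X) - norm(y)[:, None])            # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)    # grey relational coefficients
    return coeff.mean(axis=0)                             # one grade per candidate factor

# Hypothetical data: 8 simulation runs, 3 tool parameters, reference = received voltage
rng = np.random.default_rng(42)
X = rng.uniform(size=(8, 3))
y = 2.0 * X[:, 0] + rng.normal(0, 0.05, 8)  # voltage driven mostly by parameter 0
grades = grey_relational_grades(X, y)
```

Higher grades indicate stronger influence on the reference quantity; the paper uses such rankings to prune the parameter space before the random-forest fit.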

  2. New tools for evaluating LQAS survey designs

    PubMed Central

    2014-01-01

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the ‘grey region’ are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions. PMID:24528928

  3. New tools for evaluating LQAS survey designs.

    PubMed

    Hund, Lauren

    2014-02-15

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the 'grey region' are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions.
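
The PPV/NPV evaluation described in these two records can be sketched with a small simulation: draw coverage from a prior, simulate the LQAS binomial sample, apply the decision rule, and compare the classification with the truth relative to a benchmark. All parameter values below (n = 19, decision rule d = 13, 70% benchmark, Beta(4, 2) prior) are illustrative assumptions, not from the paper:

```python
import random

def lqas_ppv_npv(n, d, p_star, alpha, beta, sims=50_000, seed=1):
    """Monte Carlo PPV/NPV of an LQAS design: coverage p ~ Beta(alpha, beta),
    sample x ~ Binomial(n, p), classify 'adequate' when x >= d, and compare
    the classification with the truth (p >= p_star)."""
    rng = random.Random(seed)
    tp = fp = tn = fn = 0
    for _ in range(sims):
        p = rng.betavariate(alpha, beta)                 # prior belief about coverage
        x = sum(rng.random() < p for _ in range(n))      # binomial draw of n subjects
        classified_high, truly_high = x >= d, p >= p_star
        if classified_high and truly_high:
            tp += 1
        elif classified_high:
            fp += 1
        elif truly_high:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = lqas_ppv_npv(n=19, d=13, p_star=0.7, alpha=4, beta=2)
```

Varying `d` (and the width of the grey region around `p_star`) shows how classification accuracy depends on the design parameters, which is the use case the records describe.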

  4. Accuracy of Self-Evaluation in Adults with ADHD: Evidence from a Driving Study

    ERIC Educational Resources Information Center

    Knouse, Laura E.; Bagwell, Catherine L.; Barkley, Russell A.; Murphy, Kevin R.

    2005-01-01

    Research on children with ADHD indicates an association with inaccuracy of self-appraisal. This study examines the accuracy of self-evaluations in clinic-referred adults diagnosed with ADHD. Self-assessments and performance measures of driving in naturalistic settings and on a virtual-reality driving simulator are used to assess accuracy of…

  5. A Comparative Study with RapidMiner and WEKA Tools over some Classification Techniques for SMS Spam

    NASA Astrophysics Data System (ADS)

    Foozy, Cik Feresa Mohd; Ahmad, Rabiah; Faizal Abdollah, M. A.; Chai Wen, Chuah

    2017-08-01

    SMS spamming is a serious attack that can manipulate use of the SMS service by spreading advertisements in bulk. Sending unwanted SMS messages containing advertisements disturbs users and violates the privacy of mobile users. To overcome these issues, many studies have proposed detecting SMS spam using data mining tools. This paper presents a comparative study of five machine learning techniques, namely Naïve Bayes, K-NN (K-Nearest Neighbour algorithm), Decision Tree, Random Forest and Decision Stumps, comparing the accuracy results of RapidMiner and WEKA on the SMS Spam dataset from the UCI Machine Learning Repository.

  6. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.

  7. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.

  8. The STARD Statement for Reporting Diagnostic Accuracy Studies: Application to the History and Physical Examination

    PubMed Central

    Rennie, Drummond; Bossuyt, Patrick M. M.

    2008-01-01

    Summary Objective The Standards for Reporting of Diagnostic Accuracy (STARD) statement provided guidelines for investigators conducting diagnostic accuracy studies. We reviewed each item in the statement for its applicability to clinical examination diagnostic accuracy research, viewing each discrete aspect of the history and physical examination as a diagnostic test. Setting Nonsystematic review of the STARD statement. Interventions Two former STARD Group participants and 1 editor of a journal series on clinical examination research reviewed each STARD item. Suggested interpretations and comments were shared to develop consensus. Measurements and Main Results The STARD Statement applies generally well to clinical examination diagnostic accuracy studies. Three items are the most important for clinical examination diagnostic accuracy studies, and investigators should pay particular attention to their requirements: describe carefully the patient recruitment process, describe participant sampling and address if patients were from a consecutive series, and describe whether the clinicians were masked to the reference standard tests and whether the interpretation of the reference standard test was masked to the clinical examination components or overall clinical impression. The consideration of these and the other STARD items in clinical examination diagnostic research studies would improve the quality of investigations and strengthen conclusions reached by practicing clinicians. Conclusions The STARD statement provides a very useful framework for diagnostic accuracy studies. The group correctly anticipated that there would be nuances applicable to studies of the clinical examination. We offer guidance that should enhance their usefulness to investigators embarking on original studies of a patient’s history and physical examination. PMID:18347878

  9. SU-G-IeP2-04: Dosimetric Accuracy of a Monte Carlo-Based Tool for Cone-Beam CT Organ Dose Calculation: Validation Against OSL and XRQA2 Film Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chesneau, H; Lazaro, D; Blideanu, V

    Purpose: The intensive use of Cone-Beam Computed Tomography (CBCT) during radiotherapy treatments raises questions about the dose to healthy tissues delivered during image acquisitions. We hence developed a Monte Carlo (MC)-based tool to predict doses to organs delivered by the Elekta XVI kV-CBCT. This work aims at assessing the dosimetric accuracy of the MC tool in all tissue types. Methods: The kV-CBCT MC model was developed using the PENELOPE code. The beam properties were validated against measured lateral and depth dose profiles in water, and energy spectra measured with a CdTe detector. The CBCT simulator accuracy then required verification in clinical conditions. For this, we compared calculated and experimental dose values obtained with OSL nanoDots and XRQA2 films inserted in CIRS anthropomorphic phantoms (male, female, and 5-year-old child). Measurements were performed at different locations, including bone and lung structures, and for several acquisition protocols: lung, head-and-neck, and pelvis. OSL and film measurements were corrected when possible for energy dependence, by taking into account spectral variations between calibration and measurement conditions. Results: Comparisons between measured and MC dose values are summarized in table 1. A mean difference of 8.6% was achieved for OSLs when the energy correction was applied, and 89.3% of the 84 dose points were within uncertainty intervals, including those in bones and lungs. Results with XRQA2 are not as good, because incomplete information about electronic equilibrium in film layers hampered the application of a simple energy correction procedure. Furthermore, measured and calculated doses (Fig.1) are in agreement with the literature. Conclusion: The MC-based tool developed was validated with an extensive set of measurements and enables accurate organ dose calculation. It can now be used to compute and report doses to organs for clinical cases, and also to drive

  10. Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect

    NASA Astrophysics Data System (ADS)

    Chao, Chia-Chun George

    2009-03-01

    The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy can be as good as, in maximum error, 1 km along in-track, 0.3 km along radial and 0.1 km along cross-track up to 30 days. Similar accuracies can be expected when the object is tumbling as long as the rate of attitude change is different from the orbit rate. Results of this study reveal an important phenomenon that the solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.

  11. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. Volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out through a standard or an artifact having geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could then be run through compensated codes. This methodology was found to be quite effective for manufacturing high-precision components with greater dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through use of the deterministic manufacturing principle; it was found to be efficient and economical, though limited to the workspace or envelope surface of the measured artifact's geometry or profile.

  12. QUADAS and STARD: evaluating the quality of diagnostic accuracy studies.

    PubMed

    Oliveira, Maria Regina Fernandes de; Gomes, Almério de Castro; Toscano, Cristiana Maria

    2011-04-01

    To compare the performance of two approaches, one based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and another on the Standards for Reporting Studies of Diagnostic Accuracy (STARD), in evaluating the quality of studies validating the OptiMal® rapid malaria diagnostic test. Articles validating the rapid test published until 2007 were searched in the Medline/PubMed database. This search retrieved 13 articles. A combination of 12 QUADAS criteria and three STARD criteria was compared with the 12 QUADAS criteria alone. Articles that fulfilled at least 50% of QUADAS criteria were considered of regular to good quality. Of the 13 articles retrieved, 12 fulfilled at least 50% of QUADAS criteria, and only two fulfilled the combined STARD/QUADAS criteria. Considering the combined criteria (≥ 6 QUADAS and ≥ 3 STARD), two studies (15.4%) showed good methodological quality. The article selection using the proposed combination resulted in two to eight articles, depending on the number of items assumed as the cutoff point. The STARD/QUADAS combination has the potential to provide greater rigor when evaluating the quality of studies validating malaria diagnostic tests, given that it incorporates relevant information not contemplated in the QUADAS criteria alone.
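
The combined selection rule described here reduces to a simple conjunction of two cutoffs. A minimal sketch with hypothetical per-study criterion counts (the study labels and counts are invented for illustration):

```python
def passes_combined_cutoff(quadas_met, stard_met, quadas_cutoff=6, stard_cutoff=3):
    """Combined rule from the record: a study qualifies only if it fulfils at
    least quadas_cutoff of the 12 QUADAS items AND at least stard_cutoff of
    the 3 STARD items."""
    return quadas_met >= quadas_cutoff and stard_met >= stard_cutoff

# Hypothetical (QUADAS met, STARD met) counts for three studies:
studies = {"A": (11, 3), "B": (7, 1), "C": (5, 3)}
selected = [name for name, (q, s) in studies.items() if passes_combined_cutoff(q, s)]
print(selected)  # → ['A']
```

Raising or lowering either cutoff reproduces the "two to eight articles" sensitivity the authors report for their 13-article pool.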

  13. Accuracy and reliability of peer assessment of athletic training psychomotor laboratory skills.

    PubMed

    Marty, Melissa C; Henning, Jolene M; Willse, John T

    2010-01-01

    Peer assessment is defined as students judging the level or quality of a fellow student's understanding. No researchers have yet demonstrated the accuracy or reliability of peer assessment in athletic training education. To determine the accuracy and reliability of peer assessment of athletic training students' psychomotor skills. Cross-sectional study. Entry-level master's athletic training education program. First-year (n = 5) and second-year (n = 8) students. Participants evaluated 10 videos of a peer performing 3 psychomotor skills (middle deltoid manual muscle test, Faber test, and Slocum drawer test) on 2 separate occasions using a valid assessment tool. Accuracy of each peer-assessment score was examined through percentage correct scores. We used a generalizability study to determine how reliable athletic training students were in assessing a peer performing the aforementioned skills. Decision studies using generalizability theory demonstrated how the peer-assessment scores were affected by the number of participants and number of occasions. Participants had a high percentage of correct scores: 96.84% for the middle deltoid manual muscle test, 94.83% for the Faber test, and 97.13% for the Slocum drawer test. They were not able to reliably assess a peer performing any of the psychomotor skills on only 1 occasion. However, the φ increased (exceeding the 0.70 minimal standard) when 2 participants assessed the skill on 3 occasions (φ = 0.79) for the Faber test, with 1 participant on 2 occasions (φ = 0.76) for the Slocum drawer test, and with 3 participants on 2 occasions for the middle deltoid manual muscle test (φ = 0.72). Although students did not detect all errors, they assessed their peers with an average of 96% accuracy. Having only 1 student assess a peer performing certain psychomotor skills was less reliable than having more than 1 student assess those skills on more than 1 occasion. Peer assessment of psychomotor skills

  14. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
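
For a configuration factor estimated by Monte Carlo ray tracing, the one-sigma relative error follows the binomial law sqrt((1 − F)/(N·F)). A toy sketch: the hit test is stubbed with a fixed probability standing in for the true factor, where a real ray tracer would test ray-surface intersections:

```python
import math
import random

def mc_config_factor(n_rays, hit_prob, seed=7):
    """Toy Monte Carlo estimate of a configuration (view) factor: fire n_rays
    from the source surface and count hits on the target. Returns the estimate
    and its one-sigma relative error sqrt((1 - F) / (N * F))."""
    rng = random.Random(seed)
    hits = sum(rng.random() < hit_prob for _ in range(n_rays))  # stubbed hit test
    f_est = hits / n_rays
    rel_err = math.sqrt((1 - f_est) / (n_rays * f_est)) if f_est > 0 else float("inf")
    return f_est, rel_err

f, e = mc_config_factor(n_rays=100_000, hit_prob=0.2)
```

The formula makes the cost scaling explicit: halving the relative error requires four times as many rays, and small factors (F close to 0) are the most expensive to resolve.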

  15. Age Differences in Day-To-Day Speed-Accuracy Tradeoffs: Results from the COGITO Study.

    PubMed

    Ghisletta, Paolo; Joly-Burra, Emilie; Aichele, Stephen; Lindenberger, Ulman; Schmiedek, Florian

    2018-04-23

    We examined adult age differences in day-to-day adjustments in speed-accuracy tradeoffs (SAT) on a figural comparison task. Data came from the COGITO study, with over 100 younger and 100 older adults, assessed for over 100 days. Participants were given explicit feedback about their completion time and accuracy each day after task completion. We applied a multivariate vector auto-regressive model of order 1 to the daily mean reaction time (RT) and daily accuracy scores together, within each age group. We expected that participants adjusted their SAT if the two cross-regressive parameters from RT (or accuracy) on day t-1 to accuracy (or RT) on day t were sizable and negative. We found that: (a) the temporal dependencies of both accuracy and RT were quite strong in both age groups; (b) younger adults showed an effect of their accuracy on day t-1 on their RT on day t, a pattern that was in accordance with adjustments of their SAT; (c) older adults did not appear to adjust their SAT; (d) these effects were partly associated with reliable individual differences within each age group. We discuss possible explanations for older adults' reluctance to recalibrate speed and accuracy on a day-to-day basis.
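
The order-1 vector autoregressive model described above can be fitted by ordinary least squares on lagged values. A minimal sketch on simulated (RT, accuracy) data, not the COGITO data; the coefficient values are invented for illustration:

```python
import numpy as np

def fit_var1(Y):
    """OLS fit of a VAR(1): Y[t] = c + A @ Y[t-1] + e, where Y has shape
    (days, 2) for (mean RT, accuracy). The off-diagonal entries of A are the
    cross-regressive parameters discussed in the record; a sizable negative
    A[0, 1] (accuracy at t-1 -> RT at t) suggests SAT adjustment."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])   # regressors: [1, RT_{t-1}, ACC_{t-1}]
    coef, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return coef[0], coef[1:].T                          # intercept c, coefficient matrix A

# Simulate a 100-day series with a built-in negative accuracy -> RT cross-effect:
rng = np.random.default_rng(0)
A_true = np.array([[0.5, -0.3], [0.1, 0.4]])
Y = np.zeros((100, 2))
for t in range(1, 100):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, 2)
c_hat, A_hat = fit_var1(Y)
```

With enough days, A_hat recovers the generating cross-effects; the study's inference is the reverse direction, reading SAT adjustment off the sign of the fitted cross-regressive terms.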

  16. The Efficacy of Violence Prediction: A Meta-Analytic Comparison of Nine Risk Assessment Tools

    ERIC Educational Resources Information Center

    Yang, Min; Wong, Stephen C. P.; Coid, Jeremy

    2010-01-01

    Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracy have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their…

  17. Simultaneous inter-arm and inter-leg systolic blood pressure differences to diagnose peripheral artery disease: a diagnostic accuracy study.

    PubMed

    Herráiz-Adillo, Ángel; Soriano-Cano, Alba; Martínez-Hortelano, José Alberto; Garrido-Miguel, Miriam; Mariana-Herráiz, Julián Ángel; Martínez-Vizcaíno, Vicente; Notario-Pacheco, Blanca

    2018-04-01

    Inter-arm systolic blood pressure differences (IASBPD) and inter-leg systolic blood pressure differences (ILSBPD) have arisen as potential tools to detect peripheral artery disease (PAD) and individuals at high cardiovascular risk. This study aims to evaluate the diagnostic accuracy of IASBPD and ILSBPD to detect PAD, and whether IASBPD or ILSBPD improves the diagnostic accuracy of the oscillometric ankle-brachial index (ABI). In this prospective study, eligible for inclusion were consecutive adults with at least one of the following cardiovascular risk factors: diabetes, dyslipidemia, hypertension, smoking habit or age ≥65. IASBPD, ILSBPD and ankle-brachial index (ABI) were measured in all participants through four-limb simultaneous oscillometric measurements and compared with Doppler ABI (reference test, positive cut-off: ≤ 0.9). Of 171 subjects included, PAD was confirmed in 23 and excluded in 148. Thirteen and 38 subjects had IASBPD and ILSBPD ≥10 mmHg, respectively. Pearson correlation with Doppler ABI of IASBPD and ILSBPD was 0.073 (P = .343) and -0.628 (P < .001), respectively. Diagnostic accuracy of an ILSBPD ≥10 mmHg to detect PAD was: sensitivity = 69.6% (95%CI = 48.6-90.5), specificity = 85.1% (79.1-91.2), diagnostic odds ratio (dOR) = 13.1 (4.8-35.5) and area under ROC curve (AUC) = 0.765 (0.616-0.915). IASBPD had an AUC = 0.532 (0.394-0.669), and oscillometric ABI had an AUC = 0.977 (0.950-1.000). The addition of ILSBPD to oscillometric ABI reduced dOR from 174.0 (38.3-789.9) to 34.4 (9.5-125.1). Similarly, the addition of IASBPD reduced dOR to 49.3 (14.6-167.0). In a Primary Care population with ≥1 cardiovascular risk factors, ILSBPD showed acceptable diagnostic accuracy for PAD, whilst IASBPD accuracy was negligible. However, the combination of ILSBPD (or IASBPD) with oscillometric ABI did not improve the ability to detect PAD. Thus, oscillometric ABI seems to be preferable to detect PAD and individuals
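
The reported diagnostic odds ratio can be reproduced from a 2×2 table reconstructed from the abstract's prevalence (23 PAD-positive, 148 negative) and the ILSBPD sensitivity/specificity; the counts below (TP = 16, FP = 22, FN = 7, TN = 126) are our inference from those figures, not stated in the record:

```python
import math

def dor_with_ci(tp, fp, fn, tn, z=1.96):
    """Diagnostic odds ratio (TP*TN)/(FP*FN) with a Woolf (log-normal) 95% CI."""
    dor = (tp * tn) / (fp * fn)
    se = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)   # SE of log(dOR)
    lo = math.exp(math.log(dor) - z * se)
    hi = math.exp(math.log(dor) + z * se)
    return dor, lo, hi

# 2x2 table reconstructed from sensitivity 69.6% (16/23) and specificity 85.1% (126/148):
dor, lo, hi = dor_with_ci(tp=16, fp=22, fn=7, tn=126)
print(round(dor, 1), round(lo, 1), round(hi, 1))  # → 13.1 4.8 35.5
```

This matches the abstract's dOR = 13.1 (4.8-35.5), confirming the reconstruction is consistent with the reported accuracy figures.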

  18. The accuracy of references in PhD theses: a case study.

    PubMed

    Azadeh, Fereydoon; Vaez, Reyhaneh

    2013-09-01

    Inaccurate references and citations cause confusion, distrust in the accuracy of a report, wasted time and unnecessary financial charges for libraries, information centres and researchers. The aim of the study was to establish the accuracy of article references in PhD theses from the Tehran and Tabriz Universities of Medical Sciences and their compliance with the Vancouver style. We analysed 357 article references in the Tehran theses and 347 in the Tabriz theses. Six bibliographic elements were assessed: authors' names, article title, journal title, publication year, volume and page range. Referencing errors were divided into major and minor. Sixty-two percent of references in the Tehran theses and 53% of those in the Tabriz theses were erroneous. In total, 164 references in the Tehran theses and 136 in the Tabriz theses were complete without error. Of the 357 article references in the Tehran theses, 34 (9.8%) were in complete accordance with the Vancouver style, compared with none in the Tabriz theses. Accuracy of referencing did not differ significantly between the two groups, but compliance with the Vancouver style was significantly better in the Tehran theses. The accuracy of referencing was not satisfactory in either group, and students need adequate instruction in appropriate referencing methods. © 2013 The authors. Health Information and Libraries Journal © 2013 Health Libraries Group.

  19. Diagnostic accuracy of a screening electronic alert tool for severe sepsis and septic shock in the emergency department.

    PubMed

    Alsolamy, Sami; Al Salamah, Majid; Al Thagafi, Majed; Al-Dorzi, Hasan M; Marini, Abdellatif M; Aljerian, Nawfal; Al-Enezi, Farhan; Al-Hunaidi, Fatimah; Mahmoud, Ahmed M; Alamry, Ahmed; Arabi, Yaseen M

    2014-12-05

    Early recognition of severe sepsis and septic shock is challenging. The aim of this study was to determine the diagnostic accuracy of an electronic alert system in detecting severe sepsis or septic shock among emergency department (ED) patients. An electronic sepsis alert system was developed as part of a quality-improvement project for severe sepsis and septic shock. The system screened all adult ED patients for a combination of systemic inflammatory response syndrome and organ dysfunction criteria (hypotension, hypoxemia or lactic acidosis). This study included all patients older than 14 years who presented to the ED of a tertiary care academic medical center from Oct. 1, 2012 to Jan. 31, 2013. As a comparator, emergency medicine physicians or the critical care physician identified the patients with severe sepsis or septic shock. In the ED, vital signs were manually entered into the hospital electronic health record every hour in the critical care area and every two hours in other areas. We also calculated the time from the alert to intensive care unit (ICU) referral. Of the 49,838 patients who presented to the ED, 222 (0.4%) were identified as having severe sepsis or septic shock. The electronic sepsis alert had a sensitivity of 93.18% (95% CI, 88.78% - 96.00%), specificity of 98.44% (95% CI, 98.33% - 98.55%), positive predictive value of 20.98% (95% CI, 18.50% - 23.70%) and negative predictive value of 99.97% (95% CI, 99.95% - 99.98%) for severe sepsis and septic shock. The alert preceded ICU referral by a median of 4.02 hours (Q1 - Q3: 1.25-8.55). Our study shows that an electronic sepsis alert tool has high sensitivity and specificity in recognizing severe sepsis and septic shock, which may improve early recognition and management.
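
    The reported predictive values follow from the alert's sensitivity and specificity combined with the low prevalence of sepsis among ED visits (222 of 49,838). A short sketch of the standard Bayes calculation; the inputs are taken from the abstract, so small rounding differences are expected:

```python
def predictive_values(prevalence, sensitivity, specificity):
    """PPV and NPV from prevalence via Bayes' theorem."""
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    ppv = prevalence * sensitivity / p_pos
    npv = (1 - prevalence) * specificity / (1 - p_pos)
    return ppv, npv

prevalence = 222 / 49838  # ~0.45% of ED visits
ppv, npv = predictive_values(prevalence, 0.9318, 0.9844)
# ppv ~0.21 and npv ~0.9997, close to the reported 20.98% and 99.97%
```

The low PPV despite high specificity illustrates why screening alerts for rare conditions generate many false positives.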

  20. Method and apparatus for characterizing and enhancing the dynamic performance of machine tools

    DOEpatents

    Barkman, William E; Babelay, Jr., Edwin F

    2013-12-17

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include dynamic one axis positional accuracy of the machine tool, dynamic cross-axis stability of the machine tool, and dynamic multi-axis positional accuracy of the machine tool.

  1. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software tools for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition, acquired on two different MS instruments using different SWATH isolation window setups. For consistent evaluation, we developed LFQbench, an R package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled the developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  2. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information specifically designed to define parcel boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatially based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. With this modernization, the new cadastral database is no longer based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, the reform has raised unexpected queries that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be further treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  3. Brain mechanisms of perceiving tools and imagining tool use acts: a functional MRI study.

    PubMed

    Wadsworth, Heather M; Kana, Rajesh K

    2011-06-01

    The ability to conceptualize and manipulate tools in a complex manner is a distinguishing characteristic of humans, and forms a promising milestone in human evolution. While using a tool is a motor act, plans for executing such acts may be evoked by the mere perception of a tool. Imagining an action with a tool may involve mentally readjusting body posture, planning motor movements, and matching such plans with the model action. This fMRI study examined the brain responses of 32 healthy adults while they either viewed a tool or imagined using it. While both tasks recruited similar regions, imagined tool use showed greater activation in motor areas and in areas around the bilateral temporoparietal junction. Viewing tools, on the other hand, produced robust activation in the inferior frontal, occipital, parietal, and ventral temporal areas. Analysis of gender differences indicated that males recruited the medial prefrontal and anterior cingulate cortices, whereas females recruited the left supramarginal gyrus and left anterior insula. While viewing a tool seems to generate prehensions about using it, the imagined action with a tool mirrored the brain responses underlying its functional use. These findings suggest that perception and imagination of tools may form precursors to overt actions. Published by Elsevier Ltd.

  4. Delineating Beach and Dune Morphology from Massive Terrestrial Laser Scanning Data Using the Generic Mapping Tools

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Wang, G.; Yan, B.; Kearns, T.

    2016-12-01

    Terrestrial laser scanning (TLS) techniques have proven to be efficient tools for collecting three-dimensional, high-density and high-accuracy point clouds for coastal research and resource management. However, processing and presenting massive TLS data remains a challenge when targeting a large area at high resolution. This article introduces a workflow that uses shell-scripting techniques to chain together tools from the Generic Mapping Tools (GMT), the Geographic Resources Analysis Support System (GRASS), and other command-based open-source utilities for automating TLS data processing. TLS point clouds acquired in the beach and dune area near Freeport, Texas in May 2015 were used for the case study. Shell scripts for rotating the coordinate system, removing anomalous points, assessing data quality, generating high-accuracy bare-earth DEMs, and quantifying beach and sand dune features (shoreline, cross-dune section, dune ridge, toe, and volume) are presented. According to this investigation, the accuracy of the laser measurements (distance from the scanner to the targets) is within a couple of centimeters. However, the positional accuracy of TLS points with respect to a global coordinate system is about 5 cm, dominated by the accuracy of the GPS solutions for the positions of the scanner and reflector. The accuracy of the TLS-derived bare-earth DEM is primarily determined by the size of the grid cells and the roughness of the terrain surface. A DEM with grid cells of 4 m × 1 m (shoreline by cross-shore) provides a suitable spatial resolution and accuracy for deriving major beach and dune features.

  5. PubChem3D: conformer ensemble accuracy

    PubMed Central

    2013-01-01

    Background PubChem is a free and publicly available resource containing substance descriptions and their associated biological activity information. PubChem3D is an extension to PubChem containing computationally derived three-dimensional (3-D) structures of small molecules. All the tools and services that are part of PubChem3D rely upon the quality of the 3-D conformer models. Construction of the conformer models currently available in PubChem3D involves a clustering stage to sample the conformational space spanned by the molecule. While this stage allows one to downsize the conformer models to a more manageable size, it may result in a loss of the ability to reproduce experimentally determined “bioactive” conformations, for example, those found for PDB ligands. This study examines the extent of this accuracy loss and considers its effect on the 3-D similarity analysis of molecules. Results Conformer models consisting of up to 100,000 conformers per compound were generated for 47,123 small molecules whose structures were experimentally determined, and the conformers in each model were clustered to reduce the model size to a maximum of 500 conformers per molecule. The accuracy of the conformer models before and after clustering was evaluated using five different measures: root-mean-square distance (RMSD), shape-optimized shape-Tanimoto (STST-opt) and combo-Tanimoto (ComboTST-opt), and color-optimized color-Tanimoto (CTCT-opt) and combo-Tanimoto (ComboTCT-opt). On average, clustering decreased the conformer model accuracy, increasing the conformer ensemble’s RMSD to the bioactive conformer (by 0.18 ± 0.12 Å), and decreasing the STST-opt, ComboTST-opt, CTCT-opt, and ComboTCT-opt scores (by 0.04 ± 0.03, 0.16 ± 0.09, 0.09 ± 0.05, and 0.15 ± 0.09, respectively). Conclusion This study shows the RMSD accuracy performance of the PubChem3D conformer models is operating as designed. In addition, the effect of PubChem3D

  6. An accuracy evaluation of clinical, arthrometric, and stress-sonographic acute ankle instability examinations.

    PubMed

    Wiebking, Ulrich; Pacha, Tarek Omar; Jagodzinski, Michael

    2015-03-01

    Ankle sprain injuries, often due to lateral ligamentous injury, are the most common conditions in sports traumatology. Correct diagnosis requires an understanding of the assessment tools with a high degree of diagnostic accuracy. There is still no clear consensus or standard method to differentiate between a ligament tear and an ankle sprain. In addition to clinical assessments, stress sonography, arthrometry and other methods are often performed simultaneously. These methods are often costly, however, and their accuracy is controversial. The aim of this study was to investigate three different measurement tools that can be used after a lateral ligament lesion of the ankle with injury of the anterior talofibular ligament, to determine their diagnostic accuracy. Thirty patients were recruited for this study. The mean patient age was 35±14 years. There were 15 patients with a ligamentous rupture and 15 patients with an ankle sprain. We evaluated two devices and one clinical assessment, for which we calculated the sensitivity and specificity: stress sonography according to Hoffmann, an arthrometer to investigate the 100 N talar drawer and maximum manual testing, and the clinical anterior drawer test. High-resolution sonography was used as the gold standard. The ultrasound-assisted method according to Hoffmann, with a 3 mm cut-off value, displayed a sensitivity of 0.27 and a specificity of 0.87. Using a 3.95 mm cut-off value, the arthrometer displayed a sensitivity of 0.8 and a specificity of 0.4. The clinical investigation's sensitivity and specificity were 0.93 and 0.67, respectively. Different assessment methods for diagnosing ankle ruptures are suggested in the literature; however, these methods lack reliable data to set investigation standards. Clinical examination under adequate analgesia seems to remain the most reliable tool for investigating ligamentous ankle lesions. Further clinical studies with higher case numbers are necessary

  7. Accuracy of the 15-item Geriatric Depression Scale (GDS-15) in a community-dwelling oldest-old sample: the Pietà Study.

    PubMed

    Dias, Filipi Leles da Costa; Teixeira, Antônio Lúcio; Guimarães, Henrique Cerqueira; Barbosa, Maira Tonidandel; Resende, Elisa de Paula França; Beato, Rogério Gomes; Carmona, Karoline Carvalho; Caramelli, Paulo

    2017-01-01

    Late-life depression (LLD) is common, but remains underdiagnosed. Validated screening tools for use with the oldest-old in clinical practice are still lacking, particularly in developing countries. To evaluate the accuracy of a screening tool for LLD in a community-dwelling oldest-old sample. We evaluated 457 community-dwelling elderly subjects, aged ≥75 years and without dementia, with the Geriatric Depression Scale (GDS-15). Depression diagnosis was established according to DSM-IV criteria following a structured psychiatric interview with the Mini International Neuropsychiatric Interview (MINI). Fifty-two individuals (11.4%) were diagnosed with major depression. The area under the receiver operating characteristic (ROC) curve was 0.908 (p<0.001). Using a cut-off score of 5/6 (not depressed/depressed), 84 (18.4%) subjects were considered depressed by the GDS-15 (kappa coefficient = 53.8%, p<0.001). The 4/5 cut-off point achieved the best combination of sensitivity (86.5%) and specificity (82.7%) (Youden's index = 0.692), with robust negative (0.9802) and reasonable positive predictive values (0.3819). GDS-15 showed good accuracy as a screening tool for major depression in this community-based sample of low-educated oldest-old individuals. Our findings support the use of the 4/5 cut-off score, which showed the best diagnostic capacity.
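
    The cut-off selection described above maximizes Youden's index, J = sensitivity + specificity − 1. A minimal sketch; only the 4/5 values come from the abstract, while the other cut-offs' values are hypothetical placeholders:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic; higher marks a better screening cut-off."""
    return sensitivity + specificity - 1

# (sensitivity, specificity) per GDS-15 cut-off; only 4/5 is from the study
cutoffs = {
    "3/4": (0.92, 0.70),    # hypothetical
    "4/5": (0.865, 0.827),  # reported in the abstract
    "5/6": (0.78, 0.88),    # hypothetical
}
best = max(cutoffs, key=lambda c: youden_index(*cutoffs[c]))
# best == "4/5", with J = 0.692 as reported
```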

  8. SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Wu, G; Gao, Y

    2016-06-15

    Purpose: To explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and ¹⁸F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape’s “complexity score”. This complexity score was calculated for standard geometric shapes and MC-generated target volumes in PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference in volume between the MC volumes and gradient-based volumes was 11% (0.4%–48.4%). The complexity score showed strong correlation with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volume were 6% and 29%, respectively. The former group is more isolated, while the latter is more surrounded by tissues with relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and is thus not a good predictor of uncertainty in target volume delineation. The location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
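
    A curvature-based complexity score of this kind can be sketched for a polygonal outline by using discrete turning angles as the curvature vector; this is an illustrative reconstruction of the idea, not the authors' program:

```python
import math

def turning_angles(points):
    """Exterior (turning) angle at each vertex of a closed polygon,
    a simple discrete stand-in for contour curvature."""
    n = len(points)
    angles = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        angles.append(d)
    return angles

def complexity_score(points):
    """Standard deviation of the turning-angle 'curvature vector'."""
    ang = turning_angles(points)
    mean = sum(ang) / len(ang)
    return math.sqrt(sum((a - mean) ** 2 for a in ang) / len(ang))

# A square (regular shape) has zero complexity; a notched outline scores higher
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
notched = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```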

  9. Improvement of CD-SEM mark position measurement accuracy

    NASA Astrophysics Data System (ADS)

    Kasa, Kentaro; Fukuhara, Kazuya

    2014-04-01

    CD-SEM is now attracting attention as a tool that can accurately measure the positional error of device patterns. However, measurement accuracy can deteriorate due to pattern asymmetry, as in the cases of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias has been proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.

  10. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  11. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of its use are given. Engineering judgment, aided by such analytical tools, is the final arbiter of accuracy estimation.
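
    For reference, the uncorrected Cramer-Rao bound the report builds on takes the standard textbook form for a scalar parameter; the notation here is generic, not the report's:

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \mathcal{I}(\theta)^{-1},
\qquad
\mathcal{I}(\theta)
= \mathbb{E}\!\left[\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^{\!2}\right]
= -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln L(\theta)}{\partial \theta^{2}}\right]
```

where L is the likelihood of the flight data; the corrections discussed in the abstract adjust this bound for colored noise and modeling error.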

  12. A Study of the dimensional accuracy obtained by low cost 3D printing for possible application in medicine

    NASA Astrophysics Data System (ADS)

    Kitsakis, K.; Alabey, P.; Kechagias, J.; Vaxevanidis, N.

    2016-11-01

    'Low-cost 3D printing' is a term that refers to the fused filament fabrication (FFF) technique, which constructs physical prototypes by depositing material layer by layer through a heated nozzle. Nowadays, 3D printing is widely used in medical applications such as tissue engineering, as well as a supporting tool for diagnosis and treatment in neurosurgery, orthopedics and dental-cranio-maxillo-facial surgery. 3D CAD medical models are usually obtained from MRI or CT scans and then sent to a 3D printer for physical model creation. The present paper provides a brief overview of the benefits and limitations of 3D printing applications in the field of medicine, along with a dimensional accuracy study of the low-cost 3D printing technique.

  13. Evaluating the diagnostic accuracy of the Xpert MTB/RIF assay on bronchoalveolar lavage fluid: A retrospective study.

    PubMed

    Lu, Yanjun; Zhu, Yaowu; Shen, Na; Tian, Lei; Sun, Ziyong

    2018-02-08

    Limited data on the diagnostic accuracy of the Xpert MTB/RIF assay using bronchoalveolar lavage fluid from patients with suspected pulmonary tuberculosis (PTB) have been reported in China. Therefore, a retrospective study was designed to evaluate the diagnostic accuracy of this assay. Clinical, radiological, and microbiological characteristics of 238 patients with suspected PTB were reviewed retrospectively. The sensitivity, specificity, positive predictive value, and negative predictive value for the diagnosis of active PTB were calculated for the Xpert MTB/RIF assay using TB culture or final diagnosis based on clinical and radiological evaluation as the reference standard. Compared to the culture method, the sensitivity and specificity of the Xpert MTB/RIF assay were 84.5% and 98.9%, respectively, and those of smear microscopy were 36.2% and 100%, respectively. When final diagnosis based on clinical and radiological evaluation was used as the reference, the sensitivity and specificity of the assay were 72.9% and 98.7%, respectively, which were significantly higher than those of smear microscopy. The Xpert MTB/RIF assay on bronchoalveolar lavage fluid could serve as an additional rapid diagnostic tool for PTB in a high TB-burden country and improve the time to TB treatment initiation in patients with PTB. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  14. SYRCLE’s risk of bias tool for animal studies

    PubMed Central

    2014-01-01

    Background Systematic Reviews (SRs) of experimental animal studies are not yet common practice, but awareness of the merits of conducting such SRs is steadily increasing. As animal intervention studies differ from randomized clinical trials (RCT) in many aspects, the methodology for SRs of clinical trials needs to be adapted and optimized for animal intervention studies. The Cochrane Collaboration developed a Risk of Bias (RoB) tool to establish consistency and avoid discrepancies in assessing the methodological quality of RCTs. A similar initiative is warranted in the field of animal experimentation. Methods We provide an RoB tool for animal intervention studies (SYRCLE’s RoB tool). This tool is based on the Cochrane RoB tool and has been adjusted for aspects of bias that play a specific role in animal intervention studies. To enhance transparency and applicability, we formulated signalling questions to facilitate judgment. Results The resulting RoB tool for animal studies contains 10 entries. These entries are related to selection bias, performance bias, detection bias, attrition bias, reporting bias and other biases. Half these items are in agreement with the items in the Cochrane RoB tool. Most of the variations between the two tools are due to differences in design between RCTs and animal studies. Shortcomings in, or unfamiliarity with, specific aspects of experimental design of animal studies compared to clinical studies also play a role. Conclusions SYRCLE’s RoB tool is an adapted version of the Cochrane RoB tool. Widespread adoption and implementation of this tool will facilitate and improve critical appraisal of evidence from animal studies. This may subsequently enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the methodological quality of animal studies. PMID:24667063

  15. Systematic review of fall risk screening tools for older patients in acute hospitals.

    PubMed

    Matarese, Maria; Ivziku, Dhurata; Bartolozzi, Francesco; Piredda, Michela; De Marinis, Maria Grazia

    2015-06-01

    To determine the most accurate fall risk screening tools for predicting falls among patients aged 65 years or older admitted to acute care hospitals. Falls represent a serious problem in older inpatients due to the potential physical, social, psychological and economic consequences. Older inpatients present with risk factors associated with age-related physiological and psychological changes as well as multiple morbidities. Thus, fall risk screening tools for older adults should include these specific risk factors. There are no published recommendations addressing what tools are appropriate for older hospitalized adults. Systematic review. MEDLINE, CINAHL and Cochrane electronic databases were searched between January 1981-April 2013. Only prospective validation studies reporting sensitivity and specificity values were included. Recommendations of the Cochrane Handbook of Diagnostic Test Accuracy Reviews have been followed. Three fall risk assessment tools were evaluated in seven articles. Due to the limited number of studies, meta-analysis was carried out only for the STRATIFY and Hendrich Fall Risk Model II. In the combined analysis, the Hendrich Fall Risk Model II demonstrated higher sensitivity than STRATIFY, while the STRATIFY showed higher specificity. In both tools, the Youden index showed low prognostic accuracy. The identified tools do not demonstrate predictive values as high as needed for identifying older inpatients at risk for falls. For this reason, no tool can be recommended for fall detection. More research is needed to evaluate fall risk screening tools for older inpatients. © 2014 John Wiley & Sons Ltd.

  16. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. 
Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the
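
    The operator-overloading flavor of AD contrasted with source transformation above can be illustrated with a minimal dual-number class; this is a pedagogical sketch of the technique, not related to any of the four tools evaluated:

```python
class Dual:
    """Forward-mode AD value: carries f(x) and f'(x) together, the core
    idea behind operator-overloading AD tools."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with a unit tangent and read off df/dx."""
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7
d = derivative(lambda x: x * x + 3 * x, 2.0)
```

Source-transformation tools instead generate new derivative code at compile time, which is why, as the abstract notes, they can be far faster at run time.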

  17. Dynamic Development of Complexity and Accuracy: A Case Study in Second Language Academic Writing

    ERIC Educational Resources Information Center

    Rosmawati

    2014-01-01

    This paper reports on the development of complexity and accuracy in English as a Second Language (ESL) academic writing. Although research into complexity and accuracy development in second language (L2) writing has been well established, few studies have assumed the multidimensionality of these two constructs (Norris & Ortega, 2009) or…

  18. Experimental Study of Tool Wear and Grinding Forces During BK-7 Glass Micro-grinding with Modified PCD Tool

    NASA Astrophysics Data System (ADS)

    Pratap, A.; Sahoo, P.; Patra, K.; Dyakonov, A. A.

    2017-09-01

    This study focuses on improving the grinding performance of BK-7 glass using a polycrystalline diamond (PCD) micro-tool. Micro-tools were modified using wire EDM, and the performance of the modified tools was compared with that of the as-received tool. Tool wear of the different types of tools was observed. To quantify tool wear, a method based on the weight loss of the tool is introduced in this study. The modified tools significantly reduced tool wear in comparison with the normal tool. Grinding forces increased with machining time due to tool wear; however, the modified tools produced lower forces and can thus improve the life of the PCD micro-grinding tool.

  19. Using a Software Tool in Forecasting: a Case Study of Sales Forecasting Taking into Account Data Uncertainty

    NASA Astrophysics Data System (ADS)

    Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter

    2016-10-01

    Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of the company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, the quality of input data, the time period and other factors. The input data are usually not deterministic but are often of a random nature; they are affected by uncertainties of the market environment and many other factors. By taking input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecast value is simulated with a case study model. Statistical and risk analyses of the forecast results are carried out, including sensitivity analysis and variable impact analysis.

  20. Fall Risk Assessment Tools for Elderly Living in the Community: Can We Do Better?

    PubMed

    Palumbo, Pierpaolo; Palmerini, Luca; Bandinelli, Stefania; Chiari, Lorenzo

    2015-01-01

    Falls are a common, serious threat to the health and self-confidence of the elderly. Assessment of fall risk is an important aspect of effective fall prevention programs. In order to test whether it is possible to outperform current prognostic tools for falls, we analyzed 1010 variables pertaining to mobility collected from 976 elderly subjects (InCHIANTI study). We trained and validated a data-driven model that issues probabilistic predictions about future falls. We benchmarked the model against other fall risk indicators: history of falls, gait speed, Short Physical Performance Battery (Guralnik et al. 1994), and the literature-based fall risk assessment tool FRAT-up (Cattelani et al. 2015). Parsimony in the number of variables included in a tool is often considered a proxy for ease of administration. We studied how constraints on the number of variables affect predictive accuracy. The proposed model and FRAT-up both attained the same discriminative ability; the area under the Receiver Operating Characteristic (ROC) curve (AUC) for multiple falls was 0.71. They outperformed the other risk scores, which reported AUCs for multiple falls between 0.64 and 0.65. Thus, it appears that both data-driven and literature-based approaches are better at estimating fall risk than commonly used fall risk indicators. The accuracy-parsimony analysis revealed that tools with a small number of predictors (~1-5) were suboptimal. Increasing the number of variables improved the predictive accuracy, reaching a plateau at ~20-30, which we can consider as the best trade-off between accuracy and parsimony. Obtaining the values of these ~20-30 variables does not compromise usability, since they are usually available in comprehensive geriatric assessments.
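
    The AUC figures reported above have a direct Mann-Whitney interpretation: the probability that a randomly chosen faller receives a higher risk score than a randomly chosen non-faller. A minimal sketch (the scores are invented):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a random positive
    (faller) outscores a random negative (non-faller); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# An AUC of 0.71, as reported for FRAT-up and the data-driven model,
# means a 71% chance that a faller is ranked above a non-faller.
print(auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2]))
```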

  1. Statistical process control and verifying positional accuracy of a cobra motion couch using step-wedge quality assurance tool.

    PubMed

    Binny, Diana; Lancaster, Craig M; Trapp, Jamie V; Crowe, Scott B

    2017-09-01

    This study utilizes process control techniques to identify action limits for TomoTherapy couch positioning quality assurance tests. A test was introduced to monitor the accuracy of the applied couch offset detection in the TomoTherapy Hi-Art treatment system using the TQA "Step-Wedge Helical" module and MVCT detector. Individual X-charts, process capability (cp), probability (P), and acceptability (cpk) indices were used to monitor 4 years of couch IEC offset data to detect systematic and random errors in couch positional accuracy at different action levels. Process capability tests were also performed on the retrospective data to define tolerances based on user-specified levels. A second study was carried out whereby physical couch offsets were applied using the TQA module and the MVCT detector was used to detect the observed variations. Random and systematic variations were observed relative to the SPC-based upper and lower control limits, and investigations were carried out to maintain the ongoing stability of the process over a 4-year and a three-monthly period. Local trend analysis showed mean variations of up to ±0.5 mm in the three-monthly analysis period for all IEC offset measurements. Variations were also observed between detected and applied offsets using the MVCT detector in the second study, largely in the vertical direction, and actions were taken to remediate this error. Based on the results, it was recommended that imaging shifts in each coordinate direction be applied only after assessing the machine for applied-versus-detected test results using the step helical module. User-specified tolerance levels of at least ±2 mm were recommended for a test frequency of once every 3 months to improve couch positional accuracy. SPC enables detection of systematic variations prior to reaching machine tolerance levels. Couch encoding system recalibrations reduced variations to user-specified levels and a monitoring period of 3 months using SPC facilitated in detecting
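
    A minimal sketch of the SPC machinery described: individuals (X) chart control limits derived from the moving range, and a cpk index against user-specified ±2 mm tolerances. The offset readings below are illustrative, not the study's data:

```python
def spc_limits(data):
    """Individuals (X) chart control limits from the average moving range."""
    mean = sum(data) / len(data)
    mr = [abs(data[i] - data[i - 1]) for i in range(1, len(data))]
    sigma = (sum(mr) / len(mr)) / 1.128  # d2 constant for subgroups of 2
    return mean - 3 * sigma, mean, mean + 3 * sigma

def cpk(data, lsl, usl):
    """Process acceptability index against lower/upper specification limits."""
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    return min(usl - mean, mean - lsl) / (3 * sd)

# Hypothetical couch-offset QA readings (mm) against a +/-2 mm tolerance:
offsets = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 0.1, -0.3, 0.0, 0.1]
lcl, centre, ucl = spc_limits(offsets)
print(f"LCL={lcl:.3f}, centre={centre:.3f}, UCL={ucl:.3f}, "
      f"cpk={cpk(offsets, -2.0, 2.0):.2f}")
```

    Points outside the SPC limits flag systematic drift before the ±2 mm tolerance itself is breached, which is the rationale given above for combining SPC with user-specified action levels.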

  2. One process is not enough! A speed-accuracy tradeoff study of recognition memory.

    PubMed

    Boldini, Angela; Russo, Riccardo; Avons, S E

    2004-04-01

    Speed-accuracy tradeoff (SAT) methods have been used to contrast single- and dual-process accounts of recognition memory. In these procedures, subjects are presented with individual test items and are required to make recognition decisions under various time constraints. In this experiment, we presented word lists under incidental learning conditions, varying the modality of presentation and level of processing. At test, we manipulated the interval between each visually presented test item and a response signal, thus controlling the amount of time available to retrieve target information. Study-test modality match had a beneficial effect on recognition accuracy at short response-signal delays (≤300 msec). Conversely, recognition accuracy benefited more from deep than from shallow processing at study only at relatively long response-signal delays (≥300 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory.

  3. Advanced Neutronics Tools for BWR Design Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santamarina, A.; Hfaiedh, N.; Letellier, R.

    2006-07-01

    This paper summarizes the developments implemented in the new APOLLO2.8 neutronics tool to meet the required target accuracy in LWR applications, particularly void effects and the pin-by-pin power map in BWRs. The Method of Characteristics was developed to allow efficient LWR assembly calculations in 2D-exact heterogeneous geometry; resonant reaction calculation was improved by the optimized SHEM-281 group mesh, which avoids the resonance self-shielding approximation below 23 eV, and by the new space-dependent method for resonant mixtures that accounts for resonance overlapping. Furthermore, a new library, CEA2005, processed from JEFF3.1 evaluations with feedback from critical experiments and LWR P.I.E., is used. The specific '2005-2007 BWR Plan' established to demonstrate the validation/qualification of this neutronics tool is described. Some results from the validation process are presented: the comparison of APOLLO2.8 results to reference Monte Carlo TRIPOLI4 results on specific BWR benchmarks emphasizes the ability of the deterministic tool to calculate the BWR assembly multiplication factor within 200 pcm accuracy for void fractions varying from 0 to 100%. The qualification process against the BASALA mock-up experiment stresses APOLLO2.8/CEA2005 performance: pin-by-pin power is always predicted within 2% accuracy, and the reactivity worth of B4C or Hf cruciform control blades, as well as Gd pins, is predicted within 1.2% accuracy. (authors)

  4. An update of the appraisal of the accuracy and utility of cervical discography in chronic neck pain.

    PubMed

    Onyewu, Obi; Manchikanti, Laxmaiah; Falco, Frank J E; Singh, Vijay; Geffert, Stephanie; Helm, Standiford; Cohen, Steven P; Hirsch, Joshua A

    2012-01-01

    ) level of evidence criteria, this systematic review indicates the strength of evidence is limited for the diagnostic accuracy of cervical discography. Limitations include a paucity of literature, poor methodological quality, and very few studies performed utilizing International Association for the Study of Pain (IASP) criteria. There is limited evidence for the diagnostic accuracy of cervical discography. Nevertheless, in the absence of any other means to establish a relationship between pathology and symptoms, cervical provocation discography may be an important evaluation tool in certain contexts to identify a subset of patients with chronic neck pain secondary to intervertebral disc disorders. Based on the current systematic review, cervical provocation discography performed according to the IASP criteria with control disc(s), and a minimum provoked pain intensity of 7 of 10, or at least 70% reproduction of the worst pain (e.g., for a worst spontaneous pain of 7, 7 × 70% ≈ 5), may be a useful tool for evaluating chronic pain and cervical disc abnormalities in a small proportion of patients.

  5. A Visual mining based framework for classification accuracy estimation

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in remote sensing applications, where correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual-mining-based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. Together, these tools provide an efficient approach for tracking improvements in classification accuracy and help refine the training data set. We illustrate the framework by investigating the effects of various resampling methods on classification accuracy and find that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We also investigate the optimal number of folds required for effective analysis of LISS-IV images.

  6. Chapter 13 - Perspectives on LANDFIRE Prototype Project Accuracy Assessment

    Treesearch

    James Vogelmann; Zhiliang Zhu; Jay Kost; Brian Tolk; Donald Ohlen

    2006-01-01

    The purpose of this chapter is to provide a general overview of the many aspects of accuracy assessment pertinent to the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project). The LANDFIRE Prototype formed a large and complex research and development project with many broad-scale data sets and products developed throughout...

  7. The Diagnostic Accuracy of the Berg Balance Scale in Predicting Falls.

    PubMed

    Park, Seong-Hi; Lee, Young-Shin

    2017-11-01

    This study aimed to evaluate the predictive validity of the Berg Balance Scale (BBS) as a screening tool for fall risk among those with varied levels of balance. A total of 21 studies reporting the predictive validity of the BBS for fall risk were meta-analyzed. With regard to the overall predictive validity of the BBS, the pooled sensitivity and specificity were 0.72 and 0.73, respectively; the area under the curve was 0.84. The findings showed statistical heterogeneity among studies. Among the sub-groups, the age group of those younger than 65 years, those with neuromuscular disease, those with 2+ falls, and those with a cutoff point of 45 to 49 showed better sensitivity with statistically less heterogeneity. The empirical evidence indicates that the BBS is a suitable tool to screen for the risk of falls, showing good predictability when used with the appropriate criteria and applied to those with neuromuscular disease.
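
    Pooled sensitivity and specificity can be illustrated by naive pooling of 2x2 cells across studies. This is a simplification (formal diagnostic meta-analyses typically use bivariate random-effects models), and the counts below are invented:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from one 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

def pooled(studies):
    """Naive pooling: sum (tp, fn, tn, fp) cells across studies,
    then compute sensitivity/specificity on the totals."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    return sens_spec(tp, fn, tn, fp)

# Two hypothetical studies, each as (tp, fn, tn, fp):
print(pooled([(72, 28, 73, 27), (36, 14, 36, 14)]))
```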

  8. Validation studies in forensic odontology - Part 1: Accuracy of radiographic matching.

    PubMed

    Page, Mark; Lain, Russell; Kemp, Richard; Taylor, Jane

    2018-05-01

    As part of a series of studies aimed at validating techniques in forensic odontology, this study aimed to validate the accuracy of ante-mortem (AM)/postmortem (PM) radiographic matching by dentists and forensic odontologists. This study used a web-based interface with 50 pairs of AM and PM radiographs from real casework, at varying degrees of difficulty. Participants were shown both radiographs as a pair and initially asked to decide if they represented the same individual using a yes/no binary forced-choice decision. Participants were asked to assess their level of confidence in their decision, and to make a conclusion using one of the ABFO (American Board of Forensic Odontology), INTERPOL (International Criminal Police Organisation) and DVISys™ (DVI System International, Plass Data Software) identification scale degrees. The mean false-positive rate using the binary choice scale was 12%. Overall accuracy was 89% using this model; however, 13% of participants scored below 80%. Only 25% of participants accurately answered yes or no >90% of the time, with no individual making the correct yes/no decision for all 50 pairs of radiographs. Non-odontologists (lay participants) scored poorly, with a mean accuracy of only 60%. Use of the graded ABFO, DVISys and INTERPOL scales resulted in general improvements in performance, with the false-positive and false-negative rates falling to approximately 2% overall. Inter-examiner agreement in assigning scale degrees was good (ICC=0.64); however, there was little correlation between confidence and either accuracy or agreement among practitioners. These results suggest that use of a non-binary scale is supported over a match/non-match call, as it reduces the frequency of false positives and negatives. The use of the terms "possible" and "insufficient information" in the same scale appears to create confusion, reducing inter-examiner agreement. The lack of agreement between higher-performing and lower-performing groups suggests that

  9. Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.

    PubMed

    de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M

    2012-04-15

    A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
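
    A simpler, non-Bayesian correction for partial verification bias, against which methods like the one above are often compared, is the classic Begg and Greenes approach: it assumes verification depends only on the index test result, so disease probabilities estimated in the verified subset can be re-weighted to the full cohort. A sketch with hypothetical counts:

```python
def begg_greenes(tp, fp, fn, tn, unverified_pos, unverified_neg):
    """Begg & Greenes correction for partial verification bias.
    (tp, fp, fn, tn) come from the verified subset; unverified_pos /
    unverified_neg are index-test positives/negatives never verified."""
    n_pos = tp + fp + unverified_pos  # all index-test positives
    n_neg = fn + tn + unverified_neg  # all index-test negatives
    # Disease probability given test result, from the verified subset:
    p_d_pos = tp / (tp + fp)
    p_d_neg = fn / (fn + tn)
    # Corrected sensitivity P(T+|D+) and specificity P(T-|D-) via Bayes:
    sens = (p_d_pos * n_pos) / (p_d_pos * n_pos + p_d_neg * n_neg)
    spec = ((1 - p_d_neg) * n_neg) / (
        (1 - p_d_pos) * n_pos + (1 - p_d_neg) * n_neg)
    return sens, spec

# With full verification the correction changes nothing:
print(begg_greenes(90, 10, 10, 90, 0, 0))
# With 100 unverified test-negatives, sensitivity is revised downward:
print(begg_greenes(90, 10, 10, 90, 0, 100))
```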

  10. Accuracy of ECG interpretation in competitive athletes: the impact of using standardised ECG criteria.

    PubMed

    Drezner, Jonathan A; Asif, Irfan M; Owens, David S; Prutkin, Jordan M; Salerno, Jack C; Fean, Robyn; Rao, Ashwin L; Stout, Karen; Harmon, Kimberly G

    2012-04-01

    Interpretation of ECGs in athletes is complicated by physiological changes related to training. The purpose of this study was to determine the accuracy of ECG interpretation in athletes among different physician specialties, with and without use of a standardised ECG criteria tool. Physicians were asked to interpret 40 ECGs (28 normal ECGs from college athletes randomised with 12 abnormal ECGs from individuals with known cardiovascular pathology) and classify each ECG as (1) 'normal or variant--no further evaluation and testing needed' or (2) 'abnormal--further evaluation and testing needed.' After reading the ECGs, participants received a two-page ECG criteria tool to guide interpretation of the ECGs again. A total of 60 physicians participated: 22 primary care (PC) residents, 16 PC attending physicians, 12 sports medicine (SM) physicians and 10 cardiologists. At baseline, the total number of ECGs correctly interpreted was PC residents 73%, PC attendings 73%, SM physicians 78% and cardiologists 85%. With use of the ECG criteria tool, all physician groups significantly improved their accuracy (p<0.0001): PC residents 92%, PC attendings 90%, SM physicians 91% and cardiologists 96%. With use of the ECG criteria tool, specificity improved from 70% to 91%, sensitivity improved from 89% to 94% and there was no difference comparing cardiologists versus all other physicians (p=0.053). Providing standardised criteria to assist ECG interpretation in athletes significantly improves the ability to accurately distinguish normal from abnormal findings across physician specialties, even in physicians with little or no experience.

  11. Feasibility and Accuracy of Digitizing Edentulous Maxillectomy Defects: A Comparative Study.

    PubMed

    Elbashti, Mahmoud E; Hattori, Mariko; Patzelt, Sebastian Bm; Schulze, Dirk; Sumita, Yuka I; Taniguchi, Hisashi

    The aim of this study was to evaluate the feasibility and accuracy of using an intraoral scanner to digitize edentulous maxillectomy defects. A total of 20 maxillectomy models with two defect types were digitized using cone beam computed tomography. Conventional and digital impressions were made using silicone impression material and a laboratory optical scanner as well as a chairside intraoral scanner. The 3D datasets were analyzed using 3D evaluation software. Two-way analysis of variance revealed no interaction between defect types and impression methods, and the accuracy of the impression methods was significantly different (P = .0374). Digitizing edentulous maxillectomy defect models using a chairside intraoral scanner appears to be feasible and accurate.

  12. Assessment of Delivery Accuracy in an Operational-Like Environment

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Wynnyk, Mitch

    2016-01-01

    In order to enable arrival management concepts and solutions in a Next Generation Air Transportation System (NextGen) environment, ground-based sequencing and scheduling functions were developed to support metering operations in the National Airspace System. These sequencing and scheduling tools are designed to assist air traffic controllers in developing an overall arrival strategy, from en route down to the terminal area boundary. NASA developed a ground system concept and prototype capability called Terminal Sequencing and Spacing (TSAS) to extend metering operations into the terminal area to the runway. To demonstrate the use of these scheduling and spacing tools in an operational-like environment, the FAA, NASA, and MITRE conducted an Operational Integration Assessment (OIA) of a prototype TSAS system at the FAA's William J. Hughes Technical Center (WJHTC). This paper presents an analysis of the arrival management strategies utilized and the delivery accuracy achieved during the OIA. The analysis demonstrates how en route preconditioning, in various forms, and schedule disruptions impact delivery accuracy. As the simulation spanned both en route and terminal airspace, the use of Ground Interval Management - Spacing (GIM-S) en route speed advisories was investigated. Delivery accuracy was measured as the difference between the Scheduled Time of Arrival (STA) and the Actual Time of Arrival (ATA). The delivery accuracy was computed across all runs conducted during the OIA, which included deviations from nominal operations that commonly occur in real operations, such as schedule changes and missed approaches. Overall, 83% of all flights were delivered into the terminal airspace within +/- 30 seconds of their STA and 94% of flights were delivered within +/- 60 seconds. The meter fix delivery accuracy standard deviation was found to be between 36 and 55 seconds across all arrival procedures. The data also showed that when schedule disruptions were excluded, the
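
    Delivery accuracy as defined here (ATA minus STA per flight, with the fraction of flights inside ±30 s and ±60 s) is straightforward to compute. A sketch with made-up arrival times:

```python
def delivery_stats(sta, ata, bounds=(30, 60)):
    """Per-flight delivery error (seconds) and the fraction of flights
    within each +/- bound; sta/ata are parallel lists of times."""
    errors = [a - s for s, a in zip(sta, ata)]
    n = len(errors)
    mean = sum(errors) / n
    sd = (sum((e - mean) ** 2 for e in errors) / n) ** 0.5
    within = {b: sum(abs(e) <= b for e in errors) / n for b in bounds}
    return mean, sd, within

# Four hypothetical flights, all scheduled at t=0 s:
mean, sd, within = delivery_stats([0, 0, 0, 0], [10, -20, 45, 70])
print(f"mean error={mean:.1f}s, within 30s: {within[30]:.0%}, "
      f"within 60s: {within[60]:.0%}")
```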

  13. Seismicity map tools for earthquake studies

    NASA Astrophysics Data System (ADS)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools, built on Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript, MySQL) and the new tools using a database of earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data on Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw line segments on the map, multiple straight lines horizontally and vertically, and multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. It also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shift between segments in space. The platform offers many filters, such as for plotting selected magnitude ranges or time periods. The plotting facility supports statistical plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of the 'b' value, etc. What is novel about the platform is the additional deterministic tools: using the newly developed horizontal and vertical line and circle tools, we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
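
    The 'b' value mentioned is the Gutenberg-Richter b-value. A common estimator is Aki's maximum-likelihood formula, sketched below; this ignores the magnitude-binning correction often applied in practice, and the catalogue is invented:

```python
import math

def b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood b-value for events >= m_min:
    b = log10(e) / (mean(M) - m_min)."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# A tiny hypothetical catalogue with completeness magnitude 3.0:
print(b_value([3.0, 3.2, 3.4, 3.6, 3.8, 4.0, 3.1, 3.3], 3.0))
```

    Values near b = 1 are typical for tectonic seismicity, which is why b-value plots are a standard statistical output of catalogue tools like the one described.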

  14. Method for machining steel with diamond tools

    DOEpatents

    Casstevens, J.M.

    1984-01-01

    The present invention is directed to a method for machining optical quality finishes and contour accuracies of workpieces of carbon-containing metals such as steel with diamond tooling. The wear rate of the diamond tooling is significantly reduced by saturating the atmosphere at the interface of the workpiece and the diamond tool with a gaseous hydrocarbon during the machining operation. The presence of the gaseous hydrocarbon effectively eliminates the deterioration of the diamond tool by inhibiting or preventing the conversion of the diamond carbon to graphite carbon at the point of contact between the cutting tool and the workpiece.

  15. Method for machining steel with diamond tools

    DOEpatents

    Casstevens, John M.

    1986-01-01

    The present invention is directed to a method for machining optical quality finishes and contour accuracies of workpieces of carbon-containing metals such as steel with diamond tooling. The wear rate of the diamond tooling is significantly reduced by saturating the atmosphere at the interface of the workpiece and the diamond tool with a gaseous hydrocarbon during the machining operation. The presence of the gaseous hydrocarbon effectively eliminates the deterioration of the diamond tool by inhibiting or preventing the conversion of the diamond carbon to graphite carbon at the point of contact between the cutting tool and the workpiece.

  16. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determining orbit plane orientation and orbit shape to acceptable levels is less challenging than determining orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they should be viewed as useful preliminary design tools rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
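
    The drift mechanism discussed has a well-known two-body form: a semi-major axis difference Δa perturbs the mean motion by Δn = -(3/2)(n/a)Δa, which integrates to an along-track drift of -3πΔa per orbit. A sketch (two-body motion only, consistent with the paper's stated scope; the orbit chosen is an arbitrary example):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def along_track_drift(a, delta_a):
    """Secular along-track drift (metres per orbit) caused by a
    semi-major axis difference delta_a, from two-body mean motion:
    delta_n = -(3/2)*(n/a)*delta_a, drift/orbit = delta_n * T * a,
    which simplifies algebraically to -3*pi*delta_a."""
    n = math.sqrt(MU / a ** 3)           # mean motion, rad/s
    period = 2 * math.pi / n             # orbital period, s
    delta_n = -1.5 * (n / a) * delta_a   # mean-motion difference, rad/s
    return delta_n * period * a          # metres per orbit

# A 10 m semi-major axis error in a 7000 km orbit drifts ~94 m/orbit:
print(along_track_drift(7000e3, 10.0))
```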

  17. Evaluating the decision accuracy and speed of clinical data visualizations.

    PubMed

    Pieczkiewicz, David S; Finkelstein, Stanley M

    2010-01-01

    Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available free and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.

  18. Noise pollution mapping approach and accuracy on landscape scales.

    PubMed

    Iglesias Merchan, Carlos; Diaz-Balteiro, Luis

    2013-04-01

    Noise mapping allows the characterization of environmental variables such as noise pollution or soundscape, depending on the task. Strategic noise mapping (per Directive 2002/49/EC) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union, and they could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, an alternative approach to soundscape characterization may be preferable to human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome, thereby optimizing the mapping time and cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of noise mapping of road traffic noise at a landscape scale with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The calculation times and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
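
    The Kappa index used to compare map models is Cohen's kappa, which corrects raw agreement between two classifications for agreement expected by chance. A minimal sketch on label sequences (the labels are illustrative, not map data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences (lists):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Perfect agreement gives kappa = 1; chance-level agreement gives ~0:
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))
print(cohens_kappa([1, 0, 1, 0], [1, 1, 0, 0]))
```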

  19. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  20. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately

  1. Diagnostic accuracy of translucency rendering to differentiate polyps from pseudopolyps at 3D endoluminal CT colonography: a feasibility study.

    PubMed

    Guerrisi, A; Marin, D; Laghi, A; Di Martino, M; Iafrate, F; Iannaccone, R; Catalano, C; Passariello, R

    2010-08-01

    The aim of this study was to assess the accuracy of translucency rendering (TR) in computed tomographic (CT) colonography without cathartic preparation using primary 3D reading. From 350 patients with 482 endoscopically verified polyps, 50 pathologically proven polyps and 50 pseudopolyps were retrospectively examined. For faecal tagging, all patients ingested 140 ml of orally administered iodinated contrast agent (diatrizoate meglumine and diatrizoate sodium) at meals 48 h prior to the CT colonography examination and 2 h prior to scanning. CT colonography was performed using a 64-section CT scanner. Colonoscopy with segmental unblinding was performed within 2 weeks after CT. Three independent radiologists retrospectively evaluated TR CT colonographic images using a dedicated software package (V3D-Colon System). To enable size-dependent statistical analysis, lesions were stratified into the following size categories: small (< or =5 mm), intermediate (6-9 mm), and large (> or =10 mm). Overall average TR sensitivity for polyp characterisation was 96.6%, and overall average specificity for pseudopolyp characterisation was 91.3%. Overall average diagnostic accuracy (area under the curve) of TR for characterising colonic lesions was 0.97. TR is an accurate tool that facilitates interpretation of images obtained with a primary 3D analysis, thus enabling easy differentiation of polyps from pseudopolyps.

  2. Screening for Dyslexia in French-Speaking University Students: An Evaluation of the Detection Accuracy of the Alouette Test.

    PubMed

    Cavalli, Eddy; Colé, Pascale; Leloup, Gilles; Poracchia-George, Florence; Sprenger-Charolles, Liliane; El Ahmadi, Abdessadek

    Developmental dyslexia is a lifelong impairment affecting 5% to 10% of the population. In French-speaking countries, although a number of standardized tests for dyslexia in children are available, tools suitable to screen for dyslexia in adults are lacking. In this study, we administered the Alouette reading test to a normative sample of 164 French university students without dyslexia and a validation sample of 83 students with dyslexia. The Alouette reading test is designed to screen for dyslexia in children, since it taps skills that are typically deficient in dyslexia (i.e., phonological skills). However, the test's psychometric properties have not previously been available, and it is not standardized for adults. The results showed that, on the Alouette test, dyslexic readers were impaired on measures of accuracy, speed, and efficiency (accuracy/reading time). We also found significant correlations between the Alouette reading efficiency and phonological efficiency scores. Finally, on the Alouette test, speed-accuracy trade-offs were found in both groups, and optimal cutoff scores were determined with receiver operating characteristic (ROC) curve analysis, yielding excellent discriminatory power, with 83.1% sensitivity and 100% specificity for reading efficiency. Thus, this study supports the Alouette test as a sensitive and specific screening tool for adults with dyslexia.
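
A common way to derive an optimal cutoff from ROC-style analysis is to maximise Youden's J statistic (sensitivity + specificity - 1). This is a hedged sketch of that idea; the scores below are made-up toy numbers, not data from the Alouette study, and lower scores are assumed to indicate impairment.

```python
def youden_cutoff(scores_pos, scores_neg):
    """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1.
    scores_pos: scores of affected readers; scores_neg: controls.
    A case is called positive when its score is <= the cutoff."""
    best_c, best_j = None, -1.0
    for c in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s <= c for s in scores_pos) / len(scores_pos)
        spec = sum(s > c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# toy reading-efficiency scores (hypothetical numbers)
dyslexic = [12, 15, 18, 20, 22, 25]
control  = [24, 28, 30, 33, 35, 40]
cutoff, j = youden_cutoff(dyslexic, control)
print(cutoff, round(j, 2))
```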

  3. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  4. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1982-01-01

    The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm.

  5. Hand-Held Electronic Gap-Measuring Tools

    NASA Technical Reports Server (NTRS)

    Sugg, F. E.; Thompson, F. W.; Aragon, L. A.; Harrington, D. B.

    1985-01-01

    Repetitive measurements simplified by tool based on LVDT operation. With fingers in open position, Gap-measuring tool rests on digital readout instrument. With fingers inserted in gap, separation alters inductance of linear variable-differential transformer in plastic handle. Originally developed for measuring gaps between surface tiles of Space Shuttle orbiter, tool reduces measurement time from 20 minutes per tile to 2 minutes. Also reduces possibility of damage to tiles during measurement. Tool has potential applications in mass production; helps ensure proper gap dimensions in assembly of refrigerator and car doors and also used to measure dimensions of components and to verify positional accuracy of components during progressive assembly operations.

  6. Cardiovascular risk prediction tools for populations in Asia.

    PubMed

    Barzi, F; Patel, A; Gu, D; Sritara, P; Lam, T H; Rodgers, A; Woodward, M

    2007-02-01

    Cardiovascular risk equations are traditionally derived from the Framingham Study. The accuracy of this approach in Asian populations, where resources for risk factor measurement may be limited, is unclear. The objective was to compare the accuracy of cardiovascular risk prediction between "low-information" equations (derived using only age, systolic blood pressure, total cholesterol and smoking status) based on the Framingham Study and those derived from Asian cohorts. Separate equations to predict the 8-year risk of a cardiovascular event were derived from the Asian and Framingham cohorts. The performance of these equations, and of a subsequently "recalibrated" Framingham equation, was evaluated among participants from independent Chinese cohorts. Six cohort studies from Japan, Korea and Singapore (Asian cohorts); six cohort studies from China; the Framingham Study from the US. 172,077 participants from the Asian cohorts; 25,682 participants from the Chinese cohorts and 6053 participants from the Framingham Study. In the Chinese cohorts, 542 cardiovascular events occurred during 8 years of follow-up. Both the Asian cohorts and the Framingham equations discriminated cardiovascular risk well in the Chinese cohorts; the area under the receiver operating characteristic curve was at least 0.75 for men and women. However, the Framingham risk equation systematically overestimated risk in the Chinese cohorts by an average of 276% among men and 102% among women. The corresponding average overestimation using the Asian cohorts equation was 11% and 10%, respectively. Recalibrating the Framingham risk equation using cardiovascular disease incidence from the non-Chinese Asian cohorts led to an overestimation of risk by an average of 4% in women and underestimation of risk by an average of 2% in men. A low-information Framingham cardiovascular risk prediction tool, when recalibrated with contemporary data, is likely to estimate future cardiovascular risk with similar accuracy in Asian
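
Over/underestimation percentages of this kind are typically computed by comparing expected events (the sum of the model's predicted risks) against observed events. A generic sketch with hypothetical numbers follows; the cohort sizes and risks here are invented for illustration and are not the paper's data.

```python
def percent_over_estimation(predicted_risks, observed_events):
    """Average over-estimation of a risk equation in external validation:
    100 * (sum of predicted risks / observed events - 1).
    Positive = over-estimation, negative = under-estimation."""
    expected = sum(predicted_risks)
    return 100.0 * (expected / observed_events - 1.0)

# hypothetical cohort: 1000 people with 8-year predicted risks
preds = [0.05] * 600 + [0.10] * 300 + [0.20] * 100   # 80 expected events
print(round(percent_over_estimation(preds, 60), 1))  # 60 events observed
```

Recalibration then amounts to rescaling the equation's baseline risk so that expected events match the incidence observed in a contemporary local cohort.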

  7. Matters of accuracy and conventionality: prior accuracy guides children's evaluations of others' actions.

    PubMed

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-03-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.

  8. The Accuracy of Preoperative Rigid Stroboscopy in the Evaluation of Voice Disorders in Children.

    PubMed

    Mansour, Jobran; Amir, Ofer; Sagiv, Doron; Alon, Eran E; Wolf, Michael; Primov-Fever, Adi

    2017-07-01

    Stroboscopy is considered the most appropriate tool for evaluating the function of the vocal folds but may harbor significant limitations in children. Still, direct laryngoscopy (DL), under general anesthesia, is regarded as the "gold standard" for establishing a diagnosis of vocal fold pathology. The aim of this study was to examine the accuracy of preoperative rigid stroboscopy in children with voice disorders. This retrospective study was conducted on a cohort of 39 children with dysphonia, aged 4 to 18 years, who underwent DL. Twenty-six children underwent rigid stroboscopy (RS) prior to surgery and 13 children underwent fiber-optic laryngoscopy. The preoperative diagnoses were matched with intraoperative (DL) findings. DL was found to contradict preoperative evaluations in 20 out of 39 children (51%) and in 26 out of 53 of the findings (49%). Overdiagnosis of cysts and underdiagnosis of sulci were noted in RS compared to DL. The overall rate of accuracy for RS was 64%. The accuracy of rigid stroboscopy in the evaluation of children with voice disorders was found to be similar to previous reports in adults.

  9. Dementia Screening Accuracy is Robust to Premorbid IQ Variation: Evidence from the Addenbrooke's Cognitive Examination-III and the Test of Premorbid Function.

    PubMed

    Stott, Joshua; Scior, Katrina; Mandy, William; Charlesworth, Georgina

    2017-01-01

    Scores on cognitive screening tools for dementia are associated with premorbid IQ. It has been suggested that screening scores should be adjusted accordingly. However, no study has examined whether premorbid IQ variation affects screening accuracy. To investigate whether the screening accuracy of a widely used cognitive screening tool for dementia, the Addenbrooke's cognitive examination-III (ACE-III), is improved by adjusting for premorbid IQ. 171 UK based adults (96 memory service attendees diagnosed with dementia and 75 healthy volunteers over the age of 65 without subjective memory impairments) completed the ACE-III and the Test of Premorbid Function (TOPF). The difference in screening performance between the ACE-III alone and the ACE-III adjusted for TOPF was assessed against a reference standard: the presence or absence of a diagnosis of dementia (Alzheimer's disease, vascular dementia, or others). Logistic regression and receiver operating characteristic (ROC) curve analyses indicated that the ACE-III has excellent screening accuracy (93% sensitivity, 94% specificity) in distinguishing those with and without a dementia diagnosis. Although ACE-III scores were associated with TOPF scores, TOPF scores may be affected by having dementia, and screening accuracy was not improved by accounting for premorbid IQ, age, or years of education. ACE-III screening accuracy is high and screening performance is robust to variation in premorbid IQ, age, and years of education. Adjustment of ACE-III cut-offs for premorbid IQ is not recommended in clinical practice. The analytic strategy used here may be useful to assess the impact of premorbid IQ on other screening tools.

  10. Diagnostic Tools for Acute Anterior Cruciate Ligament Injury: GNRB, Lachman Test, and Telos.

    PubMed

    Ryu, Seung Min; Na, Ho Dong; Shon, Oog Jin

    2018-06-01

    The purpose of this study is to compare the accuracy of the GNRB arthrometer (Genourob), Lachman test, and Telos device (GmbH) in acute anterior cruciate ligament (ACL) injuries and to evaluate the accuracy of each diagnostic tool according to the length of time from injury to examination. From September 2015 to September 2016, 40 cases of complete ACL rupture were reviewed. We divided the time from injury to examination into three periods of 10 days each and analyzed the diagnostic tools according to the time frame. An analysis of the area under the curve (AUC) of a receiver operating characteristic curve showed that all diagnostic tools were fairly informative. The GNRB showed a higher AUC than other diagnostic tools. In 10 cases assessed within 10 days after injury, the GNRB showed statistically significant side-to-side difference in laxity (p<0.001), whereas the Telos test and Lachman test did not show significantly different laxity (p=0.541 and p=0.413, respectively). All diagnostic values of the GNRB were better than other diagnostic tools in acute ACL injuries. The GNRB was more effective in acute ACL injuries examined within 10 days of injury. The GNRB arthrometer can be a useful diagnostic tool for acute ACL injuries.
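
The AUC comparisons in studies like this can be computed without fitting a curve at all, via the rank (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen injured knee scores higher than a randomly chosen healthy one. A minimal sketch follows; the laxity values are hypothetical, not the study's data.

```python
def auc(pos, neg):
    """AUC as the probability that a random positive case scores higher
    than a random negative case (Mann-Whitney U / (n_pos * n_neg)),
    counting ties as 1/2."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# hypothetical side-to-side laxity differences (mm)
injured = [2.5, 3.1, 3.8, 4.0, 5.2]
healthy = [0.4, 1.0, 1.2, 2.6, 2.8]
print(round(auc(injured, healthy), 2))
```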

  11. Diagnostic accuracy of physical examination for anterior knee instability: a systematic review.

    PubMed

    Leblanc, Marie-Claude; Kowalczuk, Marcin; Andruszkiewicz, Nicole; Simunovic, Nicole; Farrokhyar, Forough; Turnbull, Travis Lee; Debski, Richard E; Ayeni, Olufemi R

    2015-10-01

    Determining the diagnostic accuracy of the Lachman, pivot shift and anterior drawer tests versus gold standard diagnosis (magnetic resonance imaging or arthroscopy) for anterior cruciate ligament (ACL) insufficiency cases. Secondarily, evaluating the effects of chronicity, partial rupture, and awake versus anaesthetized evaluation. Searching MEDLINE, EMBASE and PubMed identified studies on diagnostic accuracy for ACL insufficiency. Study identification and data extraction were performed in duplicate. Quality assessment used the QUADAS tool, and statistical analyses were completed for pooled sensitivity and specificity. Eight studies were included. Given insufficient data, pooled analysis was only possible for sensitivity of the Lachman and pivot shift tests. During awake evaluation, sensitivity for the Lachman test was 89 % (95 % CI 0.76, 0.98) for all rupture types, 96 % (95 % CI 0.90, 1.00) for complete ruptures and 68 % (95 % CI 0.25, 0.98) for partial ruptures. For the pivot shift in awake evaluation, results were 79 % (95 % CI 0.63, 0.91) for all rupture types, 86 % (95 % CI 0.68, 0.99) for complete ruptures and 67 % (95 % CI 0.47, 0.83) for partial ruptures. The decreased sensitivity of the Lachman and pivot shift tests for partial rupture cases and for awake patients raises suspicions regarding the accuracy of these tests for diagnosis of ACL insufficiency. This may lead to further research aiming to improve the understanding of the true accuracy of these physical diagnostic tests and increase the reliability of clinical investigation for this pathology. Level of evidence: IV.
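
Sensitivities like those above are reported with 95% confidence intervals. One common way to compute a CI for a single study's sensitivity is the Wilson score interval, sketched here with hypothetical counts (the review's meta-analytic pooling is more involved than this single-study calculation).

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion, a common
    choice for sensitivity/specificity in diagnostic accuracy work."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# e.g. a test detecting 49 of 55 complete ruptures (hypothetical counts)
lo, hi = wilson_ci(49, 55)
print(round(49 / 55, 2), round(lo, 2), round(hi, 2))
```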

  12. An Observational Study to Evaluate the Usability and Intent to Adopt an Artificial Intelligence-Powered Medication Reconciliation Tool.

    PubMed

    Long, Ju; Yuan, Michael Juntao; Poonawala, Robina

    2016-05-16

    Medication reconciliation (the process of creating an accurate list of all medications a patient is taking) is a widely practiced procedure to reduce medication errors. It is mandated by the Joint Commission and reimbursed by Medicare. Yet, in practice, medication reconciliation is often not effective owing to knowledge gaps in the team. A promising approach to improve medication reconciliation is to incorporate artificial intelligence (AI) decision support tools into the process to engage patients and bridge the knowledge gap. The aim of this study was to improve the accuracy and efficiency of medication reconciliation by engaging the patient, the nurse, and the physician as a team via an iPad tool. With assistance from the AI agent, the patient will review his or her own medication list from the electronic medical record (EMR) and annotate changes, before reviewing together with the physician and making decisions on the shared iPad screen. In this study, we developed iPad-based software tools, with AI decision support, to engage patients in "self-service" medication reconciliation and then share the annotated reconciled list with the physician. To evaluate the software tool's user interface and workflow, a small number of patients (10) in a primary care clinic were recruited, and they were observed through the whole process during a pilot study. The patients were surveyed about the tool's usability afterward. All patients were able to complete the medication reconciliation process correctly. Every patient found at least one error or other issue with their EMR medication lists. All of them reported that the tool was easy to use, and 8 of 10 patients reported that they would use the tool in the future. However, few patients interacted with the learning modules in the tool. The physician and nurses reported the tool to be easy to use, easy to integrate into the existing workflow, and potentially time-saving. We have developed a promising tool for a new approach to

  13. Study on the position accuracy of a mechanical alignment system

    NASA Astrophysics Data System (ADS)

    Cai, Yimin

    In this thesis, we investigated the precision level and established the baseline achieved by a mechanical alignment system using datums and reference surfaces. The factors which affect the accuracy of a mechanical alignment system were studied, and methodology was developed to suppress these factors so as to reach its full potential precision. In order to characterize the mechanical alignment system quantitatively, a new optical position monitoring system using quadrant detectors was developed in this thesis; it can monitor multiple degrees of freedom of mechanical workpieces in real time with high precision. We studied the noise factors inside the system and optimized the optical system. Based on the fact that one of the major limiting noise factors is the shifting of the laser beam, a noise cancellation technique was developed successfully to suppress this noise; the feasibility of an ultra-high resolution (<20 Å) for displacement monitoring has been demonstrated. Using the optical position monitoring system, a repeatability experiment on the mechanical alignment system was conducted on different kinds of samples, including steel, aluminum, glass and plastics, all with the same size of 100 mm x 130 mm. The alignment accuracy was studied quantitatively, rather than only qualitatively as before. In a controlled environment, the alignment precision can be improved fivefold by securing the datum without other means of help. The alignment accuracy of an aluminum workpiece with a reference surface made by milling is about 3 times better than by shearing. Also, we found that the sample material can have a fairly significant effect on the alignment precision of the system. Contamination trapped between the datum and reference surfaces in a mechanical alignment system can cause errors of registration or reduce the level of manufacturing precision. In the thesis, artificial and natural dust particles were used to simulate real situations and their effects on system precision have been

  14. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1983-01-01

    The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605

  15. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error which affects the precise identification of objects based on their image in the form of a point cloud. A point's coordinates are determined indirectly by measuring the angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measuring point are determined with some uncertainty, and the accuracy of determining these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different settings of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measures of the balls' geometrical elements and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. The results indicate the changes in the geometry of scanned objects

  16. A computer-aided ECG diagnostic tool.

    PubMed

    Oweis, Rami; Hijazi, Lily

    2006-03-01

    Jordan lacks companies that provide local medical facilities with products that are of help in daily performed medical procedures. Because of this, the country imports most of these expensive products. Consequently, a local interest in producing such products has emerged and resulted in serious research efforts in this area. The main goal of this paper is to provide local (northern Jordan) clinics with a computer-aided electrocardiogram (ECG) diagnostic tool in an attempt to reduce time and work demands for busy physicians, especially in areas where only one general medicine doctor is employed and a bulk of cases are to be diagnosed. The tool was designed to help in detecting heart defects such as arrhythmias and heart blocks using ECG signal analysis, depending on the time-domain representation, the frequency-domain spectrum, and the relationship between them. The application studied here represents a state-of-the-art ECG diagnostic tool that was designed, implemented, and tested in Jordan to serve a wide spectrum of the population, including patients from poor families. The results of applying the tool on a randomly selected representative sample showed about 99% matching with the results obtained at specialized medical facilities. Costs, ease of interface, and accuracy indicated the usefulness of the tool and its use as an assisting diagnostic tool.

  17. Fuzzy regression modeling for tool performance prediction and degradation detection.

    PubMed

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study - namely to predict the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior as compared with conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.

  18. The methodological quality of diagnostic test accuracy studies for musculoskeletal conditions can be improved.

    PubMed

    Henschke, Nicholas; Keuerleber, Julia; Ferreira, Manuela; Maher, Christopher G; Verhagen, Arianne P

    2014-04-01

    To provide an overview of reporting and methodological quality in diagnostic test accuracy (DTA) studies in the musculoskeletal field and evaluate the use of the QUality Assessment of Diagnostic Accuracy Studies (QUADAS) checklist. A literature review identified all systematic reviews that evaluated the accuracy of clinical tests to diagnose musculoskeletal conditions and used the QUADAS checklist. Two authors screened all identified reviews and extracted data on the target condition, index tests, reference standard, included studies, and QUADAS items. A descriptive analysis of the QUADAS checklist was performed, along with Rasch analysis to examine the construct validity and internal reliability. A total of 19 systematic reviews were included, which provided data on individual items of the QUADAS checklist for 392 DTA studies. In the musculoskeletal field, uninterpretable or intermediate test results are commonly not reported, with 175 (45%) studies scoring "no" to this item. The proportion of studies fulfilling certain items varied from 22% (item 11) to 91% (item 3). The interrater reliability of the QUADAS checklist was good and Rasch analysis showed excellent construct validity and internal consistency. This overview identified areas where the reporting and performance of diagnostic studies within the musculoskeletal field can be improved.

  19. Accuracy of routine magnetic resonance imaging in meniscal and ligamentous injuries of the knee: comparison with arthroscopy

    PubMed Central

    Behairy, Noha H.; Dorgham, Mohsen A.

    2008-01-01

    The aim of this study was to determine the accuracy of routine magnetic resonance imaging (MRI) performed in different centres and its agreement with arthroscopy in meniscal and ligamentous injuries of the knee. We prospectively examined 70 patients ranging in age between 22 and 59 years. History taking, plain X-ray, clinical examination, routine MRI and arthroscopy were done for all patients. Sensitivity, specificity, accuracy, positive and negative predictive values, P value and kappa agreement measures were calculated. We found a sensitivity of 47 and 100%, specificity of 95 and 75% and accuracy of 73 and 78.5%, respectively, for the medial and lateral meniscus. A sensitivity of 77.8%, specificity of 100% and accuracy of 94% was noted for the anterior cruciate ligament (ACL). We found good kappa agreement (0.43 and 0.45) for both menisci and excellent agreement (0.84) for the ACL. MRI shows high accuracy and should be used as the primary diagnostic tool for selection of candidates for arthroscopy. Level of evidence: 4. PMID:18506445
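
All of the reported measures (sensitivity, specificity, accuracy, predictive values, kappa) follow from a 2x2 table of index test (MRI) versus reference standard (arthroscopy) findings. The sketch below uses generic formulas with hypothetical counts, not the study's actual table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, PPV, NPV and Cohen's kappa
    from a 2x2 table of index test vs reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    # chance agreement for kappa, from the marginal totals
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((tn + fn) / n) * ((tn + fp) / n)
    pe = p_yes + p_no
    kappa = (acc - pe) / (1 - pe)
    return dict(sens=sens, spec=spec, acc=acc, ppv=ppv, npv=npv, kappa=kappa)

# hypothetical 2x2 counts for one structure in a 70-knee series
m = diagnostic_metrics(tp=14, fp=2, fn=4, tn=50)
print({k: round(v, 2) for k, v in m.items()})
```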

  20. Evaluating online diagnostic decision support tools for the clinical setting.

    PubMed

    Pryor, Marie; White, David; Potter, Bronwyn; Traill, Roger

    2012-01-01

    Clinical decision support tools available at the point of care are an effective adjunct to support clinicians to make clinical decisions and improve patient outcomes. We developed a methodology and applied it to evaluate commercially available online clinical diagnostic decision support (DDS) tools for use at the point of care. We identified 11 commercially available DDS tools and assessed these against an evaluation instrument that included 6 categories: general information, content, quality control, search, clinical results and other features. We developed diagnostically challenging clinical case scenarios based on real patient experience that were commonly missed by junior medical staff. The evaluation was divided into 2 phases: an initial evaluation of all identified and accessible DDS tools conducted by the Clinical Information Access Portal (CIAP) team, and a second phase that further assessed the top 3 tools identified in the initial evaluation phase. An evaluation panel consisting of senior and junior medical clinicians from NSW Health conducted the second phase. Of the eleven tools that were assessed against the evaluation instrument, only 4 tools completely met the DDS definition that was adopted for this evaluation and were able to produce a differential diagnosis. From the initial phase of the evaluation, 4 DDS tools scored 70% or more (maximum score 96%) for the content category, 8 tools scored 65% or more (maximum 100%) for the quality control category, 5 tools scored 65% or more (maximum 94%) for the search category, and 4 tools scored 70% or more (maximum 81%) for the clinical results category. The second phase of the evaluation was focused on assessing diagnostic accuracy for the top 3 tools identified in the initial phase. Best Practice ranked highest overall against the 6 clinical case scenarios used. Overall the differentiating factor between the top 3 DDS tools was determined by diagnostic accuracy ranking, ease of use and the confidence and

  1. Research of a smart cutting tool based on MEMS strain gauge

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zhao, Y. L.; Shao, YW; Hu, T. J.; Zhang, Q.; Ge, X. H.

    2018-03-01

    Cutting force is an important factor that affects machining accuracy, cutting vibration and tool wear. Machining condition monitoring via cutting force measurement is a key technology for intelligent manufacturing. Existing cutting force sensors suffer from large volume, complex structure and poor compatibility in practical applications; to address these problems, a smart cutting tool for cutting force measurement is proposed in this paper. Commercial MEMS (Micro-Electro-Mechanical System) strain gauges with high sensitivity and small size are adopted as the transducing element of the smart tool, and a structurally optimized cutting tool is fabricated for MEMS strain gauge bonding. Static calibration results show that the developed smart cutting tool is able to measure cutting forces in both the X and Y directions, with a cross-interference error within 3%. Its overall accuracy is 3.35% and 3.27% in the X and Y directions, respectively, and its sensitivity is 0.1 mV/N, which is well suited to measuring small cutting forces in high-speed precision machining. The smart cutting tool is portable and reliable for practical application in CNC machine tools.
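
    Force recovery from such a two-channel gauge can be sketched as a small calibration-matrix inversion. The matrix values below are illustrative assumptions chosen to mirror the reported ~0.1 mV/N sensitivity and <3% cross-interference; they are not the paper's calibration data.

    ```python
    import numpy as np

    # Hypothetical 2x2 calibration matrix (mV per N). The diagonal reflects the
    # reported ~0.1 mV/N sensitivity; the small off-diagonal terms model the
    # <3% cross-interference between the X and Y channels. Illustrative only.
    C = np.array([[0.100, 0.003],
                  [0.002, 0.100]])

    def voltages_to_forces(v_mV):
        """Recover forces (N) from gauge voltages (mV) by inverting the calibration."""
        return np.linalg.solve(C, np.asarray(v_mV, dtype=float))

    # Example: voltages produced by a true load of (50 N, 20 N)
    v = C @ np.array([50.0, 20.0])
    f = voltages_to_forces(v)
    ```

    Inverting the full matrix, rather than dividing each channel by its own sensitivity, is what removes the cross-interference between axes.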

  2. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method as implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  3. Grading evidence from test accuracy studies: what makes it challenging compared with the grading of effectiveness studies?

    PubMed

    Rogozińska, Ewelina; Khan, Khalid

    2017-06-01

    Guideline panels need to process a sizeable amount of information to decide whether or not to recommend a health technology. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach is frequently applied in guideline development to facilitate this task, typically for the synthesis of effectiveness research. Questions regarding the accuracy of medical tests are ubiquitous, and they temporally precede questions about therapy. However, the literature summarising the experience of applying the GRADE approach to accuracy evaluations is not as rich as that for effectiveness evidence. The type of study design (cross-sectional), the two-dimensional nature of the performance measures (sensitivity and specificity), a propensity towards a higher level of between-study heterogeneity, poor reporting of quality features and uncertainty about how best to assess publication bias, among other features, make this task challenging. This article presents solutions adopted to address the above challenges for judicious estimation of the strength of test accuracy evidence used to inform evidence syntheses for guideline development. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. A new method for measuring the rotational accuracy of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Zhao, Xiangsong; Gao, Weiguo; Hu, Gaofeng; Zhang, Shizhen; Zhang, Dawei

    2016-12-01

    The rotational accuracy of a machine tool spindle has a critical influence on the geometric shape and surface roughness of the finished workpiece. The rotational performance of the rolling element bearings is a main factor affecting spindle accuracy, especially in ultra-precision machining. In this paper, a new method is developed to measure the rotational accuracy of rolling element bearings in machine tool spindles. A variable, measurable axial preload is applied to seat the rolling elements in the bearing races, simulating operating conditions. A high-precision (radial error less than 300 nm), high-stiffness (radial stiffness 600 N/μm) hydrostatic reference spindle is adopted to rotate the inner race of the test bearing. To prevent the outer race from rotating, a 2-degree-of-freedom flexure hinge mechanism (2-DOF FHM) is designed. Correction factors derived from stiffness analysis are adopted to eliminate the influence of the 2-DOF FHM in the radial direction. Two capacitive displacement sensors with nanometre resolution (the highest resolution is 9 nm) are used to measure the radial error motion of the rolling element bearing, without the profile-error separation required by traditional spindle rotational accuracy metrology. Finally, experimental measurements are performed at different spindle speeds (100-4000 rpm) and axial preloads (75-780 N). Synchronous and asynchronous error motion values are evaluated to demonstrate the feasibility and repeatability of the developed method and instrument.

  5. Accuracy of 24- and 48-Hour Forecasts of Haines' Index

    Treesearch

    Brian E. Potter; Jonathan E. Martin

    2001-01-01

    The University of Wisconsin-Madison produces Web-accessible, 24- and 48-hour forecasts of the Haines Index (a tool used to measure the atmospheric potential for large wildfire development) for most of North America using its nonhydrostatic modeling system. The authors examined the accuracy of these forecasts using data from 1999 and 2000. Measures used include root-...

  6. Assembling a Case Study Tool Kit: 10 Tools for Teaching with Cases

    ERIC Educational Resources Information Center

    Prud'homme-Généreux, Annie

    2017-01-01

    This column provides original articles on innovations in case study teaching, assessment of the method, as well as case studies with teaching notes. The author shares the strategies and tools that teachers can use to manage a case study classroom effectively.

  7. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that procedure and the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of the various classification procedures as applied to various data types.

  8. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored images. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis: by decomposing the original tool wear image, it reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach accurately extracts a continuous, complete tool wear edge contour, and is convenient for characterizing tool conditions. Compared with established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Diagnostic Accuracy of Natriuretic Peptides for Heart Failure in Patients with Pleural Effusion: A Systematic Review and Updated Meta-Analysis

    PubMed Central

    Cheng, Juan-Juan; Zhao, Shi-Di; Gao, Ming-Zhu; Huang, Hong-Yu; Gu, Bing; Ma, Ping; Chen, Yan; Wang, Jun-Hong; Yang, Cheng-Jian; Yan, Zi-He

    2015-01-01

    Background Previous studies have reported that natriuretic peptides in the blood and pleural fluid (PF) are effective diagnostic markers for heart failure (HF). These natriuretic peptides include N-terminal pro-brain natriuretic peptide (NT-proBNP), brain natriuretic peptide (BNP), and midregion pro-atrial natriuretic peptide (MR-proANP). This systematic review and meta-analysis evaluates the diagnostic accuracy of blood and PF natriuretic peptides for HF in patients with pleural effusion. Methods PubMed and EMBASE databases were searched to identify articles published in English that investigated the diagnostic accuracy of BNP, NT-proBNP, and MR-proANP for HF. The last search was performed on 9 October 2014. The quality of the eligible studies was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies tool. The diagnostic performance characteristics (sensitivity, specificity, and other measures of accuracy) were pooled and examined using a bivariate model. Results In total, 14 studies were included in the meta-analysis, including 12 studies reporting the diagnostic accuracy of PF NT-proBNP and 4 studies evaluating blood NT-proBNP. The summary estimates of PF NT-proBNP for HF had a diagnostic sensitivity of 0.94 (95% confidence interval [CI]: 0.90–0.96), specificity of 0.91 (95% CI: 0.86–0.95), positive likelihood ratio of 10.9 (95% CI: 6.4–18.6), negative likelihood ratio of 0.07 (95% CI: 0.04–0.12), and diagnostic odds ratio of 157 (95% CI: 57–430). The overall sensitivity of blood NT-proBNP for diagnosis of HF was 0.92 (95% CI: 0.86–0.95), with a specificity of 0.88 (95% CI: 0.77–0.94), positive likelihood ratio of 7.8 (95% CI: 3.7–16.3), negative likelihood ratio of 0.10 (95% CI: 0.06–0.16), and diagnostic odds ratio of 81 (95% CI: 27–241). The diagnostic accuracy of PF MR-proANP and blood and PF BNP was not analyzed due to the small number of related studies. Conclusions BNP, NT-proBNP, and MR-proANP, either in blood
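
    The likelihood ratios and diagnostic odds ratio quoted above are simple functions of sensitivity and specificity; a minimal sketch (note that the paper pools estimates with a bivariate model, so point arithmetic on the pooled sensitivity and specificity only approximates the reported summary values):

    ```python
    def likelihood_ratios(sens, spec):
        """Positive/negative likelihood ratios and the diagnostic odds ratio
        implied by a test's sensitivity and specificity."""
        lr_pos = sens / (1.0 - spec)        # P(test+ | disease) / P(test+ | no disease)
        lr_neg = (1.0 - sens) / spec        # P(test- | disease) / P(test- | no disease)
        return lr_pos, lr_neg, lr_pos / lr_neg

    # Pooled pleural-fluid NT-proBNP point estimates from the abstract
    lr_pos, lr_neg, dor = likelihood_ratios(0.94, 0.91)
    ```

    The resulting values (LR+ ≈ 10.4, LR− ≈ 0.07, DOR ≈ 158) sit close to the bivariate summary estimates of 10.9, 0.07 and 157 reported above.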

  10. Factors Governing Surface Form Accuracy In Diamond Machined Components

    NASA Astrophysics Data System (ADS)

    Myler, J. K.; Page, D. A.

    1988-10-01

    Manufacturing methods for diamond machined optical surfaces, for application at infrared wavelengths, require that a new set of criteria be recognised for the specification of surface form. Appropriate surface form parameters are discussed with particular reference to an XY cartesian geometry CNC machine. Methods for reducing surface form errors in diamond machining are discussed for certain areas such as tool wear, tool centring, and the fixturing of the workpiece. Examples of achievable surface form accuracy are presented. Traditionally, optical surfaces have been produced by random polishing techniques using polishing compounds and lapping tools. For lens manufacture, the simplest surface which could be created corresponds to a sphere; the sphere is the natural outcome of a random grinding and polishing process. The measurement of surface form accuracy would most commonly be performed using a contact test gauge plate, polished to a sphere of known radius of curvature. QA would simply be achieved using a diffuse monochromatic source and looking for residual deviations between the polished surface and the test plate. The specifications governing the manufacture of surfaces using these techniques would call out the accuracy to which the generated surface should match the test plate, defined by a spherical deviation from the required curvature and a non-spherical astigmatic error. Consequently, optical design software has tolerancing routines which specifically allow the designer to assess the influence of spherical error and astigmatic error on optical performance. The creation of general aspheric surfaces is not so straightforward using conventional polishing techniques, since the surface profile is non-spherical and a good approximation to a power series. For infrared applications (λ = 8-12 μm), numerically controlled single point diamond turning is an alternative manufacturing technology capable of creating aspheric profiles as well as

  11. Navigation Accuracy Guidelines for Orbital Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2003-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they should be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
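
    As an illustration of the drift guideline, a standard two-body result (an assumption here, not a formula quoted from the abstract) is that a semi-major axis difference δa produces an along-track separation of about 3π·δa per orbit:

    ```python
    import math

    def along_track_drift_per_orbit(delta_a):
        """Along-track drift (same units as delta_a) accumulated per orbit by two
        satellites whose semi-major axes differ by delta_a.
        Two-body approximation: drift ~ 3*pi*delta_a per revolution."""
        return 3.0 * math.pi * delta_a

    # e.g. a 10 m semi-major-axis navigation error
    drift = along_track_drift_per_orbit(10.0)
    ```

    A 10 m semi-major-axis error thus drifts roughly 94 m along-track each revolution, which is why semi-major axis knowledge dominates the navigation requirement.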

  12. A modular tooling set-up for incremental sheet forming (ISF) with subsequent stress-relief annealing under partial constraints

    NASA Astrophysics Data System (ADS)

    Maqbool, Fawad; Bambach, Markus

    2017-10-01

    Incremental sheet forming (ISF) is a manufacturing process most suitable for small-batch production of sheet metal parts. In ISF, a CNC-controlled tool moves over the sheet metal, following a specified contour to form a part of the desired geometry. This study focuses on one of the dominant process limitations of ISF, namely limited geometrical accuracy. A case study is performed which shows that increased geometrical accuracy of the formed part can be achieved by using stress-relief annealing before unclamping. To keep tooling costs low, a modular die design consisting of a stiff metal frame and inserts made from inexpensive plastics (Sika®) was devised. After forming, the plastic inserts are removed; the metal frame supports the part during stress-relief annealing. Finite Element (FE) simulations of the manufacturing process are performed. Due to the residual stresses induced during forming, the geometry of the formed part, in both the FE simulation and the actual manufacturing process, shows severe distortion upon unclamping. Stress-relief annealing of the formed part under the partial constraints exerted by the tool frame shows that a part with high geometrical accuracy can be obtained.

  13. RNA-SSPT: RNA Secondary Structure Prediction Tools.

    PubMed

    Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; Din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad

    2013-01-01

    The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR for determining RNA secondary structure are expensive and difficult; computational RNA secondary structure prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction from a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools either do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. The current study requires only the energetically most favorable secondary structure, and a modification of the algorithm is also available that produces the base pairs which lower the total free energy of the secondary structure. For visualization of RNA secondary structure, NAVIEW, written in C, is used and was adapted in C# to the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional Edition. The accuracy of RNA-SSPT is tested in terms of sensitivity and positive predictive value. It is a tool which serves both secondary structure prediction and secondary structure visualization purposes.
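
    The Nussinov algorithm mentioned above maximizes the number of nested complementary base pairs by dynamic programming; a minimal sketch (the Watson-Crick plus G-U wobble pair set and the minimum loop length of 3 are conventional choices, not details taken from the paper):

    ```python
    def nussinov(seq, min_loop=3):
        """Maximum number of nested complementary base pairs in an RNA
        sequence (Nussinov dynamic programming; pseudoknots are not modelled)."""
        pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):           # widen the subsequence window
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]                   # case 1: j left unpaired
                for k in range(i, j - min_loop):      # case 2: j paired with some k
                    if (seq[k], seq[j]) in pairs:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + dp[k + 1][j - 1])
                dp[i][j] = best
        return dp[0][n - 1] if n else 0
    ```

    The full tool also needs a traceback over `dp` to emit the pair list for visualization; the recurrence above is the core that the energy-based modification replaces with a free-energy minimum.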

  14. RNA-SSPT: RNA Secondary Structure Prediction Tools

    PubMed Central

    Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad

    2013-01-01

    The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR for determining RNA secondary structure are expensive and difficult; computational RNA secondary structure prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction from a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools either do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. The current study requires only the energetically most favorable secondary structure, and a modification of the algorithm is also available that produces the base pairs which lower the total free energy of the secondary structure. For visualization of RNA secondary structure, NAVIEW, written in C, is used and was adapted in C# to the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional Edition. The accuracy of RNA-SSPT is tested in terms of sensitivity and positive predictive value. It is a tool which serves both secondary structure prediction and secondary structure visualization purposes. PMID:24250115

  15. Experimental and Mathematical Modeling for Prediction of Tool Wear on the Machining of Aluminium 6061 Alloy by High Speed Steel Tools

    NASA Astrophysics Data System (ADS)

    Okokpujie, Imhade Princess; Ikumapayi, Omolayo M.; Okonkwo, Ugochukwu C.; Salawu, Enesi Y.; Afolalu, Sunday A.; Dirisu, Joseph O.; Nwoke, Obinna N.; Ajayi, Oluseyi O.

    2017-12-01

    In recent machining operations, tool life is one of the most demanding concerns in the production process, especially in the automotive industry. The aim of this paper is to study tool wear of HSS tools in end milling of aluminium 6061 alloy. Experiments were carried out to investigate tool wear as a function of the machining parameters and to develop a mathematical model using response surface methodology. The machining parameters selected for the experiment are spindle speed (N), feed rate (f), axial depth of cut (a) and radial depth of cut (r). The experiment was designed using a central composite design (CCD) in which 31 samples were run on a SIEG 3/10/0010 CNC end milling machine. After each experiment the cutting tool wear was measured using a scanning electron microscope (SEM). The optimum machining parameter combination of spindle speed 2500 rpm, feed rate 200 mm/min, axial depth of cut 20 mm, and radial depth of cut 1.0 mm was found to achieve the minimum tool wear of 0.213 mm. The mathematical model developed predicted the tool wear with 99.7% accuracy, which is within the acceptable range for tool wear prediction.
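
    Response surface methodology fits a low-order polynomial to the measured responses; the study above uses four factors, but the idea can be sketched with a single factor and ordinary least squares on synthetic data (the coefficients below are assumptions for illustration, not the paper's model):

    ```python
    import numpy as np

    # Synthetic single-factor illustration: tool wear (mm) as a quadratic
    # function of spindle speed (in krpm). Noiseless, so the fit is exact.
    speed = np.array([1.0, 1.5, 2.0, 2.5, 3.0])       # spindle speed, krpm
    wear = 0.9 - 0.5 * speed + 0.1 * speed ** 2       # assumed response surface

    # Second-order model: wear = b0 + b1*speed + b2*speed^2
    X = np.column_stack([np.ones_like(speed), speed, speed ** 2])
    coef, *_ = np.linalg.lstsq(X, wear, rcond=None)
    ```

    With four factors a CCD supplies enough distinct design points to estimate all linear, quadratic and interaction terms of the same least-squares system.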

  16. Thermal imaging as a lie detection tool at airports.

    PubMed

    Warmelink, Lara; Vrij, Aldert; Mann, Samantha; Leal, Sharon; Forrester, Dave; Fisher, Ronald P

    2011-02-01

    We tested the accuracy of thermal imaging as a lie detection tool in airport screening. Fifty-one passengers in an international airport departure hall told the truth or lied about their forthcoming trip in an interview. Their skin temperature was recorded via a thermal imaging camera. Liars' skin temperature rose significantly during the interview, whereas truth tellers' skin temperature remained constant. On the basis of these different patterns, 64% of truth tellers and 69% of liars were classified correctly. The interviewers made veracity judgements independently from the thermal recordings. The interviewers outperformed the thermal recordings and classified 72% of truth tellers and 77% of liars correctly. Accuracy rates based on the combination of thermal imaging scores and interviewers' judgements were the same as accuracy rates based on interviewers' judgements alone. Implications of the findings for the suitability of thermal imaging as a lie detection tool in airports are discussed.

  17. German translation, cross-cultural adaptation and diagnostic test accuracy of three frailty screening tools : PRISMA-7, FRAIL scale and Groningen Frailty Indicator.

    PubMed

    Braun, Tobias; Grüneberg, Christian; Thiel, Christian

    2018-04-01

    Routine screening for frailty could be used to identify, in a timely manner, older people with increased vulnerability and corresponding medical needs. The aim of this study was the translation and cross-cultural adaptation of the PRISMA-7 questionnaire, the FRAIL scale and the Groningen Frailty Indicator (GFI) into German, as well as a preliminary analysis of the diagnostic test accuracy of these instruments when used to screen for frailty. A diagnostic cross-sectional study was performed. The translation of the instruments into German followed a standardized process. Prefinal versions were clinically tested on older adults, who gave structured in-depth feedback on the scales in order to compile a final revision of the German-language scale versions. For the analysis of diagnostic test accuracy (criterion validity), PRISMA-7, the FRAIL scale and the GFI were considered the index tests. Two reference tests were applied to assess frailty, based either on Fried's model of a Physical Frailty Phenotype or on the model of deficit accumulation, expressed as a Frailty Index. Prefinal versions of the German translations of each instrument were produced and completed by 52 older participants (mean age: 73 ± 6 years). Some minor issues concerning the comprehensibility and semantics of the scales were identified and resolved. Using the Physical Frailty Phenotype criteria (frailty prevalence: 4%) as the reference standard, the accuracy of the instruments was excellent (area under the curve, AUC, >0.90). Taking the Frailty Index (frailty prevalence: 23%) as the reference standard, the accuracy was good (AUC between 0.73 and 0.88). German-language versions of PRISMA-7, the FRAIL scale and the GFI have been established, and preliminary results indicate sufficient diagnostic test accuracy, which needs to be further established.

  18. Accuracy of neuro-navigated cranial screw placement using optical surface imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.

    2017-02-01

    Cranial neurosurgical procedures are especially delicate considering that the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, as even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted the paradigm and become the current standard of care for cranial neurosurgery. Intra-operative image guided navigation systems are currently used to allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground truth landmark to the intra-operative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of each screw was compared to its co-ordinates in the post-operative CT or MR images, thus quantifying cranial neurosurgical error.

  19. Mortality Predicted Accuracy for Hepatocellular Carcinoma Patients with Hepatic Resection Using Artificial Neural Network

    PubMed Central

    Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien

    2013-01-01

    The aim of this study is firstly to compare significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models, and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables by Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that ANN had double to triple the number of significant predictors in the 1-, 3-, and 5-year survival models as compared with the LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the 1-, 3-, and 5-year survival estimation models using ANN were superior to those of LR in all the training sets and most of the validation sets. The study demonstrated that ANN not only identified a greater number of significant predictors of mortality but also provided more accurate prediction than conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707

  20. Pre-Then-Post Testing: A Tool To Improve the Accuracy of Management Training Program Evaluation.

    ERIC Educational Resources Information Center

    Mezoff, Bob

    1981-01-01

    Explains a procedure to avoid the detrimental biases of conventional self-reports of training outcomes. The evaluation format provided is a method for using statistical procedures to increase the accuracy of self-reports by overcoming response-shift-bias. (Author/MER)

  1. Consistency and Accuracy of Multiple Pain Scales Measured in Cancer Patients from Multiple Ethnic Groups

    PubMed Central

    Ham, Ok-Kyung; Kang, Youjeong; Teng, Helen; Lee, Yaelim; Im, Eun-Ok

    2014-01-01

    Background Standardized pain-intensity measurement across different tools would enable practitioners to have confidence in clinical decision-making for pain management. Objectives The purpose was to examine the degree of agreement among unidimensional pain scales, and to determine the accuracy of the multidimensional pain scales in the diagnosis of severe pain. Methods A secondary analysis was performed. The sample included a convenience sample of 480 cancer patients recruited from both the internet and community settings. Cancer pain was measured using the Verbal Descriptor Scale (VDS), the Visual Analog Scale (VAS), the Faces Pain Scale (FPS), the McGill Pain Questionnaire-Short Form (MPQ-SF) and the Brief Pain Inventory-Short Form (BPI-SF). Data were analyzed using a multivariate analysis of variance (MANOVA) and a receiver operating characteristics (ROC) curve. Results The agreement between the VDS and VAS was 77.25%, while the agreement was 71.88% and 71.60% between the VDS and FPS, and the VAS and FPS, respectively. The MPQ-SF and BPI-SF yielded high accuracy in the diagnosis of severe pain. Cutoff points for severe pain were > 8 for the MPQ-SF and > 14 for the BPI-SF, which exhibited high sensitivity and relatively low specificity. Conclusion The study found substantial agreement between the unidimensional pain scales, and high accuracy of the MPQ-SF and the BPI-SF in the diagnosis of severe pain. Implications for Practice Use of one or more pain screening tools whose diagnostic accuracy and consistency have been validated will help classify pain effectively and subsequently promote optimal pain control in multi-ethnic groups of cancer patients. PMID:25068188

  2. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  3. Diagnostic accuracy of repetition tasks for the identification of specific language impairment (SLI) in bilingual children: evidence from Russian and Hebrew.

    PubMed

    Armon-Lotem, Sharon; Meir, Natalia

    2016-11-01

    Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. © 2016 Royal College of Speech and Language

  4. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
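
    One common form of grid-scale correction factor is a mean-bias shift: subtract the average difference between simulated and historically observed values from the model output. The sketch below illustrates only this simple variant, with invented numbers; the paper itself does not specify the correction used.

```python
# Minimal mean-bias correction sketch: shift simulated season lengths so
# their historical mean matches the observations. Numbers are invented.

def mean_bias_correction(simulated, observed):
    """Return simulated values shifted by the mean (observed - simulated) bias."""
    bias = sum(o - s for o, s in zip(observed, simulated)) / len(observed)
    return [s + bias for s in simulated]

sim = [150.0, 160.0, 155.0]   # simulated season lengths (days)
obs = [145.0, 152.0, 150.0]   # historically observed season lengths (days)
print(mean_bias_correction(sim, obs))  # → [144.0, 154.0, 149.0] (bias = -6 days)
```

    Evaluating the correction's effectiveness, as the abstract recommends, would then mean comparing the corrected values against held-out observations rather than assuming the shift generalizes.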

  5. Accuracy and reliability of self-reported weight and height in the Sister Study

    PubMed Central

    Lin, Cynthia J; DeRoo, Lisa A; Jacobs, Sara R; Sandler, Dale P

    2012-01-01

    Objective To assess accuracy and reliability of self-reported weight and height and identify factors associated with reporting accuracy. Design Analysis of self-reported and measured weight and height from participants in the Sister Study (2003–2009), a nationwide cohort of 50,884 women aged 35–74 in the United States with a sister with breast cancer. Setting Weight and height were reported via computer-assisted telephone interview (CATI) and self-administered questionnaires, and measured by examiners. Subjects Early enrollees in the Sister Study. There were 18,639 women available for the accuracy analyses and 13,316 for the reliability analyses. Results Using weighted kappa statistics, comparisons were made between CATI responses and examiner measures to assess accuracy, and between CATI and questionnaire responses to assess reliability. Polytomous logistic regression evaluated factors associated with over- or under-reporting. Compared to measured values, agreement was 96% for reported height (±1 inch; weighted kappa 0.84) and 67% for weight (±3 pounds; weighted kappa 0.92). Obese women [body mass index (BMI) ≥ 30 kg/m2] were more likely than normal weight women to under-report weight by ≥5% and underweight women (BMI < 18.5 kg/m2) were more likely to over-report. Among normal and overweight women (18.5 kg/m2 ≤ BMI < 30 kg/m2), weight cycling and lifetime weight difference ≥50 pounds were associated with over-reporting. Conclusions U.S. women in the Sister Study were reasonably reliable and accurate in reporting weight and height. Women with normal-range BMI reported most accurately. Overweight and obese women and those with weight fluctuations were less accurate, but even among obese women, few under-reported their weight by >10%. PMID:22152926
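
    The weighted kappa statistic used above penalizes disagreements between two ratings in proportion to how far apart the categories are. A self-contained sketch of linear-weighted kappa follows; the agreement table in the example is invented for illustration and is not the Sister Study data.

```python
# Weighted kappa for a square agreement table; table[i][j] is the count of
# subjects placed in category i by one measurement and j by the other.
# The 3-category example table (under / accurate / over) is invented.

def weighted_kappa(table, weights="linear"):
    """Return the linear- or quadratic-weighted kappa of a k x k table."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            if weights == "linear":
                w = abs(i - j) / (k - 1)                 # disagreement weight
            else:
                w = (i - j) ** 2 / (k - 1) ** 2
            num += w * table[i][j]                       # observed disagreement
            den += w * row_tot[i] * col_tot[j] / n       # chance disagreement
    return 1 - num / den

table = [[30, 10, 0], [8, 120, 12], [0, 10, 40]]         # hypothetical counts
print(round(weighted_kappa(table), 2))                   # → 0.72
```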

  6. Data and Tools | Concentrating Solar Power | NREL

    Science.gov Websites

    Solar Power tower Integrated Layout and Optimization Tool (SolarPILOT(tm)): SolarPILOT is a code that combines the rapid layout and optimization capability of the analytical DELSOL3 program with the accuracy and

  7. Verification and classification bias interactions in diagnostic test accuracy studies for fine-needle aspiration biopsy.

    PubMed

    Schmidt, Robert L; Walker, Brandon S; Cohen, Michael B

    2015-03-01

    Reliable estimates of accuracy are important for any diagnostic test. Diagnostic accuracy studies are subject to unique sources of bias. Verification bias and classification bias are 2 sources of bias that commonly occur in diagnostic accuracy studies. Statistical methods are available to estimate the impact of these sources of bias when they occur alone. The impact of interactions when these types of bias occur together has not been investigated. We developed mathematical relationships to show the combined effect of verification bias and classification bias. A wide range of case scenarios was generated to assess the impact of bias components and interactions on total bias. Interactions between verification bias and classification bias caused overestimation of sensitivity and underestimation of specificity. Interactions had more effect on sensitivity than specificity. Sensitivity was overestimated by at least 7% in approximately 6% of the tested scenarios. Specificity was underestimated by at least 7% in less than 0.1% of the scenarios. Interactions between verification bias and classification bias create distortions in accuracy estimates that are greater than would be predicted from each source of bias acting independently. © 2014 American Cancer Society.
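
    The direction of verification bias on its own can be shown with expected counts: when test-positives are verified by the reference standard more often than test-negatives, the sensitivity computed on verified cases alone is inflated and specificity deflated. The sketch below is a generic expected-count illustration with invented parameters, not the paper's model of the bias interaction.

```python
# Partial (differential) verification sketch: verification probability
# depends on the index-test result, so the analysed subsample is biased.
# All parameter values are illustrative.

def naive_sens_spec(sens, spec, prev, p_verify_pos, p_verify_neg, n=100000):
    """Expected apparent sensitivity/specificity among verified cases only."""
    tp = n * prev * sens                     # expected counts in the full cohort
    fn = n * prev * (1 - sens)
    tn = n * (1 - prev) * spec
    fp = n * (1 - prev) * (1 - spec)
    tp_v, fp_v = tp * p_verify_pos, fp * p_verify_pos   # test-positives verified often
    fn_v, tn_v = fn * p_verify_neg, tn * p_verify_neg   # test-negatives rarely verified
    return tp_v / (tp_v + fn_v), tn_v / (tn_v + fp_v)

s, sp = naive_sens_spec(sens=0.80, spec=0.90, prev=0.20,
                        p_verify_pos=0.90, p_verify_neg=0.30)
print(f"apparent sensitivity={s:.2f}, apparent specificity={sp:.2f}")
# → apparent sensitivity=0.92, apparent specificity=0.75
```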

  8. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
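
    The paper fuses inertial and Kinect angle estimates with an extended Kalman filter over quaternions. As a greatly simplified stand-in, the sketch below fuses two noisy scalar angle measurements by inverse-variance weighting, which is the static special case of a Kalman update; the variances and angles are illustrative only.

```python
# Minimum-variance linear fusion of two independent estimates of the same
# knee-flexion angle. Variances below are invented for illustration.

def fuse(angle_imu, var_imu, angle_cam, var_cam):
    """Return (fused angle, fused variance) via inverse-variance weighting."""
    w = var_cam / (var_imu + var_cam)          # weight on the IMU estimate
    fused = w * angle_imu + (1 - w) * angle_cam
    fused_var = (var_imu * var_cam) / (var_imu + var_cam)
    return fused, fused_var

angle, var = fuse(angle_imu=42.0, var_imu=4.0, angle_cam=45.0, var_cam=8.0)
print(f"fused knee flexion: {angle:.1f} deg (variance {var:.2f})")
```

    Note how the fused variance is smaller than either input variance, which is the basic reason multi-sensor fusion can beat the IMU or the camera alone.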

  9. Diagnostic accuracy of physical examination tests of the ankle/foot complex: a systematic review.

    PubMed

    Schwieterman, Braun; Haas, Deniele; Columber, Kirby; Knupp, Darren; Cook, Chad

    2013-08-01

    Orthopedic special tests of the ankle/foot complex are routinely used during the physical examination process in order to help diagnose ankle/lower leg pathologies. The purpose of this systematic review was to investigate the diagnostic accuracy of ankle/lower leg special tests. A search of the current literature was conducted using PubMed, CINAHL, SPORTDiscus, ProQuest Nursing and Allied Health Sources, Scopus, and Cochrane Library. Studies were eligible if they included the following: 1) a diagnostic clinical test of musculoskeletal pathology in the ankle/foot complex, 2) a description of the clinical test or tests, 3) a report of the diagnostic accuracy of the clinical test (e.g. sensitivity and specificity), and 4) an acceptable reference standard for comparison. The quality of included studies was determined by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Nine diagnostic accuracy studies met the inclusion criteria for this systematic review, analyzing a total of 16 special tests of the ankle/foot complex. After assessment using the QUADAS-2, only one study had low risk of bias and low concerns regarding applicability. Most ankle/lower leg orthopedic special tests are confirmatory in nature and are best utilized at the end of the physical examination. Most of the studies included in this systematic review demonstrate notable biases, which suggests that the results and recommendations in this review should be taken as a guide rather than an outright standard. There is a need for future research with more stringent study design criteria so that the diagnostic power of ankle/lower leg special tests can be determined more accurately. Level of Evidence: 3a.

  10. DIAGNOSTIC ACCURACY OF PHYSICAL EXAMINATION TESTS OF THE ANKLE/FOOT COMPLEX: A SYSTEMATIC REVIEW

    PubMed Central

    Schwieterman, Braun; Haas, Deniele; Columber, Kirby; Knupp, Darren

    2013-01-01

    Background: Orthopedic special tests of the ankle/foot complex are routinely used during the physical examination process in order to help diagnose ankle/lower leg pathologies. Purpose: The purpose of this systematic review was to investigate the diagnostic accuracy of ankle/lower leg special tests. Methods: A search of the current literature was conducted using PubMed, CINAHL, SPORTDiscus, ProQuest Nursing and Allied Health Sources, Scopus, and Cochrane Library. Studies were eligible if they included the following: 1) a diagnostic clinical test of musculoskeletal pathology in the ankle/foot complex, 2) description of the clinical test or tests, 3) a report of the diagnostic accuracy of the clinical test (e.g. sensitivity and specificity), and 4) an acceptable reference standard for comparison. The quality of included studies was determined by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Results: Nine diagnostic accuracy studies met the inclusion criteria for this systematic review; analyzing a total of 16 special tests of the ankle/foot complex. After assessment using the QUADAS-2, only one study had low risk of bias and low concerns regarding applicability. Conclusion: Most ankle/lower leg orthopedic special tests are confirmatory in nature and are best utilized at the end of the physical examination. Most of the studies included in this systematic review demonstrate notable biases, which suggest that results and recommendations in this review should be taken as a guide rather than an outright standard. There is need for future research with more stringent study design criteria so that more accurate diagnostic power of ankle/lower leg special tests can be determined. Level of Evidence: 3a PMID:24175128

  11. Informatics tools to improve clinical research study implementation.

    PubMed

    Brandt, Cynthia A; Argraves, Stephanie; Money, Roy; Ananth, Gowri; Trocky, Nina M; Nadkarni, Prakash M

    2006-04-01

    There are numerous potential sources of problems when performing complex clinical research trials. These issues are compounded when studies are multi-site and multiple personnel from different sites are responsible for varying actions from case report form design to primary data collection and data entry. We describe an approach that emphasizes the use of a variety of informatics tools that can facilitate study coordination, training, data checks and early identification and correction of faulty procedures and data problems. The paper focuses on informatics tools that can help in case report form design, procedures and training and data management. Informatics tools can be used to facilitate study coordination and implementation of clinical research trials.

  12. Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services

    ERIC Educational Resources Information Center

    Wang, Guoquan

    2013-01-01

    High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…

  13. Neutron Reflectivity as a Tool for Physics-Based Studies of Model Bacterial Membranes.

    PubMed

    Barker, Robert D; McKinley, Laura E; Titmuss, Simon

    2016-01-01

    The principles of neutron reflectivity and its application as a tool to provide structural information at the (sub-) molecular unit length scale from models for bacterial membranes are described. The model membranes can take the form of a monolayer for a single leaflet spread at the air/water interface, or bilayers of increasing complexity at the solid/liquid interface. Solid-supported bilayers constrain the bilayer to 2D but can be used to characterize interactions with antimicrobial peptides and benchmark high throughput lab-based techniques. Floating bilayers allow for membrane fluctuations, making the phase behaviour more representative of native membranes. Bilayers of varying levels of compositional accuracy can now be constructed, facilitating studies with aims that range from characterizing the fundamental physical interactions, through to the characterization of accurate mimetics for the inner and outer membranes of Gram-negative bacteria. Studies of the interactions of antimicrobial peptides with monolayer and bilayer models for the inner and outer membranes have revealed information about the molecular control of the outer membrane permeability, and the mode of interaction of antimicrobials with both inner and outer membranes.

  14. Integrated Wind Power Planning Tool

    NASA Astrophysics Data System (ADS)

    Rosgaard, Martin; Giebel, Gregor; Skov Nielsen, Torben; Hahmann, Andrea; Sørensen, Poul; Madsen, Henrik

    2013-04-01

    This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, with the title "Integrated Wind Power Planning Tool". The goal is to integrate a mesoscale numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long term power system planning for future wind farms as well as short term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. Using the state-of-the-art mesoscale NWP model Weather Research & Forecasting model (WRF) the forecast error is sought quantified in dependence of the time scale involved. This task constitutes a preparative study for later implementation of features accounting for NWP forecast errors in the DTU Wind Energy maintained Corwind code - a long term wind power planning tool. Within the framework of PSO 10464 research related to operational short term wind power prediction will be carried out, including a comparison of forecast quality at different mesoscale NWP model resolutions and development of a statistical wind power prediction tool taking input from WRF. The short term prediction part of the project is carried out in collaboration with ENFOR A/S; a Danish company that specialises in forecasting and optimisation for the energy sector. The integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies, and depending on the prevailing weather conditions defined by the WRF output. The output from the integrated short term prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision making problem to be solved. The high resolution of the WRF results loaded into the integrated prediction model will ensure a high accuracy

  15. An Observational Study to Evaluate the Usability and Intent to Adopt an Artificial Intelligence–Powered Medication Reconciliation Tool

    PubMed Central

    Yuan, Michael Juntao; Poonawala, Robina

    2016-01-01

    Background Medication reconciliation (the process of creating an accurate list of all medications a patient is taking) is a widely practiced procedure to reduce medication errors. It is mandated by the Joint Commission and reimbursed by Medicare. Yet, in practice, medication reconciliation is often not effective owing to knowledge gaps in the team. A promising approach to improve medication reconciliation is to incorporate artificial intelligence (AI) decision support tools into the process to engage patients and bridge the knowledge gap. Objective The aim of this study was to improve the accuracy and efficiency of medication reconciliation by engaging the patient, the nurse, and the physician as a team via an iPad tool. With assistance from the AI agent, the patient will review his or her own medication list from the electronic medical record (EMR) and annotate changes, before reviewing together with the physician and making decisions on the shared iPad screen. Methods In this study, we developed iPad-based software tools, with AI decision support, to engage patients to “self-service” medication reconciliation and then share the annotated reconciled list with the physician. To evaluate the software tool’s user interface and workflow, a small number of patients (10) in a primary care clinic were recruited, and they were observed through the whole process during a pilot study. The patients are surveyed for the tool’s usability afterward. Results All patients were able to complete the medication reconciliation process correctly. Every patient found at least one error or other issues with their EMR medication lists. All of them reported that the tool was easy to use, and 8 of 10 patients reported that they will use the tool in the future. However, few patients interacted with the learning modules in the tool. The physician and nurses reported the tool to be easy-to-use, easy to integrate into existing workflow, and potentially time-saving. Conclusions We have

  16. [Intelligent systems tools in the diagnosis of acute coronary syndromes: A systematic review].

    PubMed

    Sprockel, John; Tejeda, Miguel; Yate, José; Diaztagle, Juan; González, Enrique

    2017-03-27

    Acute myocardial infarction is the leading cause of non-communicable deaths worldwide. Its diagnosis is a highly complex task, for which modelling through automated methods has been attempted. A systematic review of the literature was performed on diagnostic test studies that applied intelligent systems tools to the diagnosis of acute coronary syndromes, using the Medline, Embase, Scopus, IEEE/IET Electronic Library, ISI Web of Science, Latindex and LILACS databases. The review process was conducted independently by 2 reviewers, and discrepancies were resolved through the participation of a third person. The operational characteristics of the studied tools were extracted. A total of 35 references met the inclusion criteria. In 22 (62.8%) cases, neural networks were used. In five studies, the performances of several intelligent systems tools were compared. Thirteen studies sought to diagnose all acute coronary syndromes, and in 22, only infarctions were studied. In 21 cases, clinical and electrocardiographic aspects were used as input data, and in 10, only electrocardiographic data were used. Most intelligent systems use the clinical context as a reference standard. High rates of diagnostic accuracy were found, with better performance from neural networks and support vector machines compared with statistical pattern recognition tools and decision trees. Extensive evidence shows that intelligent systems tools achieve a greater degree of accuracy than some clinical algorithms or scales and should thus be considered appropriate tools for supporting diagnostic decisions for acute coronary syndromes. Copyright © 2017 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.

  17. Effects of tools inserted through snake-like surgical manipulators.

    PubMed

    Murphy, Ryan J; Otake, Yoshito; Wolfe, Kevin C; Taylor, Russell H; Armand, Mehran

    2014-01-01

    Snake-like manipulators with a large, open lumen can offer improved treatment alternatives for minimally- and less-invasive surgeries. In these procedures, surgeons use the manipulator to introduce and control flexible tools in the surgical environment. This paper describes a predictive algorithm for estimating manipulator configuration given tip position for nonconstant curvature, cable-driven manipulators using energy minimization. During experimental bending of the manipulator with and without a tool inserted in its lumen, images were recorded from an overhead camera in conjunction with actuation cable tension and length. To investigate the accuracy, the estimated manipulator configuration from the model and the ground-truth configuration measured from the image were compared. Additional analysis focused on the response differences for the manipulator with and without a tool inserted through the lumen. Results indicate that the energy minimization model predicts manipulator configuration with an error of 0.24 ± 0.22 mm without tools in the lumen and 0.24 ± 0.19 mm with tools in the lumen (no significant difference, p = 0.81). Moreover, tools did not introduce noticeable perturbations in the manipulator trajectory; however, there was an increase in the force required to reach a configuration. These results support the use of the proposed estimation method for calculating the shape of the manipulator with a tool inserted in its lumen when an accuracy of at least 1 mm is required.

  18. Improved Diagnostic Accuracy of SPECT Through Statistical Analysis and the Detection of Hot Spots at the Primary Sensorimotor Area for the Diagnosis of Alzheimer Disease in a Community-Based Study: "The Osaki-Tajiri Project".

    PubMed

    Kaneta, Tomohiro; Nakatsuka, Masahiro; Nakamura, Kei; Seki, Takashi; Yamaguchi, Satoshi; Tsuboi, Masahiro; Meguro, Kenichi

    2016-01-01

    SPECT is an important diagnostic tool for dementia. Recently, statistical analysis of SPECT has been commonly used for dementia research. In this study, we evaluated the accuracy of visual SPECT evaluation and/or statistical analysis for the diagnosis (Dx) of Alzheimer disease (AD) and other forms of dementia in our community-based study "The Osaki-Tajiri Project." Eighty-nine consecutive outpatients with dementia were enrolled and underwent brain perfusion SPECT with 99mTc-ECD. Diagnostic accuracy of SPECT was tested using 3 methods: visual inspection (SPECT Dx), automated diagnostic tool using statistical analysis with easy Z-score imaging system (eZIS Dx), and visual inspection plus eZIS (integrated Dx). Integrated Dx showed the highest sensitivity, specificity, and accuracy, whereas eZIS was the second most accurate method. We also observed that a higher than expected rate of SPECT images indicated false-negative cases of AD. Among these, 50% showed hypofrontality and were diagnosed as frontotemporal lobar degeneration. These cases typically showed regional "hot spots" in the primary sensorimotor cortex (ie, a sensorimotor hot spot sign), which we determined were associated with AD rather than frontotemporal lobar degeneration. We concluded that the diagnostic abilities were improved by the integrated use of visual assessment and statistical analysis. In addition, the detection of a sensorimotor hot spot sign was useful to detect AD when hypofrontality is present and improved the ability to properly diagnose AD.

  19. Test expectancy affects metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to point students toward more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). After reading texts and judging their learning, students completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  20. Design of testbed and emulation tools

    NASA Technical Reports Server (NTRS)

    Lundstrom, S. F.; Flynn, M. J.

    1986-01-01

    The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.

  1. Methods specification for diagnostic test accuracy studies in fine-needle aspiration cytology: a survey of reporting practice.

    PubMed

    Schmidt, Robert L; Factor, Rachel E; Affolter, Kajsa E; Cook, Joshua B; Hall, Brian J; Narra, Krishna K; Witt, Benjamin L; Wilson, Andrew R; Layfield, Lester J

    2012-01-01

    Diagnostic test accuracy (DTA) studies on fine-needle aspiration cytology (FNAC) often show considerable variability in diagnostic accuracy between study centers. Many factors affect the accuracy of FNAC. A complete description of the testing parameters would help make valid comparisons between studies and determine causes of performance variation. We investigated the manner in which test conditions are specified in FNAC DTA studies to determine which parameters are most commonly specified and the frequency with which they are specified and to see whether there is significant variability in reporting practice. We identified 17 frequently reported test parameters and found significant variation in the reporting of these test specifications across studies. On average, studies reported 5 of the 17 items that would be required to specify the test conditions completely. A more complete and standardized reporting of methods, perhaps by means of a checklist, would improve the interpretation of FNAC DTA studies.

  2. Precision tool holder with flexure-adjustable, three degrees of freedom for a four-axis lathe

    DOEpatents

    Bono, Matthew J [Pleasanton, CA; Hibbard, Robin L [Livermore, CA

    2008-03-04

    A precision tool holder for precisely positioning a single point cutting tool on a 4-axis lathe, such that the center of the radius of the tool nose is aligned with the B-axis of the machine tool, so as to facilitate the machining of precision meso-scale components with complex three-dimensional shapes with sub-μm accuracy on a four-axis lathe. The device is designed to fit on a commercial diamond turning machine and can adjust the cutting tool position in three orthogonal directions with sub-micrometer resolution. In particular, the tool holder adjusts the tool position using three flexure-based mechanisms, with two flexure mechanisms adjusting the lateral position of the tool to align the tool with the B-axis, and a third flexure mechanism adjusting the height of the tool. Preferably, the flexures are driven by manual micrometer adjusters. In this manner, this tool holder simplifies the process of setting a tool with sub-μm accuracy, substantially reducing the time required to set the tool.

  3. Overlay accuracy with respect to device scaling

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Laidler, David; Cheng, Shaunee

    2012-03-01

    Overlay metrology performance is usually reported as repeatability, matching between tools or optics aberrations distorting the measurement (Tool induced shift or TIS). Over the last few years, improvement of these metrics by the tool suppliers has been impressive. But, what about accuracy? Using different target types, we have already reported small differences in the mean value as well as fingerprint [1]. These differences make the correctables questionable. Which target is correct and therefore which translation, scaling etc. values should be fed back to the scanner? In this paper we investigate the sources of these differences, using several approaches. First, we measure the response of different targets to offsets programmed in a test vehicle. Second, we check the response of the same overlay targets to overlay errors programmed into the scanner. We compare overlay target designs; what is the contribution of the size of the features that make up the target? We use different overlay measurement techniques; is DBO (Diffraction Based Overlay) more accurate than IBO (Image Based Overlay)? We measure overlay on several stacks; what is the stack contribution to inaccuracy? In conclusion, we offer an explanation for the observed differences and propose a solution to reduce them.
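
    The "correctables" questioned above (translation, scaling, etc.) are the coefficients of a model fitted to overlay errors measured across the wafer and fed back to the scanner. A minimal one-dimensional sketch follows, fitting overlay ≈ T + S·x by least squares; the function name and all numbers are illustrative, not from the paper.

```python
# Scanner correctables sketch: fit translation (nm) and scaling (nm/mm)
# terms to overlay errors measured at several field positions.
# Data below are synthetic and exactly linear for clarity.

def fit_translation_scaling(positions, overlay):
    """Least-squares fit of overlay = T + S * x; returns (T, S)."""
    n = len(positions)
    mx = sum(positions) / n
    my = sum(overlay) / n
    sxx = sum((x - mx) ** 2 for x in positions)
    sxy = sum((x - mx) * (y - my) for x, y in zip(positions, overlay))
    scaling = sxy / sxx
    translation = my - scaling * mx
    return translation, scaling

x_mm = [-100.0, -50.0, 0.0, 50.0, 100.0]    # field positions on the wafer
ovl_nm = [-1.0, 1.5, 4.0, 6.5, 9.0]         # measured overlay errors
t, s = fit_translation_scaling(x_mm, ovl_nm)
print(f"translation={t:.1f} nm, scaling={s:.3f} nm/mm")
```

    The paper's point is that if two target types yield different measured overlay, the fitted T and S differ too, so the scanner receives different corrections depending on which target is trusted.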

  4. A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.

    PubMed

    Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei

    2014-12-16

    The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient. Two intensity-modulated radiation therapy (IMRT) plans were developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans were developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For 3DCRT and IMRT plans, the mean dose differences for GTV between the CCC and MC increased with decreasing GTV volume. For IMRT, the mean dose differences were found to be higher than those of 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm³, the mean doses calculated by CCC and MC showed almost no difference. PBC shows large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but the CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately
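
    The metrics compared above (mean dose difference between algorithms, lung V20/V10/V5) are simple functions of the per-voxel dose arrays. The sketch below shows both calculations on an invented toy dose array; VX is the percentage of voxels receiving at least X Gy.

```python
# Dose-comparison sketch with invented voxel doses (Gy); not patient data.

def mean_dose_pct_diff(dose_a, dose_b):
    """Percent difference of mean dose, algorithm A relative to B."""
    ma = sum(dose_a) / len(dose_a)
    mb = sum(dose_b) / len(dose_b)
    return 100.0 * (ma - mb) / mb

def v_x(dose, threshold_gy):
    """VX metric: percentage of voxels receiving >= threshold_gy."""
    return 100.0 * sum(1 for d in dose if d >= threshold_gy) / len(dose)

lung = [2.0, 4.0, 6.0, 11.0, 22.0, 30.0, 8.0, 1.0]   # toy ipsilateral-lung doses
print(v_x(lung, 20), v_x(lung, 5))                   # → 25.0 62.5  (V20, V5)
```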

  5. Estimating Tool-Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool.

    PubMed

    Zhao, Baoliang; Nelson, Carl A

    2016-10-01

Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, because direct touch of the surgical site is lost, surgeons may be prone to exerting larger forces and causing tissue damage. To quantify tool-tissue interaction forces, researchers have tried to attach various kinds of sensors to the surgical tools. Such sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder their normal function; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method that estimates tool-tissue interaction forces from the drive motors' currents, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this estimation method, it is possible to implement force feedback on existing robotic surgical systems without any added sensors. This could enable a haptic surgical robot that is compatible with existing sterilization methods and surgical procedures, so that the surgeon obtains tool-tissue interaction forces in real time, increasing surgical efficiency and safety.
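The sensorless estimation idea can be sketched with a toy model: motor torque is proportional to current (tau = K_t * I), modeled friction is subtracted, and the jaw geometry converts joint torque to tip force. Every constant and function name below is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

# Illustrative constants only; real values would come from motor datasheets
# and calibration of the actual grasper prototype (not given in the abstract).
K_T = 0.0234          # motor torque constant [N*m/A] (assumed)
TAU_FRICTION = 0.002  # modeled Coulomb friction torque [N*m] (assumed)
GEAR_RATIO = 10.0     # transmission ratio, motor -> jaw joint (assumed)
JAW_LENGTH = 0.02     # lever arm from joint to jaw tip [m] (assumed)

def estimate_tip_force(motor_current, joint_velocity):
    """Estimate grasp force at the jaw tip from the drive motor's current.

    tau_motor = K_T * I; Coulomb friction opposes the direction of motion;
    the transmission scales torque up to the joint; F = tau_joint / lever arm.
    """
    tau_motor = K_T * motor_current
    tau_joint = GEAR_RATIO * (tau_motor - TAU_FRICTION * np.sign(joint_velocity))
    return tau_joint / JAW_LENGTH

force = estimate_tip_force(motor_current=0.5, joint_velocity=1.0)  # -> ~4.85 N
```

A real implementation would also compensate inertial torques during acceleration, which this static sketch omits.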

  6. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  7. Exploration Medical System Trade Study Tools Overview

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Myers, J.; Latorella, K.; Cerro, J.; Hanson, A.; Hailey, M.; Middour, C.

    2018-01-01

ExMC is creating an ecosystem of tools to enable well-informed medical system trade studies. The suite of tools addresses important implementation aspects of the space medical capabilities trade space and is being built using knowledge from the medical community regarding the unique aspects of space flight. Two integrating models, a systems engineering model and a medical risk analysis model, tie the tools together to produce an integrated assessment of the medical system and its ability to achieve medical system target requirements. This presentation will provide an overview of the various tools in the ecosystem, focusing first on the tools that supply its foundational information. Specifically, the talk will describe how information on how medicine will be practiced is captured and categorized for efficient use in the tool suite: for example, which conditions will be planned for in-mission treatment, planned medical activities (e.g., periodic physical exams), required medical capabilities (e.g., provide imaging), and options to implement those capabilities (e.g., an ultrasound device). Database storage and configuration management will also be discussed. The presentation will include an overview of how these information tools will be tied to parameters in a Systems Modeling Language (SysML) model, allowing traceability to system behavioral, structural, and requirements content. The discussion will also describe an HRP-led enhanced risk assessment model developed to provide quantitative insight into each capability's contribution to mission success. Key outputs from these tools, to be shared with the space medical and exploration mission development communities, will be assessments of how well medical system implementation options satisfy requirements and of per-capability contributions toward achieving them.

  8. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have studied the accuracy of parallel mechanisms, but further effort is required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Sensitivity analysis is then performed on the uncompensatable error sources: a sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze their influence on the accuracy of the end-effector. The results show that orientation error sources have a larger effect on end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. Using a genetic algorithm, the allocation of tolerances to each component is determined, yielding tolerance ranges for ten kinds of geometric error sources. These results can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
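The tolerance-allocation step above — minimize manufacturing cost subject to an end-effector accuracy budget — can be sketched with a bare-bones mutate-and-select loop standing in for the paper's genetic algorithm. The sensitivities, cost weights, and budget below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

sens = np.array([0.8, 0.5, 1.2, 0.3])    # assumed error-sensitivity coefficients
cost_k = np.array([1.0, 0.6, 1.5, 0.4])  # assumed cost weights per error source
BUDGET = 0.05                            # allowed end-effector error [mm] (assumed)

def cost(t):
    """Reciprocal cost model: tighter tolerances are more expensive."""
    return float(np.sum(cost_k / t))

def feasible(t):
    """Worst-case end-effector error stays within the accuracy budget."""
    return float(np.sum(sens * t)) <= BUDGET

# Start from a feasible population, then mutate and keep cheaper feasible
# candidates -- a crude stand-in for the genetic algorithm in the paper.
pop = rng.uniform(1e-3, BUDGET / sens.sum(), size=(50, 4))
best = min((p for p in pop if feasible(p)), key=cost)
for _ in range(2000):
    child = np.clip(best * rng.normal(1.0, 0.05, size=4), 1e-4, None)
    if feasible(child) and cost(child) < cost(best):
        best = child
```

The optimizer pushes tolerances as loose as the accuracy constraint permits, since looser tolerances are cheaper under the reciprocal cost model.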

  9. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    PubMed

    McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H

    2018-01-23

Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (which produced 64 items) and the Delphi process (which provided feedback on 7 proposed items; 1 item was later split into 2) together identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. The 27-item…

  10. The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images.

    PubMed

    Mitry, Danny; Zutis, Kris; Dhillon, Baljean; Peto, Tunde; Hayat, Shabina; Khaw, Kay-Tee; Morgan, James E; Moncur, Wendy; Trucco, Emanuele; Foster, Paul J

    2016-09-01

Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool, comparing it to expert classification. We used 100 retinal fundus photographs with predetermined disease criteria selected by two experts from a large cohort study. The Amazon Mechanical Turk Web platform was used to drive traffic to our site so that anonymous workers could perform a classification and annotation task on the fundus photographs in our dataset after a short training exercise. Three groups were assessed: masters only, nonmasters only, and nonmasters with compulsory training. We calculated the sensitivity, specificity, and area under the curve (AUC) of receiver operating characteristic (ROC) plots for all classifications compared to expert grading, and used the Dice coefficient and consensus threshold to assess annotation accuracy. In total, we received 5389 annotations for 84 images (excluding 16 training images) in 2 weeks. A specificity of 71% (95% confidence interval [CI], 69%-74%) and a sensitivity of 87% (95% CI, 86%-88%) were achieved across all classifications. The AUC for all classifications combined was 0.93 (95% CI, 0.91-0.96). For image annotation, a maximal Dice coefficient (~0.6) was achieved with a consensus threshold of 0.25. This study supports the hypothesis that annotation of abnormalities in retinal images by ophthalmologically naive individuals is comparable to expert annotation. The highest AUC and agreement with expert annotation were achieved in the nonmasters-with-compulsory-training group. Crowdsourcing as a technique for retinal image analysis may be comparable to expert grading and has the potential to deliver timely, accurate, and cost-effective image analysis.
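The two annotation-accuracy measures used in the study — the Dice coefficient and a consensus threshold over worker masks — are straightforward to compute. The 0.25 threshold matches the abstract; the toy masks and function names are illustrative only.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def consensus_mask(worker_masks, threshold=0.25):
    """Pixels flagged by at least `threshold` of the crowd workers."""
    return np.asarray(worker_masks, dtype=float).mean(axis=0) >= threshold

# Toy example: 3 workers annotate a 2x2 image; compare consensus to an expert.
workers = [[[1, 1], [0, 0]],
           [[1, 0], [0, 0]],
           [[1, 1], [0, 1]]]
expert = [[1, 1], [0, 0]]
agreement = dice(consensus_mask(workers), expert)  # -> 0.8
```

With the 0.25 threshold and 3 workers, a single worker's flag is enough to mark a pixel, which is why the consensus mask here is a superset of the expert's.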

  11. Accuracy assessment of surgical planning and three-dimensional-printed patient-specific guides for orthopaedic osteotomies.

    PubMed

    Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart

    2017-06-01

This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones, in two phases. In the first phase, the effect of guide design and of oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and the average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that, for one model osteotomy, the use of guides resulted in a more accurate cut than the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operating room with patient-specific guides, which accurately guide the surgical tools to perform the desired cuts.

  12. A Systematic Review to Uncover a Universal Protocol for Accuracy Assessment of 3-Dimensional Virtually Planned Orthognathic Surgery.

    PubMed

    Gaber, Ramy M; Shaheen, Eman; Falter, Bart; Araya, Sebastian; Politis, Constantinus; Swennen, Gwen R J; Jacobs, Reinhilde

    2017-11-01

The aim of this study was to systematically review methods used for assessing the accuracy of 3-dimensional virtually planned orthognathic surgery, in an attempt to reach an objective assessment protocol that could be used universally. A systematic review of the currently available literature, published until September 12, 2016, was conducted using PubMed as the primary search engine. We performed secondary searches using the Cochrane Database, clinical trial registries, Google Scholar, and Embase, as well as a bibliography search. Included articles were required to state clearly that 3-dimensional virtual planning was used and accuracy assessment performed, along with validation of the planning and/or assessment method. Descriptive statistics and quality assessment of the included articles were performed. The initial search yielded 1,461 studies; only 7 were included in our review. Considerable variability was found in the methods used for 1) accuracy assessment of virtually planned orthognathic surgery and 2) validation of the tools used. Included studies were of moderate quality; reviewers' agreement regarding quality was 0.5 by the Cohen κ test. On the basis of the findings of this review, it is evident that the literature lacks consensus regarding accuracy assessment. Hence, a protocol is suggested for accuracy assessment of virtually planned orthognathic surgery with the lowest margin of error. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  13. Accuracy of a Digital Weight Scale Relative to the Nintendo Wii in Measuring Limb Load Asymmetry

    PubMed Central

    Kumar, NS Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah

    2014-01-01

[Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenience sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing, with three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and the Kolmogorov-Smirnov (K-S) test. The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of the digital weight scale relative to the Nintendo Wii. The D statistic from the K-S test was 0.16, confirming that there was no significant difference in measurement between the two devices. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. Its low price, easy availability, and maneuverability make it a good potential tool for measuring limb load asymmetry in clinical settings. PMID:25202181

  14. Accuracy of a digital weight scale relative to the nintendo wii in measuring limb load asymmetry.

    PubMed

    Kumar, Ns Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah

    2014-08-01

[Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenience sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing, with three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and the Kolmogorov-Smirnov (K-S) test. The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of the digital weight scale relative to the Nintendo Wii. The D statistic from the K-S test was 0.16, confirming that there was no significant difference in measurement between the two devices. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. Its low price, easy availability, and maneuverability make it a good potential tool for measuring limb load asymmetry in clinical settings.
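The abstracts do not give the exact symmetry-index formula, so the sketch below assumes one common formulation: the absolute load difference expressed as a percentage of the mean limb load, averaged over the three standing trials.

```python
def symmetry_index(load_left, load_right):
    """Limb load asymmetry in percent; 0 means perfectly symmetric loading.

    Assumed formulation: |L - R| / (0.5 * (L + R)) * 100. The study may
    use a different (e.g. signed) variant.
    """
    mean_load = 0.5 * (load_left + load_right)
    return abs(load_left - load_right) / mean_load * 100.0

# Hypothetical loads [kg] from three static-standing trials, as in the protocol.
trials = [(45.0, 35.0), (44.0, 36.0), (46.0, 34.0)]
mean_si = sum(symmetry_index(l, r) for l, r in trials) / len(trials)  # -> 25.0
```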

  15. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
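A minimal discrete Kalman filter for a single translational axis illustrates the predict/update machinery described above. The constant-velocity model, noise covariances, and time step are placeholder assumptions, not the paper's actual terminal-area filter design.

```python
import numpy as np

dt = 1.0                                # update interval [s] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # position-only measurement model
Q = 1e-2 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # moderate-accuracy sensor noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for the state x = [position, velocity]."""
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    y = np.array([z]) - H @ x           # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # corrected state
    P = (np.eye(2) - K @ H) @ P         # corrected covariance
    return x, P

# Track a target moving at 1 m/s from (here, noise-free) position fixes.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 51):
    x, P = kalman_step(x, P, z=float(k))
```

The real-time approximations the abstract mentions (e.g., precomputed gains) would replace the per-step `inv(S)` here; this sketch keeps the textbook form for clarity.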

  16. Results of a remote multiplexer/digitizer unit accuracy and environmental study

    NASA Technical Reports Server (NTRS)

    Wilner, D. O.

    1977-01-01

    A remote multiplexer/digitizer unit (RMDU), a part of the airborne integrated flight test data system, was subjected to an accuracy study. The study was designed to show the effects of temperature, altitude, and vibration on the RMDU. The RMDU was subjected to tests at temperatures from -54 C (-65 F) to 71 C (160 F), and the resulting data are presented here, along with a complete analysis of the effects. The methods and means used for obtaining correctable data and correcting the data are also discussed.

  17. Methodology to Define Delivery Accuracy Under Current Day ATC Operations

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Robinson, John E., III

    2015-01-01

In order to enable arrival management concepts and solutions in a NextGen environment, ground-based sequencing and scheduling functions have been developed to support metering operations in the National Airspace System. These sequencing and scheduling algorithms and tools are designed to aid air traffic controllers in developing an overall arrival strategy. The ground systems being developed will support the management of aircraft to their Scheduled Times of Arrival (STAs) at flow-constrained meter points. This paper presents a methodology for determining the undelayed delivery accuracy for current-day air traffic control operations. This new method analyzes the undelayed delivery accuracy at meter points in order to understand changes in desired flow rates and to enable definition of metrics that will allow near-future ground automation tools to successfully achieve the desired separation at the meter points, enabling aircraft to meet their STAs while performing high-precision arrivals. The research presents a possible implementation that would allow the delivery performance of current tools to be estimated and delivery accuracy requirements for future tools to be defined, which allows analysis of Estimated Time of Arrival (ETA) accuracy for Time-Based Flow Management (TBFM) and the FAA's Traffic Management Advisor (TMA). TMA is a deployed system that generates scheduled time-of-arrival constraints for en-route air traffic controllers in the US. This new method of automated analysis provides a repeatable evaluation of the delay metrics for current-day traffic, new releases of TMA, implementations of different tools, and different airspace environments. The method utilizes a wide set of data from the Operational TMA-TBFM Repository (OTTR) system, which processes raw data collected by the FAA from operational TMA systems at all ARTCCs in the nation. The OTTR system generates daily reports concerning ATC status, intent and actions. Due to its

  18. Updating Risk Prediction Tools: A Case Study in Prostate Cancer

    PubMed Central

    Ankerst, Donna P.; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J.; Feng, Ziding; Sanda, Martin G.; Partin, Alan W.; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M.

    2013-01-01

Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that, as cancer research moves forward and new biomarkers and risk factors are discovered, the risk algorithms need to be updated to include them. Typically, the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in a study external to the one used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [−2]proPSA measured in an external case-control study performed in Texas, U.S. Recent state-of-the-art methods for validating risk prediction tools and evaluating the improvement of updated over original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. PMID:22095849

  19. Updating risk prediction tools: a case study in prostate cancer.

    PubMed

    Ankerst, Donna P; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J; Feng, Ziding; Sanda, Martin G; Partin, Alan W; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M

    2012-01-01

Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that, as cancer research moves forward and new biomarkers and risk factors are discovered, the risk algorithms need to be updated to include them. Typically, the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in a study external to the one used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [-2]proPSA measured in an external case-control study performed in Texas, U.S. Recent state-of-the-art methods for validating risk prediction tools and evaluating the improvement of updated over original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
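The core of the Bayes-rule update described in both records can be reduced to three steps: convert the original tool's risk to odds, multiply by the new marker's likelihood ratio estimated from the external study, and convert back to a probability. The numbers below are illustrative, not values from the Prostate Cancer Prevention Trial Risk Calculator.

```python
def bayes_update_risk(prior_risk, likelihood_ratio):
    """Update a risk estimate with a new marker via Bayes rule.

    posterior odds = prior odds * LR, where
    LR = P(marker value | case) / P(marker value | control),
    estimated from the external (e.g., case-control) study.
    """
    prior_odds = prior_risk / (1.0 - prior_risk)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A hypothetical 25% baseline risk and a marker with LR = 2.
updated = bayes_update_risk(prior_risk=0.25, likelihood_ratio=2.0)  # -> 0.4
```

A key property of this formulation is that the external study only needs to supply the marker's case/control distributions, not the original cohort's raw data.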

  20. Diagnostic accuracy of serological diagnosis of hepatitis C and B using dried blood spot samples (DBS): two systematic reviews and meta-analyses.

    PubMed

    Lange, Berit; Cohn, Jennifer; Roberts, Teri; Camp, Johannes; Chauffour, Jeanne; Gummadi, Nina; Ishizaki, Azumi; Nagarathnam, Anupriya; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Easterbrook, Philippa; Denkinger, Claudia M

    2017-11-01

Dried blood spots (DBS) are a convenient tool to enable diagnostic testing for viral diseases due to transport, handling and logistical advantages over conventional venous blood sampling. A better understanding of the performance of serological testing for hepatitis C virus (HCV) and hepatitis B virus (HBV) from DBS is important to enable more widespread use of this sampling approach in resource-limited settings, and to inform the 2017 World Health Organization (WHO) guidance on testing for HBV/HCV. We conducted two systematic reviews and meta-analyses on the diagnostic accuracy of HCV antibody (HCV-Ab) and HBV surface antigen (HBsAg) testing from DBS samples compared to venous blood samples. MEDLINE, EMBASE, Global Health and the Cochrane Library were searched for studies that assessed diagnostic accuracy with DBS and agreement between DBS and venous sampling. Heterogeneity of results was assessed, and where possible a pooled analysis of sensitivity and specificity was performed using a bivariate analysis with maximum likelihood estimation and 95% confidence intervals (95%CI). We conducted a narrative review on the impact of varying storage conditions or limits of detection in subsets of samples. The QUADAS-2 tool was used to assess risk of bias. For the diagnostic accuracy of HBsAg from DBS compared to venous blood, 19 studies were included in a quantitative meta-analysis, and 23 in a narrative review. Pooled sensitivity and specificity were 98% (95%CI: 95-99%) and 100% (95%CI: 99-100%), respectively. For the diagnostic accuracy of HCV-Ab from DBS, 19 studies were included in a pooled quantitative meta-analysis, and 23 studies in a narrative review. Pooled estimates of sensitivity and specificity were 98% (95%CI: 95-99%) and 99% (95%CI: 98-100%), respectively. Overall quality of studies and heterogeneity were rated as moderate in both systematic reviews. HCV-Ab and HBsAg testing using DBS compared to venous blood sampling was associated with excellent diagnostic accuracy.

  1. Students' Problem Solving as Mediated by Their Cognitive Tool Use: A Study of Tool Use Patterns

    ERIC Educational Resources Information Center

    Liu, M.; Horton, L. R.; Corliss, S. B.; Svinicki, M. D.; Bogard, T.; Kim, J.; Chang, M.

    2009-01-01

    The purpose of this study was to use multiple data sources, both objective and subjective, to capture students' thinking processes as they were engaged in problem solving, examine the cognitive tool use patterns, and understand what tools were used and why they were used. The findings of this study confirmed previous research and provided clear…

  2. Intervendor Differences in the Accuracy of Detecting Regional Functional Abnormalities: A Report From the EACVI-ASE Strain Standardization Task Force.

    PubMed

    Mirea, Oana; Pagourelias, Efstathios D; Duchenne, Jurgen; Bogaert, Jan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe

    2018-01-01

The purpose of this study was to compare the accuracy of vendor-specific and vendor-independent strain analysis tools in detecting regional myocardial function abnormality in a clinical setting. Speckle tracking echocardiography has been considered a promising tool for the quantitative assessment of regional myocardial function; however, the potential differences among speckle tracking software packages in their accuracy in identifying regional abnormality have not been studied extensively. Sixty-three subjects (5 healthy volunteers and 58 patients) were examined with 7 different ultrasound machines over 5 days. All patients had experienced a previous myocardial infarction, which was characterized by cardiac magnetic resonance with late gadolinium enhancement. Segmental peak systolic (PS), end-systolic (ES) and post-systolic strain (PSS) measurements were obtained with 6 vendor-specific software tools and 2 independent strain analysis tools. Strain parameters were compared between fully scarred and scar-free segments. Receiver-operating characteristic curves testing the ability of strain parameters and derived indexes to discriminate between these segments were compared among vendors. The average strain values calculated for normal segments ranged from -15.1% to -20.7% for PS, -14.9% to -20.6% for ES, and -16.1% to -21.4% for PSS. Significantly lower strain values (p < 0.05) were found in segments with transmural scar by all vendors, with values ranging from -7.4% to -11.1% for PS, -7.7% to -10.8% for ES, and -10.5% to -14.3% for PSS. Accuracy in identifying transmural scar ranged from acceptable to excellent (area under the curve 0.74 to 0.83 for PS and ES, and 0.70 to 0.78 for PSS). Significant differences were found among vendors (p < 0.05). All vendors had significantly lower accuracy in detecting scars in the basal segments compared with scars in the apex (p < 0.05). The accuracy of identifying regional abnormality differs significantly among

  3. Testing the Efficacy of an Education-Based Training Tool to Improve Diagnostic Accuracy of Obsessive-Compulsive Disorder

    ERIC Educational Resources Information Center

    Glazier, Kimberly

    2014-01-01

    Objective: The study aimed to increase awareness of OCD symptomatology among doctoral students in clinical, counseling and school psychology through the implementation of a comprehensive OCD education-based training tool. Method: The program directors across all APA-accredited clinical, counseling, and school psychology doctoral graduate programs…

  4. Toward an Attention-Based Diagnostic Tool for Patients With Locked-in Syndrome.

    PubMed

    Lesenfants, Damien; Habbal, Dina; Chatelle, Camille; Soddu, Andrea; Laureys, Steven; Noirhomme, Quentin

    2018-03-01

Electroencephalography (EEG) has been proposed as a supplemental tool for reducing clinical misdiagnosis in severely brain-injured populations, helping to distinguish conscious from unconscious patients. We studied the use of spectral entropy as a measure of focal attention in order to develop a motor-independent, portable, and objective diagnostic tool for patients with locked-in syndrome (LIS), addressing the issues of accuracy and training requirements. Data from 20 healthy volunteers, 6 LIS patients, and 10 patients with vegetative state/unresponsive wakefulness syndrome (VS/UWS) were included. Spectral entropy was computed during a gaze-independent 2-class (attention vs. rest) paradigm and compared with classification based on EEG rhythms (delta, theta, alpha, and beta). Spectral entropy classification during the attention-rest paradigm showed 93% and 91% accuracy in healthy volunteers and LIS patients, respectively. VS/UWS patients were at chance level. EEG rhythm classification reached a lower accuracy than spectral entropy. Resting-state EEG spectral entropy could not distinguish individual VS/UWS patients from LIS patients. The present study provides evidence that an EEG-based measure of attention could detect command-following in patients with severe motor disabilities. The entropy system detected a response to command in all healthy subjects and LIS patients, while none of the VS/UWS patients showed a response to command using this system.
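Spectral entropy, as used above, is the Shannon entropy of the normalized power spectrum: rhythmic, focused activity concentrates power in few frequency bins (low entropy), while broadband activity spreads it across many bins (high entropy). This sketch uses a plain FFT periodogram; the study's exact estimator and frequency bands are not given in the abstract.

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy in [0, 1]: near 0 for a pure tone,
    approaching 1 for white noise."""
    psd = np.abs(np.fft.rfft(x)) ** 2     # periodogram of the epoch
    psd = psd / psd.sum()                 # normalize to a probability mass
    nz = psd[psd > 0]                     # drop empty bins before the log
    return float(-(nz * np.log2(nz)).sum() / np.log2(psd.size))

rng = np.random.default_rng(0)
n = 512
tone = np.sin(2 * np.pi * 10 * np.arange(n) / n)  # narrowband, rhythmic signal
noise = rng.standard_normal(n)                    # broadband signal
```

Here `spectral_entropy(tone)` is near 0 and `spectral_entropy(noise)` is near 1, which is the contrast an attention-vs-rest classifier would exploit.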

  5. Knowledge Mapping: A Multipurpose Task Analysis Tool.

    ERIC Educational Resources Information Center

    Esque, Timm J.

    1988-01-01

    Describes knowledge mapping, a tool developed to increase the objectivity and accuracy of task difficulty ratings for job design. Application in a semiconductor manufacturing environment is discussed, including identifying prerequisite knowledge for a given task; establishing training development priorities; defining knowledge levels; identifying…

  6. Accuracy Study of a 2-Component Point Doppler Velocimeter (PDV)

    NASA Technical Reports Server (NTRS)

    Kuhlman, John; Naylor, Steve; James, Kelly; Ramanath, Senthil

    1997-01-01

    A two-component Point Doppler Velocimeter (PDV) which has recently been developed is described, and a series of velocity measurements obtained to quantify the accuracy of the PDV system are summarized. This PDV system uses molecular iodine vapor cells as frequency-discriminating filters to determine the Doppler shift of laser light scattered off seed particles in a flow. The majority of results obtained to date are for the mean velocity of a rotating wheel, although preliminary data are described for fully developed turbulent pipe flow. Accuracy of the present wheel velocity data is approximately +/- 1% of full scale, while linearity of a single channel is on the order of +/- 0.5% (i.e., +/- 0.6 m/sec and +/- 0.3 m/sec, out of 57 m/sec, respectively). The observed linearity of these results is on the order of the accuracy to which the speed of the rotating wheel has been set for individual data readings. The absolute accuracy of the rotating wheel data is shown to be consistent with the level of repeatability of the cell calibrations. The preliminary turbulent pipe flow data show consistent turbulence intensity values, and mean axial velocity profiles generally agree with pitot probe data. However, there is at present an offset error in the radial velocity which is on the order of 5-10% of the mean axial velocity.
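    As a quick consistency check on the quoted figures, ±1% and ±0.5% of the 57 m/sec full-scale speed reproduce the stated absolute bounds:

```python
full_scale = 57.0                      # m/sec, wheel speed quoted in the abstract
accuracy = 0.01 * full_scale           # ±1% of full scale
linearity = 0.005 * full_scale         # ±0.5% of full scale
assert round(accuracy, 1) == 0.6       # matches the quoted ±0.6 m/sec
assert round(linearity, 1) == 0.3      # matches the quoted ±0.3 m/sec
```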

  7. Voice Identification: Levels-of-Processing and the Relationship between Prior Description Accuracy and Recognition Accuracy.

    ERIC Educational Resources Information Center

    Walter, Todd J.

    A study examined whether a person's ability to accurately identify a voice is influenced by factors similar to those proposed by the Supreme Court for eyewitness identification accuracy. In particular, the Supreme Court has suggested that a person's prior description accuracy of a suspect, degree of attention to a suspect, and confidence in…

  8. A noncontact laser technique for circular contouring accuracy measurement

    NASA Astrophysics Data System (ADS)

    Wang, Charles; Griffin, Bob

    2001-02-01

    The worldwide competition in manufacturing frequently requires the high-speed machine tools to deliver contouring accuracy in the order of a few micrometers, while moving at relatively high feed rates. Traditional test equipment is rather limited in its capability to measure contours of small radius at high speed. Described here is a new noncontact laser measurement technique for the test of circular contouring accuracy. This technique is based on a single-aperture laser Doppler displacement meter with a flat mirror as the target. It is of a noncontact type with the ability to vary the circular path radius continuously at data rates of up to 1000 Hz. Using this instrument, the actual radius, feed rate, velocity, and acceleration profiles can also be determined. The basic theory of operation, the hardware setup, the data collection, the data processing, and the error budget are discussed.

  9. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    (Hungarian Meteorological Service; Budapest University of Technology and Economics) Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurement of precipitation is essential for deriving correct trends. The problem is that precipitation measurements are affected by systematic errors leading to an underestimation of actual precipitation; these errors vary by precipitation type and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities: · use of the results and conclusions of International Precipitation Measurement Intercomparisons; · building standard reference gauges (DFIR, pit gauge) and making our own investigation. In 1999 the HMS attempted its own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of the DFIR) was very poor (we had several winters without a significant amount of snow, while the condition of the DFIR was continuously deteriorating). This problem created the need for a new approach: modelling carried out by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a full-featured fluid-dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows: · How do the different types of gauges deform the airflow around themselves? · Can a quantitative estimate of the wind-induced error be given? · How does the use

  10. The Effect of Flexible Pavement Mechanics on the Accuracy of Axle Load Sensors in Vehicle Weigh-in-Motion Systems.

    PubMed

    Burnos, Piotr; Rys, Dawid

    2017-09-07

    Weigh-in-Motion systems are tools for protecting road pavements from the adverse effects of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving weighing accuracy, which is currently insufficient for direct enforcement against overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not yet been explained. The aim of our study is to fill this gap in the knowledge. The explanation presented in the paper is based on pavement/sensor mechanics and the application of multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important for Weigh-in-Motion systems intended for direct enforcement and will help to improve the accuracy of weighing results.

  11. Accuracy of two geocoding methods for geographic information system-based exposure assessment in epidemiological studies.

    PubMed

    Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice

    2017-02-24

    Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi-square tests. Kappa statistics were computed to assess agreement of the positional accuracy of methods A and B with the reference method, overall, by time-period and by urban/rural status of addresses. With methods A and B respectively, 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%). In the reference layer, geocoding accuracy was higher in urban areas than in rural areas (74.4% vs. 10.5% of addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for methods A and B respectively. Positional agreement of methods A and B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61, respectively. Our study demonstrates the feasibility of geocoding
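    Agreement figures like the kappa coefficients above come from paired accuracy categories; a minimal sketch of Cohen's kappa (the category labels and values below are hypothetical, not the study's data) is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n           # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical positional-accuracy categories for four addresses
method_a = ["address", "address", "street", "town"]
reference = ["address", "street", "street", "town"]
print(cohens_kappa(method_a, reference))  # ≈ 0.64, 'substantial' on the Landis-Koch scale
```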

  12. The accuracy of ultrasound for measurement of mobile- bearing motion.

    PubMed

    Aigner, Christian; Radl, Roman; Pechmann, Michael; Rehak, Peter; Stacher, Rudolf; Windhager, Reinhard

    2004-04-01

    After anterior cruciate ligament-sacrificing total knee replacement, mobile bearings sometimes have paradoxic movement, but the implications of such movement on function, wear, and implant survival are not known. To study this potential problem, accurate, reliable, widely available, and inexpensive tools for in vivo mobile-bearing motion analyses are needed. We developed a method using 8-MHz ultrasound to analyze mobile-bearing motion and ascertained its accuracy, precision, and reliability compared with plain and standard digital radiographs. The anterior rim of the mobile bearing was the target for all methods. The radiographs were taken in a horizontal plane at neutral rotation and at incremental external and internal rotations. Five investigators examined four positions of the mobile bearing with all three methods. The accuracy and precision were: ultrasound, 0.7 mm and 0.2 mm; digital radiographs, 0.4 mm and 0.2 mm; and plain radiographs, 0.7 mm and 0.3 mm. The interrater and intrarater reliability ranged between 0.3 to 0.4 mm and 0.1 to 0.2 mm, respectively. The difference between the methods was not significant at neutral rotation, but ultrasound was significantly more accurate at any rotation of one degree or greater. Ultrasound at 8 MHz provides an accuracy and reliability suitable for evaluation of in vivo meniscal bearing motion. Whether this method or others are sufficiently accurate to detect motion leading to abnormal wear is not known.

  13. Accuracy of clinical diagnosis versus the World Health Organization case definition in the Amoy Garden SARS cohort.

    PubMed

    Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert

    2003-11-01

    To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
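    The reported metrics all follow from a standard 2x2 confusion table; a sketch with hypothetical counts (the abstract does not give the raw cell counts) is:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 confusion table."""
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    return sens, spec, acc, lr_pos, lr_neg

# Hypothetical counts, not the Amoy Garden data
sens, spec, acc, lr_pos, lr_neg = diagnostic_metrics(tp=90, fp=4, fn=10, tn=96)
assert (sens, spec) == (0.9, 0.96)
```

    A large LR+ (e.g., the 21.1 reported for physician judgement) greatly raises post-test probability, while an LR- near 1 (the WHO criteria's 0.67) barely lowers it, which is why the abstract calls clinician judgement the more powerful predictor.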

  14. Accuracy of Carbohydrate Counting in Adults

    PubMed Central

    Rushton, Wanda E.

    2016-01-01

    In Brief This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531

  15. How to address patients' defences: a pilot study of the accuracy of defence interpretations and alliance.

    PubMed

    Junod, Olivier; de Roten, Yves; Martinez, Elena; Drapeau, Martin; Despland, Jean-Nicolas

    2005-12-01

    This pilot study examined the accuracy of therapist defence interpretations (TAD) in high-alliance patients (N = 7) and low-alliance patients (N = 8). TAD accuracy was assessed in the two subgroups by comparing, for each case, the patient's most frequent defensive level with the most frequent defensive level addressed by the therapist when making defence interpretations. Results show that in high-alliance patient-therapist dyads, therapists tend to address an accurate or higher (more mature) defensive level than the patient's most frequent level. In contrast, therapists address a lower (more immature) defensive level in low-alliance dyads. These results are discussed along with possible ways to better assess TAD accuracy.

  16. Reservoir analog studies using multimodel photogrammetry: A new tool for the petroleum industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dueholm, K.S.; Olsen, T.

    1993-12-01

    Attempts to increase the recovery from hydrocarbon reservoirs involve high-precision geological work. Siliciclastic depositional environments must be interpreted accurately and combined with an analysis of the three-dimensional shape of the sand bodies. An advanced photogrammetric method called multimodel stereo restitution is a potential new tool for the petroleum industry when outcrop investigations are necessary, such as in reservoir analog studies. The method is based on very simple field photography techniques, allowing the geologist to use his own standard small-frame camera. It can be applied to geological studies of virtually any scale, and outcrop mapping is significantly improved in detail, accuracy, and volume. The method is especially useful when investigating poorly accessible exposures on steep mountain faces and canyon walls. The use of multimodel photogrammetry is illustrated by a study of Upper Cretaceous deltaic sediments from the Atane Formation of West Greenland. Further potential applications of the method in petroleum explorations are discussed. True-scale mapping of lithologies in large continuous exposures can be used in understanding basin evolution and in seismic modeling. Close-range applications can be used when modeling fault geometries, and in studies of individual bed forms, clay laminae, cemented horizons, and diagenetic fronts.

  17. A Comparison of Parameter Study Creation and Job Submission Tools

    NASA Technical Reports Server (NTRS)

    DeVivo, Adrian; Yarrow, Maurice; McCann, Karen M.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We consider the differences between the available general-purpose parameter study and job submission tools. These tools necessarily share many features, but frequently differ in the way they are designed and implemented. For this class of features, we will only briefly outline the essential differences. However, we will focus on the unique features which distinguish the ILab parameter study and job submission tool from other packages, and which make the ILab tool easier and more suitable for use in our research and engineering environment.

  18. Diagnostic Accuracy of Cone-beam Computed Tomography and Conventional Radiography on Apical Periodontitis: A Systematic Review and Meta-analysis.

    PubMed

    Leonardi Dutra, Kamile; Haas, Letícia; Porporatti, André Luís; Flores-Mir, Carlos; Nascimento Santos, Juliana; Mezzomo, Luis André; Corrêa, Márcio; De Luca Canto, Graziela

    2016-03-01

    Endodontic diagnosis depends on accurate radiographic examination. Assessment of the location and extent of apical periodontitis (AP) can influence treatment planning and subsequent treatment outcomes. Therefore, this systematic review and meta-analysis assessed the diagnostic accuracy of conventional radiography and cone-beam computed tomographic (CBCT) imaging on the discrimination of AP from no lesion. Eight electronic databases with no language or time limitations were searched. Articles in which the primary objective was to evaluate the accuracy (sensitivity and specificity) of any type of radiographic technique to assess AP in humans were selected. The gold standard was the histologic examination for actual AP (in vivo) or in situ visualization of bone defects for induced artificial AP (in vitro). Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v.5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark) and MetaDisc v.1.4. software (Unit of Clinical Biostatistics Team of the Ramón y Cajal Hospital, Madrid, Spain). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. Only 9 studies met the inclusion criteria and were subjected to a qualitative analysis. A meta-analysis was conducted on 6 of these articles. All of these articles studied artificial AP with induced bone defects. The accuracy values (area under the curve) were 0.96 for CBCT imaging, 0.73 for conventional periapical radiography, and 0.72 for digital periapical radiography. No evidence was found for panoramic radiography. Periapical radiographs (digital and conventional) reported good diagnostic accuracy on the discrimination of artificial AP from no lesions, whereas CBCT imaging showed excellent accuracy values. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  19. Accuracy of Continuous Glucose Monitoring During Three Closed-Loop Home Studies Under Free-Living Conditions.

    PubMed

    Thabit, Hood; Leelarathna, Lalantha; Wilinska, Malgorzata E; Elleri, Daniella; Allen, Janet M; Lubina-Solomon, Alexandra; Walkinshaw, Emma; Stadler, Marietta; Choudhary, Pratik; Mader, Julia K; Dellweg, Sibylle; Benesch, Carsten; Pieber, Thomas R; Arnolds, Sabine; Heller, Simon R; Amiel, Stephanie A; Dunger, David; Evans, Mark L; Hovorka, Roman

    2015-11-01

    Closed-loop (CL) systems modulate insulin delivery based on glucose levels measured by a continuous glucose monitor (CGM). Accuracy of the CGM affects CL performance and safety. We evaluated the accuracy of the Freestyle Navigator® II CGM (Abbott Diabetes Care, Alameda, CA) during three unsupervised, randomized, open-label, crossover home CL studies. Paired CGM and capillary glucose values (10,597 pairs) were collected from 57 participants with type 1 diabetes (41 adults [mean±SD age, 39±12 years; mean±SD hemoglobin A1c, 7.9±0.8%] recruited at five centers and 16 adolescents [mean±SD age, 15.6±3.6 years; mean±SD hemoglobin A1c, 8.1±0.8%] recruited at two centers). Numerical accuracy was assessed by absolute relative difference (ARD) and International Organization for Standardization (ISO) 15197:2013 15/15% limits, and clinical accuracy was assessed by Clarke error grid analysis. Total duration of sensor use was 2,002 days (48,052 h). Overall sensor accuracy for the capillary glucose range (1.1-27.8 mmol/L) showed mean±SD and median (interquartile range) ARD of 14.2±15.5% and 10.0% (4.5%, 18.4%), respectively. The lowest mean ARD was observed in the hyperglycemic range (9.8±8.8%). Over 95% of pairs were in combined Clarke error grid Zones A and B (A, 80.1%; B, 16.2%). Overall, 70.0% of the sensor readings satisfied ISO criteria. Mean ARD was consistent (12.3%; 95% of the values fall within ±3.7%) and did not differ between participants (P=0.06) within the euglycemic and hyperglycemic range, when CL is actively modulating insulin delivery. Consistent accuracy of the CGM within the euglycemic-hyperglycemic range using the Freestyle Navigator II was observed, supporting its use in home CL studies. Our results may contribute toward establishing normative CGM performance criteria for unsupervised home use of CL.
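    Numerical accuracy here rests on two simple per-pair computations; a sketch (the mmol/L threshold and bounds are our reading of the ISO 15197:2013 15/15% criterion, stated as an assumption, not taken from the paper) is:

```python
def ard_percent(cgm, ref):
    """Absolute relative difference between sensor and reference glucose, in %."""
    return abs(cgm - ref) / ref * 100.0

def within_iso_15_15(cgm, ref):
    """Assumed form of the ISO 15197:2013 15/15% criterion: within 0.83 mmol/L
    of references below 5.55 mmol/L, otherwise within 15%."""
    if ref < 5.55:
        return abs(cgm - ref) <= 0.83
    return ard_percent(cgm, ref) <= 15.0

assert ard_percent(5.0, 4.0) == 25.0
assert within_iso_15_15(10.0, 9.0)       # ~11% error, within 15%
assert not within_iso_15_15(6.0, 5.0)    # 1.0 mmol/L off a low reference
```

    The study's headline figures are then the mean/median of `ard_percent` over all pairs and the fraction of pairs for which `within_iso_15_15` holds.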

  20. Evaluation of Automatic Vehicle Location accuracy

    DOT National Transportation Integrated Search

    1999-01-01

    This study assesses the accuracy of the Automatic Vehicle Location (AVL) data provided for the buses of the Ann Arbor Transportation Authority with Global Positioning System (GPS) technology. In a sample of eighty-nine bus trips two kinds of accuracy...

  1. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulation in particle therapy. An application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport in the Monte Carlo simulation was checked by comparing the dose distribution in the gel dosimeter between simulation and experiment.

  2. 'Scalp coordinate system': a new tool to accurately describe cutaneous lesions on the scalp: a pilot study.

    PubMed

    Alexander, William; Miller, George; Alexander, Preeya; Henderson, Michael A; Webb, Angela

    2018-06-12

    Skin cancers are extremely common and the incidence increases with age. Care for patients with multiple or complicated skin cancers often requires multidisciplinary input involving a general practitioner, dermatologist, plastic surgeon and/or radiation oncologist. Timely, efficient care of these patients relies on precise and effective communication between all parties. Until now, descriptions of the location of lesions on the scalp have been inaccurate, which can lead to errors such as the incorrect lesion being excised or biopsied. A novel technique for accurately and efficiently describing the location of lesions on the scalp, using a coordinate system, is described (the 'scalp coordinate system' (SCS)). This method was tested in a pilot study by clinicians typically involved in the care of patients with cutaneous malignancies. A mannequin scalp was used in the study. The SCS significantly improved accuracy in both describing and locating lesions on the scalp. This improved accuracy comes at a minor time cost. The direct and indirect costs arising from poor communication between medical subspecialties (particularly relevant in surgical procedures) are immense. An effective tool used by all involved clinicians is long overdue, particularly in patients with scalps showing extensive actinic damage, scarring or innocuous biopsy sites. The SCS provides the opportunity to improve outcomes for both the patient and the healthcare system. © 2018 Royal Australasian College of Surgeons.

  3. Trade-off study and computer simulation for assessing spacecraft pointing accuracy and stability capabilities

    NASA Astrophysics Data System (ADS)

    Algrain, Marcelo C.; Powers, Richard M.

    1997-05-01

    A case study, written in a tutorial manner, is presented where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. Models for major system components are described. Among them are spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are the desired attitude angles and rate set points. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.

  4. Accuracy of the surface electromyography RMS processing for the diagnosis of myogenous temporomandibular disorder.

    PubMed

    Berni, Kelly Cristina dos Santos; Dibai-Filho, Almir Vieira; Pires, Paulo Fernandes; Rodrigues-Bigaton, Delaine

    2015-08-01

    Due to the multifactor etiology of temporomandibular disorder (TMD), the precise diagnosis remains a matter of debate and validated diagnostic tools are needed. The aim was to determine the accuracy of surface electromyography (sEMG) activity, assessed in the amplitude domain by the root mean square (RMS), in the diagnosis of TMD. One hundred twenty-three volunteers were evaluated using the Research Diagnostic Criteria for Temporomandibular Disorders and distributed into two groups: women with myogenous TMD (n=80) and women without TMD (n=43). The volunteers were then submitted to sEMG evaluation of the anterior temporalis, masseter and suprahyoid muscles at rest and during maximum voluntary teeth clenching (MVC) on parafilm. The accuracy, sensitivity and specificity of the muscle activity were analyzed. Differences between groups were found in all muscles analyzed at rest, as well as in the masseter and suprahyoid muscles during MVC on parafilm. Moderate accuracy (AUC: 0.74-0.84) of the RMS sEMG was found in all muscles regarding the diagnosis of TMD at rest, and in the suprahyoid muscles during MVC on parafilm. Sensitivity ranged from 71.3% to 80% and specificity from 60.5% to 76.6%. In contrast, RMS sEMG did not exhibit acceptable accuracy in the other masticatory muscles during MVC on parafilm. It was concluded that RMS sEMG is a complementary tool for the clinical diagnosis of myogenous TMD. Copyright © 2015 Elsevier Ltd. All rights reserved.
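    The amplitude feature used here is simply the root mean square of each sEMG sample window; a minimal sketch:

```python
import numpy as np

def rms(window):
    """Root mean square amplitude of an sEMG sample window."""
    x = np.asarray(window, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

assert rms([3.0, -4.0]) == 12.5 ** 0.5   # sqrt((9 + 16) / 2)
```

    Diagnostic accuracy is then obtained by thresholding the per-muscle RMS values and reading sensitivity, specificity, and AUC off the resulting ROC curve.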

  5. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD=1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21 mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system

  6. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system
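    The 'due-to-insertion' approximation assumes the two error components are orthogonal (independent), so it follows from the Pythagorean relation; with the rounded published means this gives about 2.14 mm (the paper reports 2.13 mm, presumably computed from unrounded values):

```python
import math

def due_to_insertion_error(overall_mm, before_insertion_mm):
    """Residual needle-tissue interaction error, assuming the before-insertion
    and due-to-insertion components are orthogonal."""
    return math.sqrt(overall_mm ** 2 - before_insertion_mm ** 2)

print(round(due_to_insertion_error(2.5, 1.3), 2))  # 2.14 from the rounded means
```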

  7. The importance of including dynamic social networks when modeling epidemics of airborne infections: does increasing complexity increase accuracy?

    PubMed

    Blower, Sally; Go, Myong-Hyun

    2011-07-19

    Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.

  8. Accuracy Evaluation of a Stereolithographic Surgical Template for Dental Implant Insertion Using 3D Superimposition Protocol.

    PubMed

    Cristache, Corina Marilena; Gurbanescu, Silviu

    2017-01-01

    The aim of this study was to evaluate the accuracy of a stereolithographic template, with the sleeve structure incorporated into the design, for computer-guided dental implant insertion in partially edentulous patients. Sixty-five implants were placed in twenty-five consecutive patients with a stereolithographic surgical template. After surgery, a digital impression was taken and the 3D inaccuracy of implant position at the entry point, at the apex, and in angle deviation was measured using inspection software. The Mann-Whitney U test was used to compare accuracy between maxillary and mandibular surgical guides, with a p value < .05 considered significant. The mean (and standard deviation) 3D error was 0.798 mm (±0.52) at the entry point and 1.17 mm (±0.63) at the implant apex, and the mean angular deviation was 2.34° (±0.85°). A statistically significant reduction in 3D error was observed in the mandible compared with the maxilla at the entry point (p = .037), at the implant apex (p = .008), and in angular deviation (p = .030). The surgical template proved highly accurate for implant insertion. Within the limitations of the present study, the protocol of comparing a digital file (the treatment plan) with a postinsertion digital impression may be considered a useful procedure for assessing surgical template accuracy while avoiding the radiation exposure of a postoperative CBCT scan.
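
    The maxilla-versus-mandible comparison uses the Mann-Whitney U test, which operates on rank sums rather than raw means. As a rough illustration (the deviation values below are invented, not the study's data), a minimal pure-Python computation of the U statistic:

```python
def average_ranks(values):
    """1-based ranks for all values, averaging ranks within tie groups."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """U statistic for sample x relative to sample y."""
    ranks = average_ranks(list(x) + list(y))
    r1 = sum(ranks[: len(x)])  # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2.0

# Hypothetical entry-point deviations (mm) for maxillary vs. mandibular guides.
maxilla = [0.9, 1.1, 0.8, 1.3, 1.0]
mandible = [0.6, 0.7, 0.5, 0.9, 0.4]
print(mann_whitney_u(maxilla, mandible))  # → 23.5 (out of a maximum of 25)
```

    In practice one would use a statistics package (e.g., SciPy's `mannwhitneyu`), which also supplies the p value.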

  9. High-accuracy microassembly by intelligent vision systems and smart sensor integration

    NASA Astrophysics Data System (ADS)

    Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael

    2003-10-01

    Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. Assembly processes in particular are crucial operations during the production of microsystems. With large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide range of small- and medium-sized batch production for which conventional automated solutions are rarely profitable. For assembly at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design: actuators such as grippers, dispensers or other process tools can easily be attached via a special tool-changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators such as high-accuracy robots or linear motors. A fiber-optic sensor integrated in the dispensing module measures, without contact, the clearance between the dispensing needle and the substrate. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be achieved. A laser system is used for manufacturing processes such as soldering.

  10. Open environments to support systems engineering tool integration: A study using the Portable Common Tool Environment (PCTE)

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.

    1993-01-01

    A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of Emeraude environment over the project time frame is summarized, and several related areas for future research are summarized.

  11. Near-infrared spectroscopy as a complementary age grading and species identification tool for African malaria vectors

    USDA-ARS?s Scientific Manuscript database

    Near-infrared spectroscopy (NIRS) was recently applied to age-grade and differentiate laboratory-reared Anopheles gambiae sensu stricto and Anopheles arabiensis, sibling species of Anopheles gambiae sensu lato. In this study, we report further on the accuracy of this tool in simultaneously estimating ...

  12. Different predictors of multiple-target search accuracy between nonprofessional and professional visual searchers.

    PubMed

    Biggs, Adam T; Mitroff, Stephen R

    2014-01-01

    Visual search, locating target items among distractors, underlies daily activities ranging from critical tasks (e.g., looking for dangerous objects during security screening) to commonplace ones (e.g., finding your friends in a crowded bar). Both professional and nonprofessional individuals conduct visual searches, and the present investigation is aimed at understanding how they perform similarly and differently. We administered a multiple-target visual search task to both professional (airport security officers) and nonprofessional participants (members of the Duke University community) to determine how search abilities differ between these populations and what factors might predict accuracy. There were minimal overall accuracy differences, although the professionals were generally slower to respond. However, the factors that predicted accuracy varied drastically between groups; variability in search consistency (how similarly an individual searched from trial to trial in terms of speed) best explained accuracy for professional searchers (more consistent professionals were more accurate), whereas search speed (how long an individual took to complete a search when no targets were present) best explained accuracy for nonprofessional searchers (slower nonprofessionals were more accurate). These findings suggest that professional searchers may utilize different search strategies from those of nonprofessionals, and that search consistency, in particular, may provide a valuable tool for enhancing professional search accuracy.

  13. Diagnosing Chronic Pancreatitis: Comparison and Evaluation of Different Diagnostic Tools.

    PubMed

    Issa, Yama; van Santvoort, Hjalmar C; van Dieren, Susan; Besselink, Marc G; Boermeester, Marja A; Ahmed Ali, Usama

    2017-10-01

    This study aims to compare the M-ANNHEIM, Büchler, and Lüneburg diagnostic tools for chronic pancreatitis (CP). A cross-sectional analysis of the development of CP was performed in a prospectively collected multicenter cohort of 669 patients after a first episode of acute pancreatitis. We compared the individual components of the M-ANNHEIM, Büchler, and Lüneburg tools and the agreement between tools, and estimated diagnostic accuracy using Bayesian latent-class analysis. A total of 669 patients with acute pancreatitis, followed up for a median period of 57 (interquartile range, 42-70) months, were included. Chronic pancreatitis was diagnosed in 50 patients (7%), 59 patients (9%), and 61 patients (9%) by the M-ANNHEIM, Lüneburg, and Büchler tools, respectively. The overall agreement between these tools was substantial (κ = 0.75). Differences between the tools regarding the following criteria led to significant changes in the total number of diagnoses of CP: abdominal pain, recurrent pancreatitis, moderate to marked ductal lesions, endocrine and exocrine insufficiency, pancreatic calcifications, and pancreatic pseudocysts. The Büchler tool had the highest sensitivity (94%), followed by the M-ANNHEIM (87%) and the Lüneburg tool (81%). Differences between diagnostic tools for CP are mainly attributable to the presence of clinical symptoms, endocrine insufficiency, and certain morphological complications.
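
    The "substantial" κ = 0.75 is a chance-corrected agreement coefficient. As an illustration of how such a coefficient is computed for one pair of tools (Cohen's kappa for two raters; the study's three-way agreement may use a multi-rater variant, and the diagnosis labels below are invented):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_chance = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n) for label in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical CP / no-CP calls by two diagnostic tools on ten patients.
tool_1 = ["CP", "CP", "no", "no", "no", "CP", "no", "no", "no", "no"]
tool_2 = ["CP", "CP", "no", "no", "no", "no", "no", "no", "no", "no"]
print(round(cohens_kappa(tool_1, tool_2), 2))  # → 0.74
```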

  14. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  15. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  16. Children's estimates of food portion size: the development and evaluation of three portion size assessment tools for use with children.

    PubMed

    Foster, E; Matthews, J N S; Lloyd, J; Marshall, L; Mathers, J C; Nelson, M; Barton, K L; Wrieden, W L; Cornelissen, P; Harris, J; Adamson, A J

    2008-01-01

    A number of methods have been developed to assist subjects in providing an estimate of portion size, but their application in improving portion size estimation by children has not been investigated systematically. The aim was to develop portion size assessment tools for use with children and to assess the accuracy of children's estimates of portion size using the tools. The tools were food photographs, food models and an interactive portion size assessment system (IPSAS). Children (n = 201), aged 4-16 years, were supplied with known quantities of food to eat in school. Food leftovers were weighed. Children estimated the amount of each food using each tool, 24 h after consuming the food. The age-specific portion sizes represented were based on portion sizes consumed by children in a national survey. Significant differences were found between the accuracy of estimates using the three tools. Children of all ages performed well using the IPSAS and food photographs. The accuracy and precision of estimates made using the food models were poor. For all tools, estimates of the amount of food served were more accurate than estimates of the amount consumed. Issues relating to the reporting of leftover food, which affect estimates of the amounts actually consumed, require further study. The IPSAS has shown potential for the assessment of dietary intake in children. Before practical application in the assessment of children's dietary intake, the tool would need to be expanded to cover a wider range of foods and to be validated in a 'real-life' situation.
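
    Accuracy of portion-size estimates of this kind is commonly summarised as the signed percentage difference between the estimated and the weighed amount. A minimal sketch (the gram values are invented, not the study's data):

```python
def percent_errors(estimated_g, weighed_g):
    """Signed percentage error of each estimate against the weighed amount."""
    return [100.0 * (est - true) / true for est, true in zip(estimated_g, weighed_g)]

def mean_percent_error(estimated_g, weighed_g):
    """Mean signed percentage error; a value near 0 means no systematic bias."""
    errors = percent_errors(estimated_g, weighed_g)
    return sum(errors) / len(errors)

# Hypothetical estimates (g) of served portions vs. the weighed amounts.
print(mean_percent_error([110.0, 85.0, 105.0], [100.0, 100.0, 100.0]))  # → 0.0
```

    A mean near zero can still hide large individual errors, which is why precision (the spread of the individual errors) is assessed alongside accuracy.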

  17. Assessment of Near-Field Sonic Boom Simulation Tools

    NASA Technical Reports Server (NTRS)

    Casper, J. H.; Cliff, S. E.; Thomas, S. D.; Park, M. A.; McMullen, M. S.; Melton, J. E.; Durston, D. A.

    2008-01-01

    A recent study for the Supersonics Project, within the National Aeronautics and Space Administration, has been conducted to assess current in-house capabilities for the prediction of near-field sonic boom. Such capabilities are required to simulate the highly nonlinear flow near an aircraft, wherein a sonic-boom signature is generated. There are many available computational fluid dynamics codes that could be used to provide the near-field flow for a sonic boom calculation. However, such codes have typically been developed for aerodynamic configuration applications, for which an efficiently generated computational mesh is usually not optimal for sonic boom prediction. Preliminary guidelines are suggested to characterize a state-of-the-art sonic boom prediction methodology. The available simulation tools that are best suited for incorporation into that methodology are identified; preliminary test cases are presented in support of the selection. During this phase of process definition and tool selection, parallel research was conducted in an attempt to establish criteria that link the properties of a computational mesh to the accuracy of a sonic boom prediction. Such properties include sufficient grid density near shocks and within the zone of influence, which are achieved by adaptation and mesh refinement strategies. Prediction accuracy is validated by comparison with wind tunnel data.

  18. On-line Tool Wear Detection on DCMT070204 Carbide Tool Tip Based on Noise Cutting Audio Signal using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Prasetyo, T.; Amar, S.; Arendra, A.; Zam Zami, M. K.

    2018-01-01

    This study develops an on-line detection system to predict wear of the DCMT070204 tool tip during cutting of a workpiece. The machine used in this research is a CNC ProTurn 9000 cutting ST42 steel cylinders. The audio signal was captured using a microphone placed on the tool post and recorded in Matlab at a sampling rate of 44.1 kHz with a frame size of 1024 samples. The data set comprises 110 recordings of the audio signal captured while cutting with a normal tool tip and with a worn one. Signal features were then extracted in the frequency domain using the Fast Fourier Transform, and features were selected based on correlation analysis. Tool wear classification was performed using an artificial neural network with the 33 selected input features, trained with the back-propagation method. Classification performance testing yielded an accuracy of 74%.
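
    The frequency-domain feature step described above can be sketched as follows (a NumPy stand-in; the 5 kHz test tone and the Hann window are our illustrative choices, not details from the paper):

```python
import numpy as np

FS = 44_100   # sampling rate (Hz), as in the study
FRAME = 1024  # samples per frame, as in the study

def spectral_features(frame, fs=FS):
    """Magnitude spectrum of one audio frame: the FFT-based feature extraction step."""
    window = np.hanning(len(frame))  # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return freqs, spectrum

# Synthetic stand-in for cutting audio: a 5 kHz tone plus background noise.
rng = np.random.default_rng(42)
t = np.arange(FRAME) / FS
frame = np.sin(2 * np.pi * 5000.0 * t) + 0.1 * rng.standard_normal(FRAME)
freqs, spectrum = spectral_features(frame)
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant component near {peak_hz:.0f} Hz")
```

    In the study, features derived from such spectra were filtered by correlation analysis down to the 33 network inputs.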

  19. Accuracy Study of the Space-Time CE/SE Method for Computational Aeroacoustics Problems Involving Shock Waves

    NASA Technical Reports Server (NTRS)

    Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    1999-01-01

    The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem and to investigate the order of accuracy of the numerical schemes. The linear model problem, governed by the 1-D scalar convection equation, the sound-shock interaction problem, governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy becomes 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is on the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.
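
    Order-of-accuracy conclusions like the one above are typically drawn by computing the observed order from errors on two grid resolutions, p = log(e_coarse / e_fine) / log(r). A minimal sketch with illustrative error values (not the paper's numbers):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grids refined by ratio r."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# A designed 2nd-order scheme roughly quarters its error when h is halved,
# while a moving shock degrades it toward 1st order (illustrative numbers):
print(round(observed_order(4.0e-3, 1.0e-3), 2))  # smooth flow: ≈ 2.0
print(round(observed_order(2.0e-3, 1.0e-3), 2))  # with a moving shock: ≈ 1.0
```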

  20. Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.

    PubMed

    Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru

    2013-01-01

    Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
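
    The noise-injection step described above amounts to drawing Poisson-distributed quanta at a chosen incident level. A minimal NumPy sketch (the flat-field value and dose scale are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_noisy_image(quantum_map, dose_scale):
    """Simulate quantum (Poisson) noise at a reduced incident-quantum level.

    quantum_map: expected incident quanta per pixel (from the pixel-value
    conversion function); dose_scale < 1 models a lower exposure dose.
    """
    noisy_quanta = rng.poisson(quantum_map * dose_scale)
    return noisy_quanta / dose_scale  # rescale back to the original signal level

flat = np.full((64, 64), 10_000.0)  # flat field: 10,000 quanta per pixel
full_dose = simulate_noisy_image(flat, 1.0)
low_dose = simulate_noisy_image(flat, 0.1)  # 10x fewer incident quanta
print(low_dose.std() > full_dose.std())  # relative noise grows as dose drops
```

    Marker tracking can then be run on images simulated at each dose level to map noise against tracking error, as the study does.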

  1. The effect of stimulus strength on the speed and accuracy of a perceptual decision.

    PubMed

    Palmer, John; Huk, Alexander C; Shadlen, Michael N

    2005-05-02

    Both the speed and the accuracy of a perceptual judgment depend on the strength of the sensory stimulation. When stimulus strength is high, accuracy is high and response time is fast; when stimulus strength is low, accuracy is low and response time is slow. Although the psychometric function is well established as a tool for analyzing the relationship between accuracy and stimulus strength, the corresponding chronometric function for the relationship between response time and stimulus strength has not received as much consideration. In this article, we describe a theory of perceptual decision making based on a diffusion model. In it, a decision is based on the additive accumulation of sensory evidence over time to a bound. Combined with simple scaling assumptions, the proportional-rate and power-rate diffusion models predict simple analytic expressions for both the chronometric and psychometric functions. In a series of psychophysical experiments, we show that this theory accounts for response time and accuracy as a function of both stimulus strength and speed-accuracy instructions. In particular, the results demonstrate a close coupling between response time and accuracy. The theory is also shown to subsume the predictions of Piéron's Law, a power function dependence of response time on stimulus strength. The theory's analytic chronometric function allows one to extend theories of accuracy to response time.
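
    For the proportional-rate diffusion model the article describes, the psychometric and chronometric functions take closed forms in the sensitivity k, the normalized bound A, and the residual time t_R. A sketch with invented parameter values (not fits from the paper):

```python
import math

def p_correct(c, k, a):
    """Psychometric function: P(correct) = 1 / (1 + exp(-2 k A C))."""
    return 1.0 / (1.0 + math.exp(-2.0 * k * a * c))

def mean_rt(c, k, a, t_r):
    """Chronometric function: RT = (A / (k C)) * tanh(k A C) + t_R."""
    if c == 0.0:
        return a * a + t_r  # limit of the expression as C -> 0
    return (a / (k * c)) * math.tanh(k * a * c) + t_r

# Illustrative parameters: stronger stimuli give higher accuracy and faster RTs.
K, A, T_R = 15.0, 0.8, 0.35
for c in (0.032, 0.128, 0.512):
    print(f"C={c:.3f}  P(correct)={p_correct(c, K, A):.2f}  RT={mean_rt(c, K, A, T_R):.2f} s")
```

    The close coupling the authors report falls out of the model: both functions depend on stimulus strength only through the product k·A·C.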

  2. A bibliometric analysis of evaluative medical education studies: characteristics and indexing accuracy.

    PubMed

    Sampson, Margaret; Horsley, Tanya; Doja, Asif

    2013-03-01

    To determine the characteristics of medical education studies published in general and internal medicine (GIM) and medical education journals, and to analyze the accuracy of their indexing. The authors identified the five GIM and five medical education journals that published the most articles indexed in MEDLINE as medical education during January 2001 to January 2010. They searched Ovid MEDLINE for evaluative medical education studies published in these journals during this period and classified them as quantitative or qualitative studies according to MEDLINE indexing. They also examined themes and learner levels targeted. Using a random sample of records, they assessed the accuracy of study-type indexing. Of 4,418 records retrieved, 3,853 (87.2%) were from medical education journals and 565 (12.3%) were from GIM journals. Qualitative studies and program evaluations were more prevalent within medical education journals, whereas GIM journals published a higher proportion of clinical trials and systematic reviews (χ² = 74.28, df = 3, P < .001). Medical education journals had a concentration of studies targeting medical students, whereas GIM journals had a concentration targeting residents; themes were similar. The authors confirmed that 170 (56.7%) of the 300 sampled articles were correctly classified in MEDLINE as evaluative studies. The majority of the identified evaluative studies were published in medical education journals, confirming the integrity of medical education as a specialty. Findings concerning the study types published in medical education versus GIM journals are important for medical education researchers who seek to publish outside the field's specialty journals.

  3. Assessment of dysglycemia risk in the Kitikmeot region of Nunavut: using the CANRISK tool

    PubMed Central

    Ying, Jiang; Susan, Rogers Van Katwyk; Yang, Mao; Heather, Orpana; Gina, Agarwal; Margaret, de Groh; Monique, Skinner; Robyn, Clarke

    2017-01-01

    Abstract Introduction: The Public Health Agency of Canada adapted a Finnish diabetes screening tool (FINDRISC) to create a tool (CANRISK) tailored to Canada’s multi-ethnic population. CANRISK was developed using data collected in seven Canadian provinces. In an effort to extend the applicability of CANRISK to northern territorial populations, we completed a study with the mainly Inuit population in the Kitikmeot region of Nunavut. Methods: We obtained CANRISK questionnaires, physical measures and blood samples from participants in five Nunavut communities in Kitikmeot. We used logistic regression to test model fit using the original CANRISK risk factors for dysglycemia (prediabetes and diabetes). Dysglycemia was assessed using fasting plasma glucose (FPG) alone and/or an oral glucose tolerance test. We generated participants’ CANRISK scores to test the functioning of this tool in the Inuit population. Results: A total of 303 individuals participated in the study. Half were aged less than 45 years, two-thirds were female and 84% were Inuit. A total of 18% had prediabetes, and an additional 4% had undiagnosed diabetes. The odds of having dysglycemia rose exponentially with age, while the relationship with BMI was U-shaped. Compared with lab test results, and using a cut-off point of 32, the CANRISK tool achieved a sensitivity of 61%, a specificity of 66%, a positive predictive value of 34% and an accuracy rate of 65%. Conclusion: The CANRISK tool achieved a similar accuracy in detecting dysglycemia in this mainly Inuit population as it did in a multi-ethnic sample of Canadians. We found the CANRISK tool to be adaptable to the Kitikmeot region, and more generally to Nunavut. PMID:28402800
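
    The four reported rates derive directly from confusion-matrix counts. A minimal sketch (the counts below are invented to land near the reported rates; the abstract gives only the rates themselves):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, positive predictive value, and accuracy
    from the confusion-matrix counts of a screening tool."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),  # true positives among the dysglycemic
        "specificity": tn / (tn + fp),  # true negatives among the normoglycemic
        "ppv": tp / (tp + fp),          # positives that are truly dysglycemic
        "accuracy": (tp + tn) / total,
    }

# Hypothetical counts for a CANRISK-style cut-off.
for name, value in screening_metrics(tp=40, fp=80, tn=160, fn=25).items():
    print(f"{name}: {value:.0%}")
```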

  4. EM-navigated catheter placement for gynecologic brachytherapy: an accuracy study

    NASA Astrophysics Data System (ADS)

    Mehrtash, Alireza; Damato, Antonio; Pernelle, Guillaume; Barber, Lauren; Farhat, Nabgha; Viswanathan, Akila; Cormack, Robert; Kapur, Tina

    2014-03-01

    Gynecologic malignancies, including cervical, endometrial, ovarian, vaginal and vulvar cancers, cause significant mortality in women worldwide. The standard care for many primary and recurrent gynecologic cancers consists of chemoradiation followed by brachytherapy. In high dose rate (HDR) brachytherapy, intracavitary applicators and/or interstitial needles are placed directly inside the cancerous tissue so as to provide catheters to deliver high doses of radiation. Although technology for the navigation of catheters and needles is well developed for procedures such as prostate biopsy, brain biopsy, and cardiac ablation, it is notably lacking for gynecologic HDR brachytherapy. Using a benchtop study that closely mimics the clinical interstitial gynecologic brachytherapy procedure, we developed a method for evaluating the accuracy of image-guided catheter placement. Future bedside translation of this technology offers the potential benefit of maximizing tumor coverage during catheter placement while avoiding damage to the adjacent organs, for example the bladder, rectum and bowel. In the study, two independent experiments were performed on a phantom model to evaluate the targeting accuracy of an electromagnetic (EM) tracking system. The procedure was carried out using a laptop computer (2.1 GHz Intel Core i7, 8 GB RAM, Windows 7 64-bit), an Aurora EM tracking system with a 1.3 mm diameter 6-DOF sensor, and 6F (2 mm) brachytherapy catheters inserted through a Syed-Neblett applicator. The 3D Slicer and PLUS open-source software packages were used to develop the system. The mean targeting error was less than 2.9 mm, which is comparable to the targeting errors of commercial clinical navigation systems.
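
    Targeting error in a benchtop study of this kind is typically the Euclidean distance between each planned target and the tracked tip position. A minimal sketch (the 3D coordinates are invented, not the study's data):

```python
import math

def targeting_errors(planned_mm, measured_mm):
    """Euclidean distance (mm) between each planned target and measured tip position."""
    return [math.dist(p, m) for p, m in zip(planned_mm, measured_mm)]

# Hypothetical planned vs. EM-tracked catheter tip positions (mm).
planned = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0), (12.0, 18.0, 28.0)]
measured = [(11.0, 20.0, 30.0), (15.0, 27.0, 35.0), (12.0, 18.0, 29.5)]
errors = targeting_errors(planned, measured)
print(f"mean targeting error: {sum(errors) / len(errors):.2f} mm")  # → 1.50 mm
```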

  5. The accuracy of colonoscopic localisation of colorectal tumours: a prospective, multi-centred observational study.

    PubMed

    Johnstone, M S; Moug, S J

    2014-05-01

    Colonoscopy is essential for accurate pre-operative colorectal tumour localisation, but its accuracy remains undetermined due to limitations of previous work. This study aimed to establish the accuracy of colonoscopic localisation and to determine how frequently inaccuracy results in altered surgical management. A prospective, multi-centred, powered observational study recruited 79 patients with colorectal tumours who underwent curative surgical resection. Patient and colonoscopic factors were recorded. Pre-operative colonoscopic and radiological lesion localisations were compared with intra-operative localisation using pre-defined anatomical bowel segments to determine accuracy, with changes in planned surgical management documented. Colonoscopy accurately located the colorectal tumour in 64/79 patients (81%). Five of the 15 inaccurately located cases required an on-table alteration in planned surgical management. Pre-operative imaging was unable to visualise the primary tumour in 23.1% of cases, a finding that was more prevalent amongst bowel-screening patients than symptomatic patients (45.8% vs. 13%; p = 0.003). Colonoscopic lesion localisation was inaccurate in 19.0% of cases, with errors occurring throughout the colon and a change in on-table surgical management in 6.3%. With CT unable to visualise lesions in just under a quarter of cases, particularly in the screening population, preoperative localisation is heavily reliant on colonoscopy.
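
    A binomial confidence interval makes the headline 81% (64/79) easier to interpret; the Wilson score interval is one standard choice (the interval below is our calculation, not one reported by the study):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_ci(64, 79)  # 64 of 79 tumours accurately located
print(f"accuracy 81% (95% CI {lo:.0%} to {hi:.0%})")  # roughly 71% to 88%
```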

  6. On the accuracy of personality judgment: a realistic approach.

    PubMed

    Funder, D C

    1995-10-01

    The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.

  7. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    NASA Astrophysics Data System (ADS)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, BSM (bivariate statistical modeler), is proposed. Three popular BSA techniques, the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. The tool is programmed in Python and provides a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.

  8. An integrated user-friendly ArcMAP tool for bivariate statistical modelling in geoscience applications

    NASA Astrophysics Data System (ADS)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.

    2015-03-01

    Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, the bivariate statistical modeler (BSM), is proposed. Three popular BSA techniques, namely the frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. The tool is programmed in Python and provides a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
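
    Of the three BSA techniques, the frequency ratio is the simplest: for each factor class it divides the class's share of hazard pixels by its share of all pixels. A minimal sketch (the pixel counts are invented):

```python
def frequency_ratio(class_hazard, class_pixels, total_hazard, total_pixels):
    """Frequency ratio for one factor class:
    (share of hazard pixels in the class) / (share of all pixels in the class).
    FR > 1 means the class is positively associated with the hazard."""
    return (class_hazard / total_hazard) / (class_pixels / total_pixels)

# Hypothetical slope-class counts from a landslide inventory raster.
print(frequency_ratio(class_hazard=25, class_pixels=1000,
                      total_hazard=100, total_pixels=8000))  # → 2.0
```

    A tool like the one proposed automates exactly this kind of per-class tabulation across many factor rasters, which is tedious and error-prone in a spreadsheet.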

  9. Validation of an explanatory tool for data-fused displays for high-technology future aircraft

    NASA Astrophysics Data System (ADS)

    Fletcher, Georgina C. L.; Shanks, Craig R.; Selcon, Stephen J.

    1996-05-01

    As the number of sensor and data sources in the military cockpit increases, pilots will suffer high levels of workload which could result in reduced performance and the loss of situational awareness. A DRA research program has been investigating the use of data-fused displays in decision support and has developed and laboratory-tested an explanatory tool for displaying information in air combat scenarios. The tool has been designed to provide pictorial explanations of data that maintain situational awareness by involving the pilot in the hostile aircraft threat assessment task. This paper reports a study carried out to validate the success of the explanatory tool in a realistic flight simulation facility. Aircrew were asked to perform a threat assessment task, either with or without the explanatory tool providing information in the form of missile launch success zone envelopes, while concurrently flying a waypoint course within set flight parameters. The results showed that there was a significant improvement (p less than 0.01) in threat assessment accuracy of 30% when using the explanatory tool. This threat assessment performance advantage was achieved without a trade-off with flying task performance. Situational awareness measures showed no general differences between the explanatory and control conditions, but significant learning effects suggested that the explanatory tool makes the task initially more intuitive and hence less demanding on the pilots' attentional resources. The paper concludes that DRA's data-fused explanatory tool is successful at improving threat assessment accuracy in a realistic simulated flying environment, and briefly discusses the requirements for further research in the area.

  10. Accuracy of Blood Pressure-to-Height Ratio to Define Elevated Blood Pressure in Children and Adolescents: The CASPIAN-IV Study.

    PubMed

    Kelishadi, Roya; Bahreynian, Maryam; Heshmat, Ramin; Motlagh, Mohammad Esmail; Djalalinia, Shirin; Naji, Fatemeh; Ardalan, Gelayol; Asayesh, Hamid; Qorbani, Mostafa

    2016-02-01

    The aim of this study was to propose a simple practical diagnostic criterion for pre-hypertension (pre-HTN) and hypertension (HTN) in the pediatric age group. This study was conducted on a nationally representative sample of 14,880 students, aged 6-18 years. HTN and pre-HTN were defined as systolic blood pressure (SBP) and/or diastolic blood pressure (DBP) ≥95th and in the 90th-95th percentile for age, gender, and height, respectively. By using the area under the curve (AUC) of receiver operating characteristic curves, we estimated the diagnostic accuracy of two indexes, the SBP-to-height ratio (SBPHR) and the DBP-to-height ratio (DBPHR), in defining pre-HTN and HTN. Overall, SBPHR performed relatively well in classifying subjects as having HTN (AUC 0.80-0.85) and pre-HTN (AUC 0.84-0.90). Likewise, DBPHR performed relatively well in classifying subjects as having HTN (AUC 0.90-0.97) and pre-HTN (AUC 0.70-0.83). The two indexes SBPHR and DBPHR are considered valid, simple, inexpensive, and accurate tools to diagnose pre-HTN and HTN in the pediatric age group.
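    The index itself is trivial to compute: blood pressure in mmHg divided by height in cm, screened against a cut-off derived from ROC analysis. A minimal sketch follows; the threshold is purely hypothetical (the study derives its own age- and sex-specific optimal cut-offs):

```python
def sbphr(sbp_mmhg, height_cm):
    """Systolic blood pressure-to-height ratio (SBPHR)."""
    return sbp_mmhg / height_cm

# Hypothetical screening threshold, for illustration only
THRESHOLD = 0.75

def screen_hypertension(sbp_mmhg, height_cm, threshold=THRESHOLD):
    """Flag a child for follow-up when SBPHR meets the cut-off."""
    return sbphr(sbp_mmhg, height_cm) >= threshold

print(sbphr(120, 150))                 # → 0.8
print(screen_hypertension(120, 150))   # → True
print(screen_hypertension(100, 150))   # → False
```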

  11. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and in their accuracy levels. The Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method. © 2014 ARVO.

  12. Managing complex research datasets using electronic tools: a meta-analysis exemplar.

    PubMed

    Brown, Sharon A; Martin, Ellen E; Garcia, Theresa J; Winter, Mary A; García, Alexandra A; Brown, Adama; Cuevas, Heather E; Sumlin, Lisa L

    2013-06-01

    Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, for example, EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process as well as enhancing communication among research team members. The purpose of this article is to describe the electronic processes designed, using commercially available software, for an extensive, quantitative model-testing meta-analysis. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to decide which electronic tools to use, determine how these tools would be used, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the conduct of this complex meta-analysis and in enhancing communication and document sharing among research team members.

  13. Configuration optimization and experimental accuracy evaluation of a bone-attached, parallel robot for skull surgery.

    PubMed

    Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias

    2016-03-01

    Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to optimize the configuration of the mechanism patient-specifically with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimens. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimens.

  14. The Accuracy of INECO Frontal Screening in the Diagnosis of Executive Dysfunction in Frontotemporal Dementia and Alzheimer Disease.

    PubMed

    Bahia, Valéria S; Cecchini, Mário A; Cassimiro, Luciana; Viana, Rene; Lima-Silva, Thais B; de Souza, Leonardo Cruz; Carvalho, Viviane Amaral; Guimarães, Henrique C; Caramelli, Paulo; Balthazar, Márcio L F; Damasceno, Benito; Brucki, Sônia M D; Nitrini, Ricardo; Yassuda, Mônica S

    2018-05-04

    Executive dysfunction is a common symptom in neurodegenerative disorders, and easy-to-apply screening tools are needed to identify it. The aims of the present study were to examine some of the psychometric characteristics of the Brazilian version of the INECO frontal screening (IFS), to investigate its accuracy in diagnosing executive dysfunction in dementia, and to assess its accuracy in differentiating Alzheimer disease (AD) from the behavioral variant of frontotemporal dementia (bvFTD). Patients diagnosed with bvFTD (n=18) and AD (n=20), and 15 healthy controls, completed a neuropsychological battery, the Neuropsychiatric Inventory, the Cornell Scale for Depression in Dementia, the Clinical Dementia Rating, and the IFS. The IFS had acceptable internal consistency (α=0.714) and was significantly correlated with general cognitive measures and with neuropsychological tests. The IFS had adequate accuracy in differentiating patients with dementia from healthy controls (AUC=0.768, cutoff=19.75, sensitivity=0.80, specificity=0.63), but low accuracy in differentiating bvFTD from AD (AUC=0.594, cutoff=16.75, sensitivity=0.667, specificity=0.600). The present study suggests that the IFS may be used to screen for executive dysfunction in dementia. Nonetheless, it should be used with caution in the differential diagnosis between AD and bvFTD.
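    The internal-consistency figure reported above (α=0.714) is Cronbach's alpha, computed from the per-item score variances and the variance of the total score: α = k/(k−1) · (1 − Σσ²ᵢ/σ²_total). A minimal sketch with made-up item responses (not the IFS data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha. item_scores: one list of scores per item,
    with respondents in the same order across items."""
    k = len(item_scores)
    sum_item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]  # per-respondent total
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical responses: 3 items, 5 respondents
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 2, 4, 5],
    [3, 5, 1, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # → 0.913
```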

  15. Screening of hearing in elderly people: assessment of accuracy and reproducibility of the whispered voice test.

    PubMed

    Labanca, Ludimila; Guimarães, Fernando Sales; Costa-Guarisco, Letícia Pimenta; Couto, Erica de Araújo Brandão; Gonçalves, Denise Utsch

    2017-11-01

    Given the high prevalence of presbycusis and its detrimental effect on quality of life, screening tests can be useful tools for detecting hearing loss in primary care settings. This study therefore aimed to determine the accuracy and reproducibility of the whispered voice test as a screening method for detecting hearing impairment in older people. This cross-sectional study was carried out with 210 older adults aged between 60 and 97 years who underwent the whispered voice test employing ten different phrases and using audiometry as a reference test. Sensitivity, specificity and positive and negative predictive values were calculated and accuracy was measured by calculating the area under the ROC curve. The test was repeated on 20% of the ears by a second examiner to assess inter-examiner reproducibility (IER). The words and phrases that showed the highest area under the curve (AUC) and IER values were: "shoe" (AUC = 0.918; IER = 0.877), "window" (AUC = 0.917; IER = 0.869), "it looks like it's going to rain" (AUC = 0.911; IER = 0.810), and "the bus is late" (AUC = 0.900; IER = 0.810), demonstrating that the whispered voice test is a useful screening tool for detecting hearing loss among older people. It is proposed that these words and phrases should be incorporated into the whispered voice test protocol.
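    Sensitivity, specificity and the predictive values reported in such screening studies all derive from the 2×2 table of screening-test results against the reference test (here, audiometry). A minimal sketch with hypothetical counts, not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics against a reference test.
    tp/fp/fn/tn: true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # hit rate among impaired ears
        "specificity": tn / (tn + fp),  # correct-rejection rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical whispered-voice results vs audiometry reference
m = screening_metrics(tp=80, fp=10, fn=20, tn=100)
print({k: round(v, 3) for k, v in m.items()})
```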

  16. Do knowledge, knowledge sources and reasoning skills affect the accuracy of nursing diagnoses? a randomised study.

    PubMed

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos Mb; Krijnen, Wim P; van der Schans, Cees P

    2012-08-01

    This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse's disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. A randomised factorial design was used in 2008-2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse's age and the reasoning skills of 'deduction' and 'analysis'. Improving nurses' dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves the accuracy of nursing diagnoses.

  17. Do knowledge, knowledge sources and reasoning skills affect the accuracy of nursing diagnoses? a randomised study

    PubMed Central

    2012-01-01

    Background This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse’s disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. Method A randomised factorial design was used in 2008–2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. Results The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse’s age and the reasoning skills of ‘deduction’ and ‘analysis’. Conclusions Improving nurses’ dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves accuracy of nursing diagnoses. PMID:22852577

  18. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  19. Mid-upper arm circumference as a screening tool for identifying children with obesity: a 12-country study.

    PubMed

    Chaput, J-P; Katzmarzyk, P T; Barnes, J D; Fogelholm, M; Hu, G; Kuriyan, R; Kurpad, A; Lambert, E V; Maher, C; Maia, J; Matsudo, V; Olds, T; Onywera, V; Sarmiento, O L; Standage, M; Tudor-Locke, C; Zhao, P; Tremblay, M S

    2017-12-01

    No studies have examined whether mid-upper arm circumference (MUAC) can serve as an alternative screening tool for obesity in an international sample of children differing widely in levels of human development. Our aim was to determine whether MUAC could be used to identify obesity in children from 12 countries in five major geographic regions of the world. This observational, multinational cross-sectional study included 7337 children aged 9-11 years. Anthropometric measurements were objectively assessed, and obesity was defined according to the World Health Organization reference data. In the total sample, MUAC was strongly correlated with adiposity indicators in both boys and girls (r > 0.86, p < 0.001). The accuracy of MUAC for identifying obesity was high in both sexes and across study sites (overall area under the curve of 0.97, sensitivity of 95% and specificity of 90%). The MUAC cut-off value to identify obesity was ~25 cm for both boys and girls. In country-specific analyses, the cut-off value to identify obesity ranged from 23.2 cm (boys in South Africa) to 26.2 cm (girls in the UK). Results from this 12-country study suggest that MUAC is a simple and accurate measurement that may be used to identify obesity in children aged 9-11 years. MUAC may be a promising screening tool for obesity in resource-limited settings. © 2016 World Obesity Federation.
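    A cut-off such as the ~25 cm reported above is typically read off the ROC curve, for example by maximizing Youden's J (sensitivity + specificity − 1) over candidate thresholds. A minimal sketch with made-up MUAC values, not the study's data:

```python
def youden_cutoff(values, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    values: measurements (e.g. MUAC in cm); labels: 1 = obese, 0 = not."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical MUAC values (cm) and obesity labels
muac = [21.0, 22.5, 23.0, 24.0, 25.0, 25.5, 26.0, 27.5]
obese = [0, 0, 0, 0, 1, 1, 1, 1]
print(youden_cutoff(muac, obese))  # → (25.0, 1.0)
```

    On real data the classes overlap, so J peaks below 1 and the chosen threshold trades sensitivity against specificity rather than separating the groups perfectly as in this toy example.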

  20. Review of quality assessment tools for the evaluation of pharmacoepidemiological safety studies

    PubMed Central

    Neyarapally, George A; Hammad, Tarek A; Pinheiro, Simone P; Iyasu, Solomon

    2012-01-01

    Objectives Pharmacoepidemiological studies are an important hypothesis-testing tool in the evaluation of postmarketing drug safety. Despite the potential to produce robust value-added data, interpretation of findings can be hindered due to well-recognised methodological limitations of these studies. Therefore, assessment of their quality is essential to evaluating their credibility. The objective of this review was to evaluate the suitability and relevance of available tools for the assessment of pharmacoepidemiological safety studies. Design We created an a priori assessment framework consisting of reporting elements (REs) and quality assessment attributes (QAAs). A comprehensive literature search identified distinct assessment tools and the prespecified elements and attributes were evaluated. Primary and secondary outcome measures The primary outcome measure was the percentage representation of each domain, RE and QAA for the quality assessment tools. Results A total of 61 tools were reviewed. Most tools were not designed to evaluate pharmacoepidemiological safety studies. More than 50% of the reviewed tools considered REs under the research aims, analytical approach, outcome definition and ascertainment, study population and exposure definition and ascertainment domains. REs under the discussion and interpretation, results and study team domains were considered in less than 40% of the tools. Except for the data source domain, quality attributes were considered in less than 50% of the tools. Conclusions Many tools failed to include critical assessment elements relevant to observational pharmacoepidemiological safety studies and did not distinguish between REs and QAAs. Further, there is a lack of considerations on the relative weights of different domains and elements. The development of a quality assessment tool would facilitate consistent, objective and evidence-based assessments of pharmacoepidemiological safety studies. PMID:23015600

  1. L2 Speaking Development during Study Abroad: Fluency, Accuracy, Complexity, and Underlying Cognitive Factors

    ERIC Educational Resources Information Center

    Leonard, Karen Ruth; Shea, Christine E.

    2017-01-01

    We take a multidimensional perspective on the development of second language (L2) speaking ability and examine how changes in the underlying cognitive variables of linguistic knowledge and processing speed interact with complexity, fluency, and accuracy over the course of a 3-month Spanish study abroad session. Study abroad provides a unique…

  2. Social class, contextualism, and empathic accuracy.

    PubMed

    Kraus, Michael W; Côté, Stéphane; Keltner, Dacher

    2010-11-01

    Recent research suggests that lower-class individuals favor explanations of personal and political outcomes that are oriented to features of the external environment. We extended this work by testing the hypothesis that, as a result, individuals of a lower social class are more empathically accurate in judging the emotions of other people. In three studies, lower-class individuals (compared with upper-class individuals) received higher scores on a test of empathic accuracy (Study 1), judged the emotions of an interaction partner more accurately (Study 2), and made more accurate inferences about emotion from static images of muscle movements in the eyes (Study 3). Moreover, the association between social class and empathic accuracy was explained by the tendency for lower-class individuals to explain social events in terms of features of the external environment. The implications of class-based patterns in empathic accuracy for well-being and relationship outcomes are discussed.

  3. SU-F-T-405: Development of a Rapid Cardiac Contouring Tool Using Landmark-Driven Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelletier, C; Jung, J; Mosher, E

    2016-06-15

    Purpose: This study aims to develop a tool to rapidly delineate cardiac substructures for use in dosimetry for large-scale clinical trials or epidemiological investigations. The goal is to produce a system that can semi-automatically delineate nine cardiac structures to a reasonable accuracy within a couple of minutes. Methods: The cardiac contouring tool employs a most-similar-atlas method, in which a selection criterion is used to pre-select the atlas most similar to the patient from a library of pre-defined atlases. Sixty contrast-enhanced cardiac computed tomography angiography (CTA) scans (30 male and 30 female) were manually contoured to serve as the atlas library. For each CTA, 12 structures were delineated. The Kabsch algorithm was used to compute the optimal rotation and translation matrices between the patient and atlas. The minimum root-mean-squared distance between the patient and atlas after transformation was used to select the most similar atlas. An initial study using 10 CTA sets was performed to assess system feasibility. A leave-one-patient-out analysis was performed, and fit criteria were calculated to evaluate the fit accuracy compared with manual contours. Results: For the pilot study, mean Dice indices of 0.895 were achieved for the whole heart, 0.867 for the ventricles, and 0.802 for the atria. In addition, mean distance was measured via the chord length distribution (CLD) between ground truth and the atlas structures for the four coronary arteries. The mean CLD for all coronary arteries was below 14 mm, with the left circumflex artery showing the best agreement (7.08 mm). Conclusion: The cardiac contouring tool is able to delineate cardiac structures with reasonable accuracy in less than 90 seconds. Pilot data indicate that the system is able to delineate the whole heart and ventricles with reasonable accuracy even using a limited library. We are extending the atlas sets to 60 adult males and females in total.
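    The Dice indices reported above measure overlap between the atlas-based and manual contours: twice the size of the intersection divided by the sum of the two volumes. A minimal sketch over hypothetical voxel sets (not the study's pipeline):

```python
def dice_index(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel coordinates for a manual and an atlas-based contour
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
atlas = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_index(manual, atlas))  # → 0.75
```

    A value of 1.0 means the two contours coincide exactly; the 0.895 whole-heart figure above indicates substantial but imperfect overlap.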

  4. Predictive accuracy of the Historical-Clinical-Risk Management-20 for violence in forensic psychiatric wards in Japan.

    PubMed

    Arai, Kaoru; Takano, Ayumi; Nagata, Takako; Hirabayashi, Naotsugu

    2017-12-01

    Most structured assessment tools for assessing risk of violence were developed in Western countries, and evidence for their effectiveness is not well established in Asian countries. Our aim was to examine the predictive accuracy of the Historical-Clinical-Risk Management-20 (HCR-20) for violence in forensic mental health inpatient units in Japan. A retrospective record study was conducted with a complete 2008-2013 cohort of forensic psychiatric inpatients at the National Center Hospital of Neurology and Psychiatry, Tokyo. Forensic psychiatrists were trained in use of the HCR-20 and asked to complete it as part of their admission assessment. The completed forms were then retained by the researchers and not used in clinical practice; for this, clinicians relied solely on national legally required guidelines. Violent outcomes were determined at 3 and 6 months after the assessment. Receiver operating characteristic analysis was used to calculate the predictive accuracy of the HCR-20 for violence. Area under the curve analyses suggested that the HCR-20 total score is a good predictor of violence in this cohort, with the clinical and risk sub-scales showing good predictive accuracy, but the historical sub-scale not doing so. Area under the curve figures were similar at 3 months and at 6 months. Our results are consistent with studies previously conducted in Western countries. This suggests that the HCR-20 is an effective tool for supporting risk of violence assessment in Japanese forensic psychiatric wards. Its widespread use in clinical practice could enhance safety and would certainly promote transparency in risk-related decision-making. Copyright © 2016 John Wiley & Sons, Ltd.
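    The area-under-the-curve figures used in such analyses have a direct rank-based interpretation: the probability that a randomly chosen patient who became violent scored higher on the instrument than a randomly chosen patient who did not, with ties counted as one half. A minimal sketch with made-up scores (not the study's data):

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), ties counted as 0.5
    (the Mann-Whitney U statistic normalised by n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical HCR-20 total scores for violent vs non-violent patients
violent = [28, 25, 30, 22]
non_violent = [18, 20, 24, 15, 21]
print(auc_mann_whitney(violent, non_violent))  # → 0.95
```

    An AUC of 0.5 means the score is no better than chance at ranking the two groups; values approaching 1.0 indicate good discrimination.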

  5. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review

    PubMed Central

    Page, Matthew J; McKenzie, Joanne E; Higgins, Julian P T

    2018-01-01

    Background Several scales, checklists and domain-based tools for assessing risk of reporting biases exist, but it is unclear how much they vary in content and guidance. We conducted a systematic review of the content and measurement properties of such tools. Methods We searched for potentially relevant articles in Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Google Scholar from inception to February 2017. One author screened all titles, abstracts and full text articles, and collected data on tool characteristics. Results We identified 18 tools that include an assessment of the risk of reporting bias. Tools varied in regard to the type of reporting bias assessed (eg, bias due to selective publication, bias due to selective non-reporting), and the level of assessment (eg, for the study as a whole, a particular result within a study or a particular synthesis of studies). Various criteria are used across tools to designate a synthesis as being at ‘high’ risk of bias due to selective publication (eg, evidence of funnel plot asymmetry, use of non-comprehensive searches). However, the relative weight assigned to each criterion in the overall judgement is unclear for most of these tools. Tools for assessing risk of bias due to selective non-reporting guide users to assess a study, or an outcome within a study, as ‘high’ risk of bias if no results are reported for an outcome. However, assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is outside the scope of most of these tools. Inter-rater agreement estimates were available for five tools. Conclusion There are several limitations of existing tools for assessing risk of reporting biases, in terms of their scope, guidance for reaching risk of bias judgements and measurement properties. Development and evaluation of a new, comprehensive tool could help overcome present limitations. PMID:29540417

  6. Evaluating the Utility of Web-Based Consumer Support Tools Using Rough Sets

    NASA Astrophysics Data System (ADS)

    Maciag, Timothy; Hepting, Daryl H.; Slezak, Dominik; Hilderman, Robert J.

    On the Web, many popular e-commerce sites provide consumers with decision support tools to assist them in their commerce-related decision-making. Many consumers rank the utility of these tools quite highly. Data obtained from web usage mining analyses, which may provide knowledge about a user's online experiences, could help indicate the utility of these tools. This type of analysis could provide insight into whether the provided tools are adequately assisting consumers in conducting their online shopping activities or whether new or additional enhancements need consideration. Although some research in this regard has been described in previous literature, there is still much that can be done. The authors of this paper hypothesize that a measurement of consumer decision accuracy, i.e. a measurement of how closely consumers' decisions match their preferences, could help indicate the utility of these tools. This paper describes a procedure developed towards this goal using elements of rough set theory. The authors evaluated the procedure using two support tools, one based on a tool developed by the US-EPA and the other, called cogito, developed by one of the authors. Results from the evaluation provided interesting insights into the utility of both support tools. Although the cogito tool obtained slightly higher decision accuracy, both tools could benefit from additional enhancements. Details of the procedure developed and the results obtained from the evaluation are provided. Opportunities for future work are also discussed.
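    Rough set theory, which the procedure above draws on, rests on lower and upper approximations of a target set with respect to an indiscernibility partition: the lower approximation contains equivalence classes certainly inside the target, the upper those that at least touch it. A minimal sketch with hypothetical user groups (not the authors' actual procedure):

```python
def approximations(partition, target):
    """Rough-set lower/upper approximations of `target` with respect to
    an indiscernibility partition (a list of disjoint equivalence classes)."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:   # block entirely inside the target
            lower |= block
        if block & target:    # block overlaps the target at all
            upper |= block
    return lower, upper

# Hypothetical users grouped by identical clickstream features
blocks = [{1, 2}, {3}, {4, 5, 6}]
accurate_decisions = {1, 2, 4}  # users whose choice matched their preferences
lo, up = approximations(blocks, accurate_decisions)
print(sorted(lo), sorted(up))  # → [1, 2] [1, 2, 4, 5, 6]
```

    The gap between the two approximations (here users 5 and 6) is the boundary region: cases the observed features cannot classify as accurate or inaccurate.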

  7. Developing a Social Autopsy Tool for Dengue Mortality: A Pilot Study

    PubMed Central

    Arauz, María José; Ridde, Valéry; Hernández, Libia Milena; Charris, Yaneth; Carabali, Mabel; Villar, Luis Ángel

    2015-01-01

    Background Dengue fever is a public health problem in the tropical and sub-tropical world. Dengue cases, and dengue mortality with them, have grown dramatically in recent years. Colombia has experienced periodic dengue outbreaks with numerous dengue-related deaths, and the Santander department has been particularly affected. Although social determinants of health (SDH) shape health outcomes, including mortality, it is not yet understood how they affect dengue mortality. The aim of this pilot study was to develop and pre-test a social autopsy (SA) tool for dengue mortality. Methods and Findings The tool was developed and pre-tested in three steps. First, definitions of dengue fatal cases and ‘near misses’ (those who recovered from dengue complications) were elaborated. Second, a conceptual framework on determinants of dengue mortality was developed to guide the construction of the tool. Lastly, the tool was designed and pre-tested among three relatives of fatal cases and six near misses in 2013 in the metropolitan zone of Bucaramanga. After some modifications, the tool proved practical in the context of dengue mortality in Colombia. The tool aims to study the social, individual, and health-system determinants of dengue mortality. It focuses on the socioeconomic position and the intermediary SDH rather than on the socioeconomic and political context. Conclusions The SA tool is based on the scientific literature, a validated conceptual framework, researchers’ and health professionals’ expertise, and a pilot study. It is the first time that a SA tool has been created for the dengue mortality context. Our work furthers the study of SDH and how they apply to neglected tropical diseases like dengue. This tool could be integrated into surveillance systems to provide complementary information on modifiable and avoidable death-related factors and, therefore, to help formulate interventions for dengue mortality reduction. PMID:25658485

  8. Dynamic optimization case studies in DYNOPT tool

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    Dynamic programming is typically applied to optimization problems. Since analytical solutions are generally difficult to obtain, software tools are widely used. These packages are often third-party products built on top of standard simulation software available on the market. TOMLAB and DYNOPT are typical examples of such tools that can be applied effectively to dynamic programming problems. DYNOPT is presented in this paper due to its licensing policy (a free product under the GPL) and its simplicity of use. DYNOPT is a set of MATLAB functions for determining an optimal control trajectory given a description of the process, the cost to be minimized, and equality and inequality constraints, using the method of orthogonal collocation on finite elements. The optimal control problem is solved by complete parameterization of both the control and the state profile vectors. It is assumed that the dynamic model to be optimized can be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capability of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT for dynamic optimization problems through case studies of selected laboratory educational models.
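
    DYNOPT itself is a MATLAB package, but the core idea (parameterize the control profile and minimize the cost of the resulting ODE trajectory) can be illustrated outside MATLAB. The sketch below is a hypothetical single-shooting analogue in Python for a made-up scalar model x' = -x + u with running cost x^2 + u^2; DYNOPT uses orthogonal collocation rather than shooting, so this illustrates the problem class, not DYNOPT's method:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, N = 2.0, 10  # time horizon and number of piecewise-constant control intervals

def cost(u_params):
    """Integrate the state x and the running cost x^2 + u^2 jointly."""
    def rhs(t, y):
        u = u_params[min(int(t / T * N), N - 1)]  # active control segment
        x = y[0]
        return [-x + u, x**2 + u**2]
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]  # accumulated cost at the final time

res = minimize(cost, np.zeros(N), method="SLSQP")  # optimize the control profile
print(round(res.fun, 4))
```

    With x(0) = 1 and u held at zero the cost is about 0.49; the optimizer finds a control profile that lowers it, mirroring how DYNOPT searches over parameterized control and state profiles subject to constraints.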

  9. Feature instructions improve face-matching accuracy

    PubMed Central

    Bindemann, Markus

    2018-01-01

    Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822

  10. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
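
    A minimal random-walk sketch of a sequential sampling model (a generic symmetric-boundary accumulator, not the authors' exact model or parameterization) shows how a threshold parameter produces the speed-accuracy trade-off described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(drift, threshold, noise=1.0, dt=0.01, max_t=10.0):
    """Accumulate noisy evidence until it crosses +threshold (correct here)
    or -threshold (error). Returns (correct, response_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else 0), t

def run(drift, threshold, n=1000):
    res = np.array([trial(drift, threshold) for _ in range(n)])
    return res[:, 0].mean(), res[:, 1].mean()  # accuracy, mean response time

acc_speed, rt_speed = run(drift=1.0, threshold=0.8)  # "respond quickly"
acc_accur, rt_accur = run(drift=1.0, threshold=2.0)  # "respond accurately"
print(acc_speed, rt_speed, acc_accur, rt_accur)
```

    Raising the threshold yields slower but more accurate responses; a response bias (the Criterion in the study's model) would instead be captured by starting the accumulator nearer one of the boundaries.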

  11. A scrutiny of tools used for assessment of hospital disaster preparedness in Iran.

    PubMed

    Heidaranlu, Esmail; Ebadi, Abbas; Ardalan, Ali; Khankeh, Hamidreza

    2015-01-01

    In emergencies and disasters, hospitals are among the first and most vital organizations involved. To determine a hospital's preparedness to deal with a crisis, the health system requires tools compatible with the type of crisis. The present study aimed to evaluate the accuracy of tools used for assessing hospital preparedness for major emergencies and disasters in Iran. In this review study, all studies conducted on hospital preparedness to deal with disasters in Iran between 2000 and 2015 were examined. The World Health Organization (WHO) criteria were used to assess the focus of studies for inclusion. Of the 36 articles obtained, 28 that met the inclusion criteria were analyzed. In accordance with the WHO standards, the focus of the tools used was examined in three areas (structural, nonstructural, and functional). In the nonstructural area, preparedness tools focused most on medical gases and least on office and storeroom furnishings and equipment. In the functional area, the most focus was on the operational plan, and the least on business continuity. Half of the tools in domestic studies considered structural safety as an indicator of hospital preparedness. The present study showed that the tools used contain only a few of the indicators approved by the WHO, especially in the functional area. Moreover, the lack of a standard indigenous tool was evident, especially in the functional area. Thus, to assess hospital disaster preparedness, the national health system requires new tools designed according to scientific tool-design principles, to enable a more accurate prediction of hospital preparedness before disasters occur.

  12. An online tool for tracking soil nitrogen

    NASA Astrophysics Data System (ADS)

    Wang, J.; Umar, M.; Banger, K.; Pittelkow, C. M.; Nafziger, E. D.

    2016-12-01

    Near real-time crop models can be useful tools for optimizing agricultural management practices. For example, model simulations can potentially provide current estimates of nitrogen availability in soil, helping growers decide whether more nitrogen needs to be applied in a given season. Traditionally, crop models have been used at point locations (i.e., single fields) with homogeneous soil, climate and initial conditions. However, estimates of nitrogen availability across fields with varied weather and soil conditions at a regional or national level are necessary to guide better management decisions. This study presents the development of a publicly available, online tool that automates the integration of high-spatial-resolution forecast and historical weather and soil data into DSSAT to estimate nitrogen availability for individual fields in Illinois. The model has been calibrated with field experiments from the previous year at six research corn fields across Illinois. These sites were treated with different N fertilizer timings and amounts. The tool requires minimal management information from growers and yet has the capability to simulate nitrogen-water-crop interactions with calibrated parameters that are more appropriate for Illinois. The results from the tool will be combined with incoming field experiment data from 2016 for model validation and further improvement of the model's predictive accuracy. The tool has the potential to help guide better nitrogen management practices to maximize economic and environmental benefits.

  13. Using Kepler for Tool Integration in Microarray Analysis Workflows.

    PubMed

    Gan, Zhuohui; Stowe, Jennifer C; Altintas, Ilkay; McCulloch, Andrew D; Zambon, Alexander C

    Increasing numbers of genomic technologies are generating massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments, which makes integration of these diverse bioinformatics tools difficult. Kepler provides an open source environment to integrate these disparate packages. Using Kepler, we integrated several external tools, including Bioconductor packages, AltAnalyze (a Python-based open source tool), and an R-based comparison tool, to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves the efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.

  14. The methodological quality of three foundational law enforcement Drug Influence Evaluation validation studies.

    PubMed

    Kane, Greg

    2013-11-04

    A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect's driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice.

  15. Evaluation of the nutrition screening tool for childhood cancer (SCAN).

    PubMed

    Murphy, Alexia J; White, Melinda; Viani, Karina; Mosby, Terezie T

    2016-02-01

    Malnutrition is a serious concern for children with cancer, and nutrition screening may offer a simple alternative to nutrition assessment for identifying children with cancer who are at risk of malnutrition. The present paper aimed to evaluate the nutrition screening tool for childhood cancer (SCAN). SCAN was developed after an extensive review of currently available tools and published screening recommendations, consideration of pediatric oncology nutrition guidelines, piloting of questions, and consultation with members of the International Pediatric Oncology Nutrition Group. In Study 1, the accuracy and validity of SCAN against the pediatric subjective global nutrition assessment (pediatric SGNA) were determined. In Study 2, subjects were classified as 'at risk of malnutrition' or 'not at risk of malnutrition' according to SCAN, and measures of height, weight, body mass index (BMI) and body composition were compared between the groups. The validation of SCAN against pediatric SGNA showed that SCAN had 'excellent' accuracy (0.90, 95% CI 0.78-1.00; p < 0.001), 100% sensitivity, 39% specificity, 56% positive predictive value and 100% negative predictive value. When subjects in Study 2 were classified into 'at risk of malnutrition' and 'not at risk of malnutrition' groups according to SCAN, the 'at risk of malnutrition' group had significantly lower values for weight Z score (p = 0.001), BMI Z score (p = 0.001) and fat mass index (FMI) (p = 0.04) than the 'not at risk of malnutrition' group. This study shows that SCAN is a simple, quick and valid tool which can be used to identify children with cancer who are at risk of malnutrition. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
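
    The four reported measures follow mechanically from a 2x2 confusion matrix. A short sketch with hypothetical counts, chosen only to mirror SCAN's reported pattern of perfect sensitivity/NPV and modest specificity/PPV (the study's actual counts are not given in the abstract):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged among truly malnourished
        "specificity": tn / (tn + fp),   # cleared among truly well-nourished
        "ppv": tp / (tp + fp),           # truly malnourished among flagged
        "npv": tn / (tn + fn),           # truly well-nourished among cleared
    }

# Hypothetical counts: every at-risk child is flagged (fn = 0), at the
# cost of many false positives.
stats = diagnostic_stats(tp=20, fp=16, fn=0, tn=10)
print(stats)  # sensitivity 1.0, specificity ~0.38, ppv ~0.56, npv 1.0
```

    A screening tool is usually tuned this way on purpose: missing a malnourished child (a false negative) is costlier than referring a well-nourished one for full assessment.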

  16. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    To address the problems of low machining accuracy and uncontrolled thermal errors in NC machine tools, spindle thermal error measurement, modeling and compensation for a two-turntable five-axis machine tool are investigated. Measurement experiments on heat sources and thermal errors were carried out, and the GRA (grey relational analysis) method was introduced for selecting the temperature variables used in thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system was developed, and the predictions of LSR (least squares regression), ANN and ABC-NN were compared with the measured spindle thermal errors. The experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, demonstrating that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
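
    For illustration, the LSR baseline that ABC-NN is compared against can be sketched as an ordinary least-squares fit of thermal error to temperature-rise variables. Everything below (sensor count, sensitivities, noise level) is synthetic, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
temps = rng.uniform(0, 15, size=(40, 3))            # 3 temperature rises (K), 40 samples
true_w = np.array([1.8, 0.6, 2.4])                  # synthetic sensitivities (um/K)
error_um = temps @ true_w + rng.normal(0, 1.0, 40)  # "measured" thermal error (um)

X = np.column_stack([np.ones(len(temps)), temps])   # intercept + temperature terms
coef, *_ = np.linalg.lstsq(X, error_um, rcond=None) # least squares regression
residual = error_um - X @ coef
print(coef.round(2), float(np.abs(residual).max()))
```

    In the study, such a linear model is outperformed by the ANN and ABC-NN models, whose extra flexibility can capture nonlinear temperature-error relationships.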

  17. In vivo Study of the Accuracy of Dual-arch Impressions.

    PubMed

    de Lima, Luciana Martinelli Santayana; Borges, Gilberto Antonio; Junior, Luiz Henrique Burnett; Spohr, Ana Maria

    2014-06-01

    This study evaluated in vivo the accuracy of metal (Smart®) and plastic (Triple Tray®) dual-arch trays used with vinyl polysiloxane (Flexitime®) in the putty/wash viscosity, as well as polyether (Impregum Soft®) in the regular viscosity. In one patient, an implant-level transfer was screwed onto an implant in the mandibular right first molar, serving as a pattern. Ten impressions were made with each tray and impression material. The impressions were poured with Type IV gypsum. The width and height of the pattern and casts were measured in a profile projector (Nikon). The results were submitted to a one-sample Student's t-test (α = 0.05). For the width distance, the plastic dual-arch trays with vinyl polysiloxane (4.513 mm) and with polyether (4.531 mm) were statistically wider than the pattern (4.489 mm). The metal dual-arch tray with vinyl polysiloxane (4.504 mm) and with polyether (4.500 mm) did not differ statistically from the pattern. For the height distance, only the metal dual-arch tray with polyether (2.253 mm) differed statistically from the pattern (2.310 mm). The metal dual-arch tray with vinyl polysiloxane, in the putty/wash viscosities, reproduced casts with less distortion than the same technique with the plastic dual-arch tray. The plastic and metal dual-arch trays with polyether reproduced casts with greater distortion. How to cite the article: Santayana de Lima LM, Borges GA, Burnett LH Jr, Spohr AM. In vivo study of the accuracy of dual-arch impressions. J Int Oral Health 2014;6(3):50-5.

  18. A HTML5 open source tool to conduct studies based on Libet's clock paradigm.

    PubMed

    Garaizar, Pablo; Cubillas, Carmelo P; Matute, Helena

    2016-09-13

    Libet's clock is a well-known procedure in experiments in psychology and neuroscience. Examples of its use include experiments exploring the subjective sense of agency, action-effect binding, and subjective timing of conscious decisions and perceptions. However, the technical details of the apparatus used to conduct these types of experiments are complex, and are rarely explained in sufficient detail as to guarantee an exact replication of the procedure. With this in mind, we developed Labclock Web, a web tool designed to conduct online and offline experiments using Libet's clock. After describing its technical features, we explain how to configure specific experiments using this tool. Its degree of accuracy and precision in the presentation of stimuli has been technically validated, including the use of two cognitive experiments conducted with voluntary participants who performed the experiment both in our laboratory and via the Internet. Labclock Web is distributed without charge under a free software license (GPLv3) since one of our main objectives is to facilitate the replication of experiments and hence the advancement of knowledge in this area.

  19. Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs, which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool and the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology described herein has proven to be highly accurate in a variety of different cases.

  20. An Initial Study of Airport Arrival Capacity Benefits Due to Improved Scheduling Accuracy

    NASA Technical Reports Server (NTRS)

    Meyn, Larry; Erzberger, Heinz

    2005-01-01

    The long-term growth rate in air-traffic demand leads to future air-traffic densities that are unmanageable by today's air-traffic control system. In order to accommodate such growth, new technology and operational methods will be needed in the next generation air-traffic control system. One proposal for such a system is the Automated Airspace Concept (AAC). One of the precepts of AAC is to direct aircraft using trajectories that are sent via an air-ground data link. This greatly improves the accuracy in directing aircraft to specific waypoints at specific times. Studies of the Center-TRACON Automation System (CTAS) have shown that increased scheduling accuracy enables increased arrival capacity at CTAS-equipped airports.

  1. Assuring high quality treatment delivery in clinical trials - Results from the Trans-Tasman Radiation Oncology Group (TROG) study 03.04 "RADAR" set-up accuracy study.

    PubMed

    Haworth, Annette; Kearvell, Rachel; Greer, Peter B; Hooton, Ben; Denham, James W; Lamb, David; Duchesne, Gillian; Murray, Judy; Joseph, David

    2009-03-01

    A multi-centre clinical trial for prostate cancer patients provided an opportunity to introduce conformal radiotherapy with dose escalation. To verify adequate treatment accuracy prior to patient recruitment, centres submitted details of a set-up accuracy study (SUAS). We report the results of the SUAS, the variation in clinical practice and the strategies used to help centres improve treatment accuracy. The SUAS required each of the 24 participating centres to collect data on at least 10 pelvic patients imaged on a minimum of 20 occasions. Software was provided for data collection and analysis. Support to centres was provided through educational lectures, the trial quality assurance team and an information booklet. Only two centres had recently carried out a SUAS prior to the trial opening. Systematic errors were generally smaller than those previously reported in the literature. The questionnaire identified many differences in patient set-up protocols. As a result of participating in this QA activity, more than 65% of centres improved their treatment delivery accuracy. Conducting a pre-trial SUAS has led to improved treatment delivery accuracy in many centres. Treatment techniques and set-up accuracy varied greatly, demonstrating the need for ongoing awareness of such studies in future trials and with the introduction of dose escalation or new technologies.

  2. Solid models for CT/MR image display: accuracy and utility in surgical planning

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; Yue, Alvin; Ammirati, Mario; Kioumehr, Farhad; Turner, Scott

    1991-05-01

    Medical imaging can now take wider advantage of Computer-Aided Manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations are providing a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. Although this life-size anatomic model is more easily understandable by the surgeon, its accuracy and true surgical utility remain untested. We have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of 99.6 percent. Because of the ease of exact voxel localization on the model, its precision was high, with a standard deviation of measurement of 0.71 percent. The measurements on the surface-rendered display proved more difficult to locate exactly and yielded a standard deviation of 2.37 percent. This paper presents our accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools.

  3. Running accuracy analysis of a 3-RRR parallel kinematic machine considering the deformations of the links

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Jiang, Yao; Li, Tiemin

    2014-09-01

    Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.

  4. The Effect of Flexible Pavement Mechanics on the Accuracy of Axle Load Sensors in Vehicle Weigh-in-Motion Systems

    PubMed Central

    Rys, Dawid

    2017-01-01

    Weigh-in-Motion systems are tools for protecting road pavements from the adverse effects of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving their weighing accuracy, which is currently insufficient for direct enforcement against overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not yet been explained. The aim of our study is to fill this gap in the knowledge. The explanation presented in the paper is based on pavement/sensor mechanics and the application of multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure cause vehicle weight measurement errors. These studies are important for Weigh-in-Motion systems intended for direct enforcement and will help to improve the accuracy of weighing results. PMID:28880215

  5. Genome Editing Tools in Plants

    PubMed Central

    Mohanta, Tapan Kumar; Bashir, Tufail; Hashem, Abeer; Bae, Hanhong

    2017-01-01

    Genome editing tools have the potential to change the genomic architecture of a genome at precise locations, with desired accuracy. These tools have been efficiently used for trait discovery and for the generation of plants with high crop yields and resistance to biotic and abiotic stresses. Due to complex genomic architecture, it is challenging to edit all of the genes/genomes using a particular genome editing tool. Therefore, to overcome this challenging task, several genome editing tools have been developed to facilitate efficient genome editing. Some of the major genome editing tools used to edit plant genomes are: homologous recombination (HR), zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), pentatricopeptide repeat proteins (PPRs), the CRISPR/Cas9 system, RNA interference (RNAi), cisgenesis, and intragenesis. In addition, site-directed sequence editing and oligonucleotide-directed mutagenesis have the potential to edit the genome at the single-nucleotide level. Recently, adenine base editors (ABEs) have been developed to convert A-T base pairs to G-C base pairs. ABEs use a deoxyadenosine deaminase (TadA) together with a catalytically impaired Cas9 nickase to achieve this conversion. PMID:29257124

  6. Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.

    2017-01-01

    Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.

  7. Diagnostic accuracy of the Eurotest for dementia: a naturalistic, multicenter phase II study

    PubMed Central

    Carnero-Pardo, Cristobal; Gurpegui, Manuel; Sanchez-Cantalejo, Emilio; Frank, Ana; Mola, Santiago; Barquero, M Sagrario; Montoro-Rios, M Teresa

    2006-01-01

    Background Available screening tests for dementia are of limited usefulness because they are influenced by the patient's culture and educational level. The Eurotest, an instrument based on the knowledge and handling of money, was designed to overcome these limitations. The objective of this study was to evaluate the diagnostic accuracy of the Eurotest in identifying dementia in customary clinical practice. Methods A cross-sectional, multi-center, naturalistic phase II study was conducted. The Eurotest was administered to consecutive patients, older than 60 years, in general neurology clinics. The patients' condition was classified as dementia or no dementia according to DSM-IV diagnostic criteria. We calculated sensitivity (Sn), specificity (Sp) and area under the ROC curves (aROC) with 95% confidence intervals. The influence of social and educational factors on scores was evaluated with multiple linear regression analysis, and the influence of these factors on diagnostic accuracy was evaluated with logistic regression. Results Sixteen neurologists recruited a total of 516 participants: 101 with dementia, 380 without dementia, and 35 who were excluded. Of the 481 participants who took the Eurotest, 38.7% were totally or functionally illiterate and 45.5% had received no formal education. Mean time needed to administer the test was 8.2+/-2.0 minutes. The best cut-off point was 20/21, with Sn = 0.91 (0.84–0.96), Sp = 0.82 (0.77–0.85), and aROC = 0.93 (0.91–0.95). Neither the scores on the Eurotest nor its diagnostic accuracy were influenced by social or educational factors. Conclusion This naturalistic and pragmatic study shows that the Eurotest is a rapid, simple and useful screening instrument, which is free from educational influences, and has appropriate internal and external validity. PMID:16606455
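
    Sensitivity, specificity and the area under the ROC curve can all be computed directly from raw test scores. The sketch below uses synthetic score distributions (not the study's data) and assumes the Eurotest convention that lower scores indicate dementia, with scores at or below the 20/21 cut-off counted as test-positive:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a diseased case scores
    lower than a non-diseased one (with 0.5 credit for ties)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float(np.mean((pos < neg) + 0.5 * (pos == neg)))

rng = np.random.default_rng(1)
dementia = rng.normal(14, 4, 100).round()  # synthetic low scorers
controls = rng.normal(26, 4, 380).round()  # synthetic high scorers

sens = float(np.mean(dementia <= 20))      # positives correctly detected
spec = float(np.mean(controls > 20))       # negatives correctly cleared
print(round(auc(dementia, controls), 3), round(sens, 2), round(spec, 2))
```

    Sweeping the cut-off over all observed scores and plotting sensitivity against 1 - specificity traces the ROC curve whose area the abstract reports as 0.93.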

  8. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered, in which participants were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants then received data interpretation ('rule of thumb') training, which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. The DIT was given to each participant before and after the rule of thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The
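
    The quantity participants were trained to estimate has a standard closed form. The sketch below shows the textbook parametric estimate of a log-normal 95th percentile from a small sample; the study's simplified rule of thumb is a shortcut approximation to this calculation, and the exact rule taught is not reproduced here:

```python
import math

def lognormal_p95(samples):
    """exp(mean(ln x) + 1.645 * sd(ln x)): the parametric 95th-percentile
    estimate for log-normally distributed exposure measurements."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    return math.exp(mu + 1.645 * sd)

# e.g. five hypothetical air samples (mg/m^3)
print(round(lognormal_p95([0.2, 0.5, 0.8, 1.1, 2.4]), 2))
```

    Comparing this estimate against the occupational exposure limit is what distinguishes an acceptable exposure profile from one needing controls, which is why judging the 95th percentile well matters.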

  9. The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images

    PubMed Central

    Mitry, Danny; Zutis, Kris; Dhillon, Baljean; Peto, Tunde; Hayat, Shabina; Khaw, Kay-Tee; Morgan, James E.; Moncur, Wendy; Trucco, Emanuele; Foster, Paul J.

    2016-01-01

    Purpose Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool, comparing it to expert classification. Methods We used 100 retinal fundus photograph images with predetermined disease criteria selected by two experts from a large cohort study. The Amazon Mechanical Turk Web platform was used to drive traffic to our site so anonymous workers could perform a classification and annotation task of the fundus photographs in our dataset after a short training exercise. Three groups were assessed: masters only, nonmasters only and nonmasters with compulsory training. We calculated the sensitivity, specificity, and area under the curve (AUC) of receiver operating characteristic (ROC) plots for all classifications compared to expert grading, and used the Dice coefficient and consensus threshold to assess annotation accuracy. Results In total, we received 5389 annotations for 84 images (excluding 16 training images) in 2 weeks. A specificity of 71% (95% confidence interval [CI], 69%–74%) and a sensitivity of 87% (95% CI, 86%–88%) were achieved for all classifications. The AUC in this study for all classifications combined was 0.93 (95% CI, 0.91–0.96). For image annotation, a maximal Dice coefficient (∼0.6) was achieved with a consensus threshold of 0.25. Conclusions This study supports the hypothesis that annotation of abnormalities in retinal images by ophthalmologically naive individuals is comparable to expert annotation. The highest AUC and agreement with expert annotation was achieved in the nonmasters with compulsory training group. Translational Relevance The use of crowdsourcing as a technique for retinal image analysis may be comparable to expert graders and has the potential to deliver
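
    The Dice coefficient and consensus-threshold scoring used for annotation accuracy can be sketched as follows. This is a minimal illustration with toy masks; the function names and threshold handling are assumptions, not the authors' code.

```python
import numpy as np

def consensus_mask(annotations, threshold=0.25):
    """Combine several binary crowd annotations (same H x W shape)
    into one mask: a pixel is kept if at least `threshold` of the
    workers marked it (sketch of the consensus-threshold idea)."""
    votes = np.mean(np.stack(annotations), axis=0)
    return votes >= threshold

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy example: three workers' masks scored against an expert mask
expert = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
workers = [np.array([[1, 1, 0], [0, 0, 0]], dtype=bool),
           np.array([[1, 0, 0], [0, 1, 1]], dtype=bool),
           np.array([[0, 1, 0], [0, 1, 0]], dtype=bool)]
crowd = consensus_mask(workers, threshold=0.25)
print(round(dice(crowd, expert), 2))
```

    Sweeping the threshold trades precision against recall of the consensus mask, which is why the study reports the threshold (0.25) at which the Dice coefficient peaked.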

  10. Comparison of quality control software tools for diffusion tensor imaging.

    PubMed

    Liu, Bilan; Zhu, Tong; Zhong, Jianhui

    2015-04-01

    Image quality of diffusion tensor imaging (DTI) is critical for image interpretation, diagnostic accuracy and efficiency. However, DTI is susceptible to numerous detrimental artifacts that may impair the reliability and validity of the obtained data. Many quality control (QC) software tools have been developed and are widely used, each with its own tradeoffs, yet there is still no general agreement on an image quality control routine for DTI, and the practical impact of these tradeoffs is not well studied. An objective comparison that identifies the pros and cons of each QC tool will help users make the best choice among tools for specific DTI applications. This study aims to quantitatively compare the effectiveness of three popular QC tools: DTI studio (Johns Hopkins University), DTIprep (University of North Carolina at Chapel Hill, University of Iowa and University of Utah) and TORTOISE (National Institutes of Health). Both synthetic and in vivo human brain data were used to quantify the adverse effects of major DTI artifacts on tensor calculation, as well as the effectiveness of the different QC tools in identifying and correcting these artifacts. The technical basis of each tool was discussed, and the ways in which particular techniques affect the output of each of the tools were analyzed. The different functions and I/O formats that the three QC tools provide for building a general DTI processing pipeline and integrating with other popular image processing tools were also discussed.

  11. Use of Molecular Diagnostic Tools for the Identification of Species Responsible for Snakebite in Nepal: A Pilot Study

    PubMed Central

    Sharma, Sanjib Kumar; Kuch, Ulrich; Höde, Patrick; Bruhse, Laura; Pandey, Deb P.; Ghimire, Anup; Chappuis, François; Alirol, Emilie

    2016-01-01

    Snakebite is an important medical emergency in rural Nepal. Correct identification of the biting species is crucial for clinicians to choose appropriate treatment and anticipate complications. This is particularly important for neurotoxic envenoming which, depending on the snake species involved, may not respond to available antivenoms. Adequate species identification tools are lacking. This study used a combination of morphological and molecular approaches (PCR-aided DNA sequencing from swabs of bite sites) to determine the contribution of venomous and non-venomous species to the snakebite burden in southern Nepal. Out of 749 patients admitted with a history of snakebite to one of three study centres, the biting species could be identified in 194 (25.9%). Out of these, 87 had been bitten by a venomous snake, most commonly the Indian spectacled cobra (Naja naja; n = 42) and the common krait (Bungarus caeruleus; n = 22). When both morphological identification and PCR/sequencing results were available, a 100% agreement was noted. The probability of a positive PCR result was significantly lower among patients who had used inadequate “first aid” measures (e.g. tourniquets or local application of remedies). This study is the first to report the use of forensic genetics methods for snake species identification in a prospective clinical study. If high diagnostic accuracy is confirmed in larger cohorts, this method will be a very useful reference diagnostic tool for epidemiological investigations and clinical studies. PMID:27105074

  12. Predicting the Accuracy of Protein–Ligand Docking on Homology Models

    PubMed Central

    BORDOGNA, ANNALISA; PANDINI, ALESSANDRO; BONATI, LAURA

    2011-01-01

    Ligand–protein docking is increasingly used in Drug Discovery. The initial limitations imposed by a reduced availability of target protein structures have been overcome by the use of theoretical models, especially those derived by homology modeling techniques. While this greatly extended the use of docking simulations, it also introduced the need for general and robust criteria to estimate the reliability of docking results given the model quality. To this end, a large-scale experiment was performed on a diverse set including experimental structures and homology models for a group of representative ligand–protein complexes. A wide spectrum of model quality was sampled using templates at different evolutionary distances and different strategies for target–template alignment and modeling. The obtained models were scored by a selection of the most used model quality indices. The binding geometries were generated using AutoDock, one of the most common docking programs. An important result of this study is that indeed quantitative and robust correlations exist between the accuracy of docking results and the model quality, especially in the binding site. Moreover, state-of-the-art indices for model quality assessment are already an effective tool for an a priori prediction of the accuracy of docking experiments in the context of groups of proteins with conserved structural characteristics. PMID:20607693

  13. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
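
    Traditional PA can be sketched in a few lines: eigenvalues of the observed correlation matrix are retained while they exceed the corresponding mean eigenvalues from random data of the same dimensions. This is an illustrative implementation, not the authors' code; their revised PA (R-PA) instead compares against an upper percentile of the simulated eigenvalues, in keeping with hypothesis testing.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Traditional parallel analysis: count how many eigenvalues of
    the observed correlation matrix exceed the mean eigenvalues of
    random normal data with the same n and p (sketch)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    ref = sims.mean(axis=0)            # R-PA would use e.g. the 95th percentile
    return int(np.sum(obs > ref))

# toy data with a single strong common factor across 6 items
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 1))
data = f @ np.ones((1, 6)) + 0.5 * rng.standard_normal((300, 6))
print(parallel_analysis(data))
```

    Run on the toy data above, the procedure should recover the single underlying factor.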

  14. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    ERIC Educational Resources Information Center

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  15. Key Technical Aspects Influencing the Accuracy of Tablet Subdivision.

    PubMed

    Teixeira, Maíra T; Sá-Barreto, Lívia C L; Gratieri, Taís; Gelfuso, Guilherme M; Silva, Izabel C R; Cunha-Filho, Marcílio S S

    2017-05-01

    Tablet subdivision is a common practice used mainly for dose adjustment. The aim of this study was to investigate how the technical aspects of production, as well as the method of tablet subdivision (employing a tablet splitter or a kitchen knife), influence the accuracy of this practice. Five drugs commonly used as subdivided tablets were selected. For each drug, the innovator drug product, a scored generic and a non-scored generic were investigated, totaling fifteen drug products. Mechanical and physical tests, including image analysis, were performed. Additionally, comparisons were made between tablet subdivision method, score, shape, diluent composition and coating. Image analysis based on surface area was a useful alternative assay to evaluate the accuracy of tablet subdivision. The tablet splitter demonstrated an advantage relative to a knife, as it showed better results in weight loss and friability tests. Oblong, coated and scored tablets had better results after subdivision than round, uncoated and non-scored tablets. The presence of elastic diluents such as starch and dibasic phosphate dihydrate conferred a more appropriate behaviour for the subdivision process than plastic materials such as microcrystalline cellulose and lactose. Finally, differences were observed between generics and their innovator products in all selected drugs with regard to the quality control assays of divided tablets, which highlights the need for health regulations to consider subdivision performance, at least in the marketing authorization of generic products.

  16. Comparative Accuracy Evaluation of Fine-Scale Global and Local Digital Surface Models: The Tshwane Case Study I

    NASA Astrophysics Data System (ADS)

    Breytenbach, A.

    2016-10-01

    Conducted in the City of Tshwane, South Africa, this study set out to test the accuracy of DSMs derived locally from different remotely sensed data. VHR digital mapping camera stereo-pairs, tri-stereo imagery collected by a Pléiades satellite and data acquired by the TanDEM-X InSAR satellite configuration were fundamental in the construction of seamless DSM products at different postings, namely 2 m, 4 m and 12 m. The three DSMs were sampled against independent control points originating from validated airborne LiDAR data. The reference surfaces were derived from the same dense point cloud at grid resolutions corresponding to those of the samples. The absolute and relative positional accuracies were computed using well-known DEM error metrics and accuracy statistics. Overall vertical accuracies were also assessed and compared across seven slope classes and nine primary land cover classes. Although all three DSMs displayed significantly more vertical errors where solid waterbodies, dense natural and/or alien woody vegetation and, to a lesser degree, urban residential areas with significant canopy cover were encountered, all three surpassed their expected positional accuracies overall.
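
    The "well-known DEM error metrics" referred to above typically include the mean error (bias), standard deviation and RMSE of the vertical differences at the control points. A minimal sketch with illustrative values, not the study's data:

```python
import math

def vertical_accuracy(dsm_z, ref_z):
    """Common DEM/DSM vertical error metrics against reference
    control points: mean error (bias), standard deviation, RMSE."""
    errs = [d - r for d, r in zip(dsm_z, ref_z)]
    n = len(errs)
    me = sum(errs) / n                                   # bias
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    sd = math.sqrt(sum((e - me) ** 2 for e in errs) / (n - 1))
    return me, sd, rmse

# toy elevations in metres (DSM sample vs LiDAR reference)
dsm = [101.2, 99.8, 100.5, 102.1]
ref = [100.9, 100.0, 100.1, 101.6]
print(vertical_accuracy(dsm, ref))
```

    Reporting these metrics per slope class and per land cover class, as the study does, exposes where a DSM's errors concentrate (e.g. under dense canopy) rather than averaging them away.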

  17. Gaussian process regression for tool wear prediction

    NASA Astrophysics Data System (ADS)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

    To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR outperforms artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the presence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, compressing and smoothing the confidence interval considerably, which is conducive to monitoring the tool wear accurately. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately using the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
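
    The GPR half of such a pipeline, prediction with a confidence interval under explicitly modeled Gaussian noise, can be sketched with scikit-learn. This is an illustrative stand-in: the paper's KPCA_IRBF feature-fusion step is omitted and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (40, 1))                  # e.g. cutting time
y = 0.02 * X.ravel() ** 1.5 + 0.01 * rng.standard_normal(40)  # flank wear

# WhiteKernel models the measurement noise explicitly, which is what
# lets the GP report a predictive confidence interval alongside the mean.
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[5.0]])
mean, std = gpr.predict(X_new, return_std=True)
m, s = mean[0], std[0]
lo, hi = m - 1.96 * s, m + 1.96 * s              # 95% confidence interval
print(f"predicted wear {m:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

    Noisy or redundant input features inflate the predictive standard deviation, which is the abstract's motivation for denoising via KPCA_IRBF before fitting the GPR.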

  18. Benchmarking of software tools for optical proximity correction

    NASA Astrophysics Data System (ADS)

    Jungmann, Angelika; Thiele, Joerg; Friedrich, Christoph M.; Pforr, Rainer; Maurer, Wilhelm

    1998-06-01

    The point when optical proximity correction (OPC) will become a routine procedure for every design is not far away. For such daily use, the requirements for an OPC tool go far beyond the principal functionality of OPC, which has been proven by a number of approaches and is well documented in the literature. In this paper we first discuss the requirements for a productive OPC tool. Against these requirements, a benchmark was performed with three different OPC tools available on the market (OPRX from TVT, OPTISSIMO from aiss and PROTEUS from TMA). Each of these tools uses a different approach to perform the correction (rules, simulation or model). To assess the accuracy of the correction, a test chip was fabricated which contains corrections done by each software tool. The advantages and weaknesses of the several solutions are discussed.

  19. Accuracy of pulse oximetry in children.

    PubMed

    Ross, Patrick A; Newth, Christopher J L; Khemani, Robinder G

    2014-01-01

    For children with cyanotic congenital heart disease or acute hypoxemic respiratory failure, providers frequently make decisions based on pulse oximetry, in the absence of an arterial blood gas. The study objective was to measure the accuracy of pulse oximetry in the pulse oximetry saturation (SpO2) range of 65% to 97%. This institutional review board-approved prospective, multicenter observational study in 5 PICUs included 225 mechanically ventilated children with an arterial catheter. With each arterial blood gas sample, SpO2 from pulse oximetry and arterial oxygen saturations from CO-oximetry (SaO2) were simultaneously obtained if the SpO2 was ≤ 97%. The lowest SpO2 obtained in the study was 65%. In the range of SpO2 65% to 97%, 1980 simultaneous values for SpO2 and SaO2 were obtained. The bias (SpO2 - SaO2) varied through the range of SpO2 values. The bias was greatest in the SpO2 range 81% to 85% (336 samples, median 6%, mean 6.6%, accuracy root mean squared 9.1%). SpO2 measurements were close to SaO2 in the SpO2 range 91% to 97% (901 samples, median 1%, mean 1.5%, accuracy root mean squared 4.2%). Previous studies on pulse oximeter accuracy in children present a single number for bias. This study identified that the accuracy of pulse oximetry varies significantly as a function of the SpO2 range. Saturations measured by pulse oximetry on average overestimate SaO2 from CO-oximetry in the SpO2 range of 76% to 90%. Better pulse oximetry algorithms are needed for accurate assessment of children with saturations in the hypoxemic range.
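
    The two summary statistics reported here, bias (SpO2 - SaO2) and accuracy root mean squared (often written Arms), are straightforward to compute from paired readings. The values below are toy numbers, not the study's data.

```python
import math

def bias_and_arms(spo2, sao2):
    """Mean bias (SpO2 - SaO2) and accuracy root-mean-square (Arms)
    over paired pulse-oximetry and CO-oximetry readings."""
    diffs = [s - a for s, a in zip(spo2, sao2)]
    bias = sum(diffs) / len(diffs)
    arms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, arms

# toy paired readings in the 81-85% SpO2 band (hypothetical values)
spo2 = [84, 82, 85, 83, 81]
sao2 = [78, 75, 80, 76, 77]
print(bias_and_arms(spo2, sao2))
```

    Because Arms folds both bias and scatter into one number, a device can meet an Arms specification overall while still being badly biased within a narrow saturation band, which is exactly the range-dependent behaviour the study reports.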

  20. Measuring sleep: accuracy, sensitivity, and specificity of wrist actigraphy compared to polysomnography.

    PubMed

    Marino, Miguel; Li, Yi; Rueschman, Michael N; Winkelman, J W; Ellenbogen, J M; Solet, J M; Dulin, Hilary; Berkman, Lisa F; Buxton, Orfeu M

    2013-11-01

    We validated actigraphy for detecting sleep and wakefulness versus polysomnography (PSG). Actigraphy and polysomnography were simultaneously collected during sleep laboratory admissions. All studies involved 8.5 h time in bed, except for the sleep restriction studies. Epochs (30-sec; n = 232,849) were characterized for sensitivity (actigraphy = sleep when PSG = sleep), specificity (actigraphy = wake when PSG = wake), and accuracy (total proportion correct); the amount of wakefulness after sleep onset (WASO) was also assessed. A generalized estimating equation (GEE) model included age, gender, insomnia diagnosis, and daytime/nighttime sleep timing factors. The setting was controlled sleep laboratory conditions; participants were young and older adults, healthy or chronic primary insomnia (PI) patients, and 23 night-workers studied during daytime sleep (n = 77, age 35.0 ± 12.5, 30F, mean nights = 3.2). Overall, sensitivity (0.965) and accuracy (0.863) were high, whereas specificity (0.329) was low; each was only slightly modified by gender, insomnia, and day/night sleep timing (magnitude of change < 0.04). Increasing age slightly reduced specificity. Mean WASO/night was 49.1 min by PSG compared to 36.8 min/night by actigraphy (β = 0.81; CI = 0.42, 1.21), unbiased when WASO < 30 min/night, and overestimated when WASO > 30 min/night. This validation quantifies the strengths and weaknesses of actigraphy as a tool for measuring sleep in clinical and population studies. Overall, the participant-specific accuracy is relatively high, and for most participants, above 80%. We validated this finding across multiple nights and a variety of adults across much of the young to midlife years, in both men and women, in those with and without insomnia, and in 77 participants. We conclude that actigraphy is overall a useful and valid means for estimating total sleep time and wakefulness after sleep onset in field and workplace studies, with some limitations in specificity.
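
    The epoch-by-epoch agreement measures are computed exactly as defined in the abstract: sensitivity (actigraphy = sleep when PSG = sleep), specificity (actigraphy = wake when PSG = wake) and overall accuracy. A minimal sketch with toy epochs; note how a device that scores nearly everything as sleep reproduces the study's pattern of high sensitivity but low specificity.

```python
def epoch_agreement(actigraphy, psg):
    """Per-epoch sensitivity, specificity and accuracy of actigraphy
    against PSG, coding each 30-s epoch as 1 = sleep, 0 = wake."""
    tp = sum(a == 1 and p == 1 for a, p in zip(actigraphy, psg))
    tn = sum(a == 0 and p == 0 for a, p in zip(actigraphy, psg))
    sleep = sum(psg)
    wake = len(psg) - sleep
    sensitivity = tp / sleep      # sleep epochs correctly scored as sleep
    specificity = tn / wake       # wake epochs correctly scored as wake
    accuracy = (tp + tn) / len(psg)
    return sensitivity, specificity, accuracy

# toy epoch sequences (1 = sleep, 0 = wake), not the study's data
psg =        [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
actigraphy = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
print(epoch_agreement(actigraphy, psg))
```

    Because most of a night in bed is spent asleep, overall accuracy can remain above 80% even with poor specificity, which is why the abstract reports all three measures rather than accuracy alone.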

  1. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  2. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review.

    PubMed

    Page, Matthew J; McKenzie, Joanne E; Higgins, Julian P T

    2018-03-14

    Several scales, checklists and domain-based tools for assessing risk of reporting biases exist, but it is unclear how much they vary in content and guidance. We conducted a systematic review of the content and measurement properties of such tools. We searched for potentially relevant articles in Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Google Scholar from inception to February 2017. One author screened all titles, abstracts and full text articles, and collected data on tool characteristics. We identified 18 tools that include an assessment of the risk of reporting bias. Tools varied in regard to the type of reporting bias assessed (eg, bias due to selective publication, bias due to selective non-reporting), and the level of assessment (eg, for the study as a whole, a particular result within a study or a particular synthesis of studies). Various criteria are used across tools to designate a synthesis as being at 'high' risk of bias due to selective publication (eg, evidence of funnel plot asymmetry, use of non-comprehensive searches). However, the relative weight assigned to each criterion in the overall judgement is unclear for most of these tools. Tools for assessing risk of bias due to selective non-reporting guide users to assess a study, or an outcome within a study, as 'high' risk of bias if no results are reported for an outcome. However, assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is outside the scope of most of these tools. Inter-rater agreement estimates were available for five tools. There are several limitations of existing tools for assessing risk of reporting biases, in terms of their scope, guidance for reaching risk of bias judgements and measurement properties. Development and evaluation of a new, comprehensive tool could help overcome present limitations.

  3. Nutritional Risk Screening 2002, Short Nutritional Assessment Questionnaire, Malnutrition Screening Tool, and Malnutrition Universal Screening Tool Are Good Predictors of Nutrition Risk in an Emergency Service.

    PubMed

    Rabito, Estela Iraci; Marcadenti, Aline; da Silva Fink, Jaqueline; Figueira, Luciane; Silva, Flávia Moraes

    2017-08-01

    There is an international consensus that nutrition screening should be performed in hospitals; however, there is no "best tool" for screening of malnutrition risk in hospitalized patients. The aims were to evaluate (1) the accuracy of the MUST (Malnutrition Universal Screening Tool), MST (Malnutrition Screening Tool), and SNAQ (Short Nutritional Assessment Questionnaire) in comparison with the NRS-2002 (Nutritional Risk Screening 2002) to identify patients at risk of malnutrition and (2) the ability of these nutrition screening tools to predict morbidity and mortality. A specific questionnaire was administered to complete the 4 screening tools. Outcome measures included length of hospital stay, transfer to the intensive care unit, presence of infection, and incidence of death. A total of 752 patients were included. The nutrition risk was 29.3%, 37.1%, 33.6%, and 31.3% according to the NRS-2002, MUST, MST, and SNAQ, respectively. All screening tools showed satisfactory performance in identifying patients at nutrition risk (area under the receiver operating characteristic curve between 0.765 and 0.808). Patients at nutrition risk showed a higher risk of very long hospital stay compared with those not at nutrition risk, independent of the tool applied (relative risk, 1.35-1.78). Increased risk of mortality (2.34 times) was detected by the MUST. The MUST, MST, and SNAQ share similar accuracy with the NRS-2002 in identifying risk of malnutrition, and all instruments were positively associated with very long hospital stay. In clinical practice, the 4 tools could be applied, and the choice among them should be made according to the particularities of the service.

  4. The Effects of Alcohol Intoxication on Accuracy and the Confidence–Accuracy Relationship in Photographic Simultaneous Line‐ups

    PubMed Central

    Colloff, Melissa F.; Karoğlu, Nilda; Zelek, Katarzyna; Ryder, Hannah; Humphries, Joyce E.; Takarangi, Melanie K.T.

    2017-01-01

    Summary Acute alcohol intoxication during encoding can impair subsequent identification accuracy, but results across studies have been inconsistent, with studies often finding no effect. Little is also known about how alcohol intoxication affects the identification confidence–accuracy relationship. We randomly assigned women (N = 153) to consume alcohol (dosed to achieve a 0.08% blood alcohol content) or tonic water, controlling for alcohol expectancy. Women then participated in an interactive hypothetical sexual assault scenario and, 24 hours or 7 days later, attempted to identify the assailant from a perpetrator present or a perpetrator absent simultaneous line‐up and reported their decision confidence. Overall, levels of identification accuracy were similar across the alcohol and tonic water groups. However, women who had consumed tonic water as opposed to alcohol identified the assailant with higher confidence on average. Further, calibration analyses suggested that confidence is predictive of accuracy regardless of alcohol consumption. The theoretical and applied implications of our results are discussed. © 2017 The Authors Applied Cognitive Psychology Published by John Wiley & Sons Ltd. PMID:28781426

  5. Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.

    PubMed

    Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim

    2017-09-01

    Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). Conventional impressions had higher accuracy with a mean trueness of 17.0

  6. Volumetric Verification of Multiaxis Machine Tool Using Laser Tracker

    PubMed Central

    Aguilar, Juan José

    2014-01-01

    This paper aims to present a method of volumetric verification for machine tools with linear and rotary axes using a laser tracker. Beyond a method for one particular machine, it presents a methodology that can be used for any machine type. The schema and kinematic model of a machine with three axes of movement (two linear axes and one rotary axis), including the measurement system and the nominal rotation matrix of the rotary axis, are presented. Using this, the machine tool volumetric error is obtained, and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, not physical, compensation, in less time than other verification methods, by means of the indirect measurement of geometric errors of the machine from the linear and rotary axes. This paper presents an extensive study of the appropriateness and drawbacks of the regression function employed, depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques are presented, depending on the space available to place the measurement system. These studies provide the most appropriate strategies to verify each machine tool, taking into consideration its configuration and its available work space. PMID:25202744

  7. Longitudinal Study: Efficacy of Online Technology Tools for Instructional Use

    NASA Technical Reports Server (NTRS)

    Uenking, Michael D.

    2011-01-01

    Studies show that the student population (secondary and post-secondary) is becoming increasingly technologically savvy. Use of the internet, computers, MP3 players, and other technologies, along with online gaming, has increased tremendously among this population, creating an apparent paradigm shift in the learning modalities of these students. Instructors and facilitators of learning can no longer rely solely on traditional lecture-based lesson formats. In order to achieve student academic success and satisfaction and to increase student retention, instructors must embrace the various technology tools that are available and employ them in their lessons. A longitudinal study (January 2009-June 2010) was performed that encompasses the use of several technology tools in an instructional setting. The study provides further evidence that students not only like the tools that are being used, but prefer that these tools be used to help supplement and enhance instruction.

  8. Absolute vs. relative error characterization of electromagnetic tracking accuracy

    NASA Astrophysics Data System (ADS)

    Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet

    2010-02-01

    Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the

  9. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    NASA Astrophysics Data System (ADS)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the dominant error source for industrial robots, playing a more significant role than other error factors. A compensation model for kinematic geometric error is proposed in this article. Since many methods can be used to test robot accuracy, the question arises of how to determine which method is better. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were used to test the robot accuracy according to the relevant standard. Based on the compensation results, the better method, i.e. the one that more clearly improves the robot accuracy, is identified.
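
    The idea behind kinematic geometric parameter compensation can be illustrated with a toy two-link planar arm (not the robot or model used in the paper): once the true link lengths have been identified, e.g. from laser-tracker measurements, substituting them for the nominal values in the controller's kinematic model removes the geometric error. All lengths and joint angles below are invented.

```python
import math

def fk(theta1, theta2, l1, l2):
    """Forward kinematics of a 2-link planar arm (tool-tip position)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

nominal = (300.0, 250.0)   # link lengths in the controller model, mm
actual = (300.8, 249.5)    # true (manufactured) lengths, mm
pose = (math.radians(30), math.radians(45))

# Error before compensation: the controller predicts with nominal lengths,
# while the physical arm moves according to the actual ones.
pred = fk(*pose, *nominal)
real = fk(*pose, *actual)
err_before = math.dist(pred, real)

# Compensation: the identified (calibrated) lengths replace the nominal
# ones in the kinematic model, so prediction matches the physical arm.
pred_comp = fk(*pose, *actual)
err_after = math.dist(pred_comp, real)

print(f"error before: {err_before:.3f} mm, after: {err_after:.3f} mm")
```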

  10. Droplet sizing instrumentation used for icing research: Operation, calibration, and accuracy

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1989-01-01

    The accuracy of the Forward Scattering Spectrometer Probe (FSSP) is determined using laboratory tests, wind tunnel comparisons, and computer simulations. Operation in an icing environment is discussed and a new calibration device for the FSSP (the rotating pinhole) is demonstrated to be a valuable tool. Operation of the Optical Array Probe is also presented along with a calibration device (the rotating reticle) which is suitable for performing detailed analysis of that instrument.

  11. Trained student pharmacists' telephonic collection of patient medication information: Evaluation of a structured interview tool.

    PubMed

    Margolis, Amanda R; Martin, Beth A; Mott, David A

    2016-01-01

    To determine the feasibility and fidelity of student pharmacists collecting patient medication list information using a structured interview tool and the accuracy of documenting the information. The medication lists were used by a community pharmacist to provide a targeted medication therapy management (MTM) intervention. Descriptive analysis of patient medication lists collected with telephone interviews. Ten trained student pharmacists collected the medication lists. Trained student pharmacists conducted audio-recorded telephone interviews with 80 English-speaking, community-dwelling older adults using a structured interview tool to collect and document medication lists. Feasibility was measured using the number of completed interviews, the time student pharmacists took to collect the information, and pharmacist feedback. Fidelity to the interview tool was measured by assessing student pharmacists' adherence to asking all scripted questions and probes. Accuracy was measured by comparing the audio-recorded interviews to the medication list information documented in an electronic medical record. On average, it took student pharmacists 26.7 minutes to collect the medication lists. The community pharmacist said the medication lists were complete and that having the medication lists saved time and allowed him to focus on assessment, recommendations, and education during the targeted MTM session. Fidelity was high, with an overall proportion of asked scripted probes of 83.75% (95% confidence interval [CI], 80.62-86.88%). Accuracy was also high for both prescription (95.1%; 95% CI, 94.3-95.8%) and nonprescription (90.5%; 95% CI, 89.4-91.4%) medications. Trained student pharmacists were able to use an interview tool to collect and document medication lists with a high degree of fidelity and accuracy. This study suggests that student pharmacists or trained technicians may be able to collect patient medication lists to facilitate MTM sessions in the community pharmacy

  13. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance and pointing accuracy of the laser quads, and from the assemblage accuracy of the capsule, is investigated by using a 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order of magnitude lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than that in the elliptical hohlraum. The results indicate that the spherical hohlraum is less sensitive to these random variations than the cylindrical and elliptical hohlraums. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and on the assemblage accuracy of the capsule.

  14. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    PubMed

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI)) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of the GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low-heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' in the aforementioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBV and estimated breeding values (EBVs) from different countries) were combined into an overall index while altering accuracies of EBVs and correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs.
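
    A minimal sketch of the selection index calculation used in such studies, with illustrative numbers rather than the paper's exact matrices: combining an own-performance record with a GEBV of accuracy r_mg = 0.5 for a trait with h2 = 0.15 yields an index accuracy above either source alone.

```python
# Toy selection index: trait heritability h2 = 0.15, phenotypic variance
# standardised to 1 so that the additive genetic variance sigma_A^2 = h2.
# Information sources: own performance and a GEBV with accuracy r_mg = 0.5.
h2 = 0.15
r_mg = 0.5

# P: (co)variance matrix of the index sources; g: their covariances with
# the true breeding value. var(GEBV) = cov(GEBV, TBV) = r_mg^2 * sigma_A^2.
v_gebv = r_mg ** 2 * h2
P = [[1.0, v_gebv],
     [v_gebv, v_gebv]]
g = [h2, v_gebv]

# Index weights b = P^-1 g (2x2 inverse written out explicitly).
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
b = [(P[1][1] * g[0] - P[0][1] * g[1]) / det,
     (P[0][0] * g[1] - P[1][0] * g[0]) / det]

# Accuracy of the index: r_TI = sqrt(b'g / sigma_A^2).
r_TI = (sum(bi * gi for bi, gi in zip(b, g)) / h2) ** 0.5
print(round(r_TI, 3))  # exceeds both sqrt(h2) ~ 0.39 and r_mg = 0.5
```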

  15. Accuracy of Raman spectroscopy in differentiating brain tumor from normal brain tissue.

    PubMed

    Zhang, Jing; Fan, Yimeng; He, Min; Ma, Xuelei; Song, Yanlin; Liu, Ming; Xu, Jianguo

    2017-05-30

    Raman spectroscopy could be applied to distinguish tumor from normal tissues. This meta-analysis was conducted to assess the accuracy of Raman spectroscopy in differentiating brain tumor from normal brain tissue. PubMed and Embase were searched to identify suitable studies published prior to Jan 1st, 2016. We estimated the pooled sensitivity, specificity, positive and negative likelihood ratios (LR), and diagnostic odds ratio (DOR), and constructed summary receiver operating characteristic (SROC) curves to identify the accuracy of Raman spectroscopy in differentiating brain tumor from normal brain tissue. A total of six studies with 1951 spectra were included. For glioma, the pooled sensitivity and specificity of Raman spectroscopy were 0.96 (95% CI 0.94-0.97) and 0.99 (95% CI 0.98-0.99), respectively. The area under the curve (AUC) was 0.9831. For meningioma, the pooled sensitivity and specificity were 0.98 (95% CI 0.94-1.00) and 1.00 (95% CI 0.98-1.00), respectively. The AUC was 0.9955. This meta-analysis suggested that Raman spectroscopy could be an effective and accurate tool for differentiating glioma and meningioma from normal brain tissue, which would help us both avoid removal of normal tissue and minimize the volume of residual tumor.
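
    To make the pooled quantities concrete, here is a hedged sketch that aggregates hypothetical per-study 2x2 counts; actual diagnostic meta-analyses (including this one) typically fit bivariate or SROC models rather than simply summing counts across studies.

```python
# Hypothetical per-study counts (TP, FN, TN, FP) -- invented numbers,
# not the six studies pooled in the meta-analysis.
studies = [
    (120, 6, 200, 3),
    (250, 9, 310, 5),
    (80, 4, 150, 2),
]

tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
# Diagnostic odds ratio: odds of a positive test in diseased vs healthy.
dor = (tp / fn) / (fp / tn)
print(f"pooled sensitivity={sens:.3f} specificity={spec:.3f} DOR={dor:.1f}")
```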

  16. User Studies: Developing Learning Strategy Tool Software for Children.

    ERIC Educational Resources Information Center

    Fitzgerald, Gail E.; Koury, Kevin A.; Peng, Hsinyi

    This paper is a report of user studies for developing learning strategy tool software for children. The prototype software demonstrated is designed for children with learning and behavioral disabilities. The tools consist of easy-to-use templates for creating organizational, memory, and learning approach guides for use in classrooms and at home.…

  17. Development of a prototype clinical decision support tool for osteoporosis disease management: a qualitative study of focus groups.

    PubMed

    Kastner, Monika; Li, Jamy; Lottridge, Danielle; Marquez, Christine; Newton, David; Straus, Sharon E

    2010-07-22

    Osteoporosis affects over 200 million people worldwide, and represents a significant cost burden. Although guidelines are available for best practice in osteoporosis, evidence indicates that patients are not receiving appropriate diagnostic testing or treatment according to guidelines. The use of clinical decision support systems (CDSSs) may be one solution because they can facilitate knowledge translation by providing high-quality evidence at the point of care. Findings from a systematic review of osteoporosis interventions and consultation with clinical and human factors engineering experts were used to develop a conceptual model of an osteoporosis tool. We conducted a qualitative study of focus groups to better understand physicians' perceptions of CDSSs and to transform the conceptual osteoporosis tool into a functional prototype that can support clinical decision making in osteoporosis disease management at the point of care. The conceptual design of the osteoporosis tool was tested in 4 progressive focus groups with family physicians and general internists. An iterative strategy was used to qualitatively explore the experiences of physicians with CDSSs; and to find out what features, functions, and evidence should be included in a working prototype. Focus groups were conducted using a semi-structured interview guide using an iterative process where results of the first focus group informed changes to the questions for subsequent focus groups and to the conceptual tool design. Transcripts were transcribed verbatim and analyzed using grounded theory methodology. Of the 3 broad categories of themes that were identified, major barriers related to the accuracy and feasibility of extracting bone mineral density test results and medications from the risk assessment questionnaire; using an electronic input device such as a Tablet PC in the waiting room; and the importance of including well-balanced information in the patient education component of the osteoporosis

  19. Accuracy testing of electric groundwater-level measurement tapes

    USGS Publications Warehouse

    Jelinski, Jim; Clayton, Christopher S.; Fulford, Janice M.

    2015-01-01

    The accuracy tests demonstrated that none of the electric-tape models tested consistently met the suggested USGS accuracy of ±0.01 ft. The test data show that the tape models in the study should give a water-level measurement that is accurate to roughly ±0.05 ft per 100 ft without additional calibration. To meet USGS accuracy guidelines, the electric-tape models tested will need to be individually calibrated. Specific conductance also plays a part in tape accuracy. The probes will not work in water with specific conductance values near zero, and the accuracy of one probe was unreliable in very high conductivity water (10,000 microsiemens per centimeter).

  20. Diagnostic accuracy of a clinical diagnosis of idiopathic pulmonary fibrosis: an international case–cohort study

    PubMed Central

    Maher, Toby M.; Kolb, Martin; Poletti, Venerino; Nusser, Richard; Richeldi, Luca; Vancheri, Carlo; Wilsher, Margaret L.; Antoniou, Katerina M.; Behr, Jüergen; Bendstrup, Elisabeth; Brown, Kevin; Calandriello, Lucio; Corte, Tamera J.; Crestani, Bruno; Flaherty, Kevin; Glaspole, Ian; Grutters, Jan; Inoue, Yoshikazu; Kokosi, Maria; Kondoh, Yasuhiro; Kouranos, Vasileios; Kreuter, Michael; Johannson, Kerri; Judge, Eoin; Ley, Brett; Margaritopoulos, George; Martinez, Fernando J.; Molina-Molina, Maria; Morais, António; Nunes, Hilario; Raghu, Ganesh; Ryerson, Christopher J.; Selman, Moises; Spagnolo, Paolo; Taniguchi, Hiroyuki; Tomassetti, Sara; Valeyre, Dominique; Wijsenbeek, Marlies; Wuyts, Wim; Hansell, David; Wells, Athol

    2017-01-01

    We conducted an international study of idiopathic pulmonary fibrosis (IPF) diagnosis among a large group of physicians and compared their diagnostic performance to a panel of IPF experts. A total of 1141 respiratory physicians and 34 IPF experts participated. Participants evaluated 60 cases of interstitial lung disease (ILD) without interdisciplinary consultation. Diagnostic agreement was measured using the weighted kappa coefficient (κw). Prognostic discrimination between IPF and other ILDs was used to validate diagnostic accuracy for first-choice diagnoses of IPF, with performance compared using the C-index. A total of 404 physicians completed the study. Agreement for IPF diagnosis was higher among expert physicians (κw=0.65, IQR 0.53–0.72, p<0.0001) than academic physicians (κw=0.56, IQR 0.45–0.65, p<0.0001) or physicians with access to multidisciplinary team (MDT) meetings (κw=0.54, IQR 0.45–0.64, p<0.0001). The prognostic accuracy of academic physicians with >20 years of experience (C-index=0.72, IQR 0.0–0.73, p=0.229) and of non-university hospital physicians with more than 20 years of experience attending weekly MDT meetings (C-index=0.72, IQR 0.70–0.72, p=0.052) did not differ significantly from that of the expert panel (C-index=0.74, IQR 0.72–0.75). Experienced respiratory physicians at university-based institutions diagnose IPF with prognostic accuracy similar to that of IPF experts. Regular MDT meeting attendance improves the prognostic accuracy of experienced non-university practitioners to levels achieved by IPF experts. PMID:28860269
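
    Diagnostic agreement in the study is quantified with the weighted kappa coefficient (κw). A minimal implementation with linear weights, applied to an invented two-rater confusion matrix (not the study's data), looks like this:

```python
def weighted_kappa(matrix):
    """Linearly weighted Cohen's kappa for an n x n agreement matrix
    (rows: rater 1, columns: rater 2). Weight w_ij = 1 - |i-j|/(n-1)."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    row_m = [sum(row) / total for row in matrix]
    col_m = [sum(matrix[i][j] for i in range(n)) / total for j in range(n)]
    w = [[1 - abs(i - j) / (n - 1) for j in range(n)] for i in range(n)]
    # Observed vs chance-expected weighted agreement.
    po = sum(w[i][j] * matrix[i][j] / total for i in range(n) for j in range(n))
    pe = sum(w[i][j] * row_m[i] * col_m[j] for i in range(n) for j in range(n))
    return (po - pe) / (1 - pe)

# Toy 3-category agreement between two raters (counts invented).
m = [[20, 5, 1],
     [4, 15, 3],
     [0, 4, 8]]
print(round(weighted_kappa(m), 3))
```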

  1. Managing complex research datasets using electronic tools: A meta-analysis exemplar

    PubMed Central

    Brown, Sharon A.; Martin, Ellen E.; Garcia, Theresa J.; Winter, Mary A.; García, Alexandra A.; Brown, Adama; Cuevas, Heather E.; Sumlin, Lisa L.

    2013-01-01

    Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, e.g., EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process, as well as enhancing communication among research team members. The purpose of this paper is to describe the electronic processes we designed, using commercially available software, for an extensive quantitative model-testing meta-analysis we are conducting. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to: decide on which electronic tools to use, determine how these tools would be employed, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members. PMID:23681256

  2. In vivo study of flow-rate accuracy of the MedStream Programmable Infusion System.

    PubMed

    Venugopalan, Ramakrishna; Ginggen, Alec; Bork, Toralf; Anderson, William; Buffen, Elaine

    2011-01-01

    Flow-rate accuracy and precision are important parameters for optimizing the efficacy of programmable intrathecal (IT) infusion pump delivery systems. Current programmable IT pumps are accurate within ±14.5% of their programmed infusion rate when assessed under ideal environmental conditions and specific flow-rate settings in vitro. We assessed the flow-rate accuracy of a novel programmable pump system across its entire flow-rate range under typical conditions in sheep (in vivo) and nominal conditions in vitro. The flow-rate accuracy of the MedStream Programmable Pump was assessed in both the in vivo and in vitro settings. In vivo flow-rate accuracy was assessed in 16 sheep at various flow rates (producing 90 flow intervals) over 90 ± 3 days. Pumps were then explanted and re-sterilized, and in vitro flow-rate accuracy was assessed at 37°C and 1013 mBar (80 flow intervals). In vivo (sheep body temperatures 38.1°C-39.8°C), mean ± SD flow-rate error was 9.32% ± 9.27% and mean ± SD leak rate was 0.028 ± 0.08 mL/day. Following explantation, mean in vitro flow-rate error and leak rate were -1.05% ± 2.55% and 0.003 ± 0.004 mL/day (37°C, 1013 mBar), respectively. The MedStream Programmable Pump demonstrated high flow-rate accuracy when tested in vivo and in vitro at normal body temperature and environmental pressure, as well as when tested in vivo at variable sheep body temperature. The flow-rate accuracy of the MedStream Programmable Pump across its flow-rate range compares favorably to the accuracy of current clinically utilized programmable IT infusion pumps reported at specific flow-rate settings and conditions. © 2011 International Neuromodulation Society.
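
    Flow-rate error of the kind reported here is the deviation from the programmed rate expressed as a percentage of that rate; a small sketch with invented interval data:

```python
import statistics

def flow_rate_error(programmed, measured):
    """Per-interval flow-rate error as a percentage of the programmed rate."""
    return (measured - programmed) / programmed * 100.0

# Hypothetical (programmed, measured) rates in mL/day for a few intervals.
intervals = [(1.0, 1.08), (0.5, 0.56), (2.0, 2.14), (1.5, 1.62)]

errors = [flow_rate_error(p, m) for p, m in intervals]
mean_err = statistics.mean(errors)
sd_err = statistics.stdev(errors)
print(f"mean error {mean_err:.2f}% +/- {sd_err:.2f}%")
```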

  3. Case studies on forecasting for innovative technologies: frequent revisions improve accuracy.

    PubMed

    Lerner, Jeffrey C; Robertson, Diane C; Goldstein, Sara M

    2015-02-01

    Health technology forecasting is designed to provide reliable predictions about costs, utilization, diffusion, and other market realities before the technologies enter routine clinical use. In this article we address three questions central to forecasting's usefulness: Are early forecasts sufficiently accurate to help providers acquire the most promising technology and payers to set effective coverage policies? What variables contribute to inaccurate forecasts? How can forecasters manage the variables to improve accuracy? We analyzed forecasts published between 2007 and 2010 by the ECRI Institute on four technologies: single-room proton beam radiation therapy for various cancers; digital breast tomosynthesis imaging technology for breast cancer screening; transcatheter aortic valve replacement for serious heart valve disease; and minimally invasive robot-assisted surgery for various cancers. We then examined revised ECRI forecasts published in 2013 (digital breast tomosynthesis) and 2014 (the other three topics) to identify inaccuracies in the earlier forecasts and explore why they occurred. We found that five of twenty early predictions were inaccurate when compared with the updated forecasts. The inaccuracies pertained to two technologies that had more time-sensitive variables to consider. The case studies suggest that frequent revision of forecasts could improve accuracy, especially for complex technologies whose eventual use is governed by multiple interactive factors. Project HOPE—The People-to-People Health Foundation, Inc.

  4. Nanopore sequencing technology and tools for genome assembly: computational analysis of the current state, bottlenecks and future directions.

    PubMed

    Senol Cali, Damla; Kim, Jeremie S; Ghose, Saugata; Alkan, Can; Mutlu, Onur

    2018-04-02

    Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. We conclude that our observations can guide researchers and practitioners in making conscious

  5. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
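
    An eigenvalue sensitivity coefficient expresses the relative change in the system eigenvalue k per relative change in an input parameter σ, i.e. S = (σ/k)(dk/dσ). A toy central-difference illustration on an invented 2x2 matrix (not a transport model, and not the CLUTCH or IFP adjoint-based methods the paper evaluates):

```python
def dominant_eigenvalue(mat, iters=200):
    """Dominant eigenvalue of a positive 2x2 matrix via power iteration."""
    v = [1.0, 1.0]
    k = 0.0
    for _ in range(iters):
        w = [mat[0][0] * v[0] + mat[0][1] * v[1],
             mat[1][0] * v[0] + mat[1][1] * v[1]]
        k = max(abs(x) for x in w)
        v = [x / k for x in w]
    return k

def system(sigma):
    """Toy parameter-dependent matrix; sigma mimics a cross-section."""
    return [[2.0 + sigma, 0.5],
            [0.3, 1.0 + 0.5 * sigma]]

sigma = 0.8
k0 = dominant_eigenvalue(system(sigma))

# Central-difference sensitivity S = (sigma/k) * dk/dsigma.
h = 1e-5
dk = (dominant_eigenvalue(system(sigma + h)) -
      dominant_eigenvalue(system(sigma - h))) / (2 * h)
S = sigma / k0 * dk
print(round(S, 4))
```

    Production codes compute such coefficients with adjoint-weighted tallies rather than brute-force perturbation, which is what makes methods like CLUTCH attractive for memory and efficiency.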

  6. Accuracy of Canadian health administrative databases in identifying patients with rheumatoid arthritis: a validation study using the medical records of rheumatologists.

    PubMed

    Widdifield, Jessica; Bernatsky, Sasha; Paterson, J Michael; Tu, Karen; Ng, Ryan; Thorne, J Carter; Pope, Janet E; Bombardier, Claire

    2013-10-01

    Health administrative data can be a valuable tool for disease surveillance and research. Few studies have rigorously evaluated the accuracy of administrative databases for identifying rheumatoid arthritis (RA) patients. Our aim was to validate administrative data algorithms to identify RA patients in Ontario, Canada. We performed a retrospective review of a random sample of 450 patients from 18 rheumatology clinics. Using rheumatologist-reported diagnosis as the reference standard, we tested and validated different combinations of physician billing, hospitalization, and pharmacy data. One hundred forty-nine rheumatology patients were classified as having RA and 301 were classified as not having RA based on our reference standard definition (study RA prevalence 33%). Overall, algorithms that included physician billings had excellent sensitivity (range 94-100%). Specificity and positive predictive value (PPV) were modest to excellent and increased when algorithms included multiple physician claims or specialist claims. The addition of RA medications did not significantly improve algorithm performance. The algorithm of "(1 hospitalization RA code ever) OR (3 physician RA diagnosis codes [claims] with ≥1 by a specialist in a 2-year period)" had a sensitivity of 97%, specificity of 85%, PPV of 76%, and negative predictive value of 98%. Most RA patients (84%) had an RA diagnosis code present in the administrative data within ±1 year of a rheumatologist's documented diagnosis date. We demonstrated that administrative data can be used to identify RA patients with a high degree of accuracy. RA diagnosis date and disease duration are fairly well estimated from administrative data in jurisdictions of universal health care insurance. Copyright © 2013 by the American College of Rheumatology.
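
    The reported algorithm performance can be reproduced from a 2x2 table; the counts below are reconstructed approximately from the abstract's 149 RA / 301 non-RA sample and its rounded percentages, so they are illustrative rather than the study's exact cell counts.

```python
# Approximate 2x2 table for the best-performing algorithm.
tp, fn = 145, 4    # RA patients flagged / missed by the algorithm
fp, tn = 45, 256   # non-RA patients flagged / correctly excluded

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
print(f"sens={sensitivity:.0%} spec={specificity:.0%} "
      f"PPV={ppv:.0%} NPV={npv:.0%}")
```

    Note how the modest PPV follows from the 33% study prevalence: even a specific algorithm flags a fair number of the more numerous non-RA patients.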

  7. Digitizing the Facebow: A Clinician/Technician Communication Tool.

    PubMed

    Kalman, Les; Chrapka, Julia; Joseph, Yasmin

    2016-01-01

    Communication between the clinician and the technician has been an ongoing problem in dentistry. To improve the issue, a dental software application has been developed--the Virtual Facebow App. It is an alternative to the traditional analog facebow, used to orient the maxillary cast in mounting. Comparison data of the two methods indicated that the digitized virtual facebow provided increased efficiency in mounting, increased accuracy in occlusion, and lower cost. Occlusal accuracy, lab time, and total time were statistically significant (P<.05). The virtual facebow provides a novel alternative for cast mounting and another tool for clinician-technician communication.

  8. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    The analysis of machining accuracy after free form surface milling simulations (based on machining EN AW-7075 alloy) for different machining strategies (Level Z, Radial, Square, Circular) is presented in this work. The individual milling simulations were performed using the Esprit CAD/CAM software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces gives a value of roughness, which results from the mapping of the tool shape onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described methodology of using CAD/CAM software can shorten the design time of the machining process for free form surface milling on a 5-axis CNC milling machine, since the part does not have to be machined in order to measure the machining accuracy for the selected strategies and cutting data.
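
    The roughness left by tool-shape mapping can be estimated analytically for a ball-end mill: with tool radius R and stepover s between adjacent passes, the scallop (cusp) height is h = R - sqrt(R^2 - (s/2)^2), approximately s^2/(8R) for small stepovers. A quick sketch with illustrative values (not the cutting data from the paper):

```python
import math

def scallop_height(tool_radius, stepover):
    """Cusp height left between adjacent ball-end milling passes."""
    return tool_radius - math.sqrt(tool_radius ** 2 - (stepover / 2) ** 2)

R = 5.0  # ball-end tool radius, mm
for s in (0.5, 1.0, 2.0):  # stepover, mm
    print(f"stepover {s} mm -> scallop {scallop_height(R, s) * 1000:.1f} um")
```

    Halving the stepover cuts the scallop height roughly by a factor of four, which is the trade-off a milling strategy makes between surface quality and machining time.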

  9. Simple Nutrition Screening Tool for Pediatric Inpatients.

    PubMed

    White, Melinda; Lawson, Karen; Ramsey, Rebecca; Dennis, Nicole; Hutchinson, Zoe; Soh, Xin Ying; Matsuyama, Misa; Doolan, Annabel; Todd, Alwyn; Elliott, Aoife; Bell, Kristie; Littlewood, Robyn

    2016-03-01

    Pediatric nutrition risk screening tools are not routinely implemented throughout many hospitals, despite prevalence studies demonstrating malnutrition is common in hospitalized children. Existing tools lack the simplicity of those used to assess nutrition risk in the adult population. This study reports the accuracy of a new, quick, and simple pediatric nutrition screening tool (PNST) designed to be used for pediatric inpatients. The pediatric Subjective Global Nutrition Assessment (SGNA) and anthropometric measures were used to develop and assess the validity of 4 simple nutrition screening questions comprising the PNST. Participants were pediatric inpatients in 2 tertiary pediatric hospitals and 1 regional hospital. Two affirmative answers to the PNST questions were found to maximize the specificity and sensitivity to the pediatric SGNA and body mass index (BMI) z scores for malnutrition in 295 patients. The PNST identified 37.6% of patients as being at nutrition risk, whereas the pediatric SGNA identified 34.2%. The sensitivity and specificity of the PNST compared with the pediatric SGNA were 77.8% and 82.1%, respectively. The sensitivity of the PNST at detecting patients with a BMI z score of less than -2 was 89.3%, and the specificity was 66.2%. Both the PNST and pediatric SGNA were relatively poor at detecting patients who were stunted or overweight, with the sensitivity and specificity being less than 69%. The PNST provides a sensitive, valid, and simpler alternative to existing pediatric nutrition screening tools such as Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP), Screening Tool Risk on Nutritional status and Growth (STRONGkids), and Paediatric Yorkhill Malnutrition Score (PYMS) to ensure the early detection of hospitalized children at nutrition risk. © 2014 American Society for Parenteral and Enteral Nutrition.
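Screening statistics like those reported for the PNST follow directly from a 2x2 table against the reference standard. A minimal sketch; the counts below are hypothetical, chosen only to illustrate the calculation, and are not the PNST study data:

```python
# Sensitivity and specificity of a screening tool against a reference
# standard, derived from a 2x2 contingency table (counts are hypothetical).

def screen_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, prevalence) for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among reference-positives
    specificity = tn / (tn + fp)   # true negatives among reference-negatives
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    return sensitivity, specificity, prevalence

sens, spec, prev = screen_metrics(tp=78, fp=18, fn=22, tn=82)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} prevalence={prev:.1%}")
```

The same arithmetic underlies the 77.8%/82.1% figures reported against the pediatric SGNA.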

  10. Effects of Problem Solving after Worked Example Study on Secondary School Children's Monitoring Accuracy

    ERIC Educational Resources Information Center

    Baars, Martine; van Gog, Tamara; de Bruin, Anique; Paas, Fred

    2017-01-01

    Monitoring accuracy, measured by judgements of learning (JOLs), has generally been found to be low to moderate, with students often displaying overconfidence, and JOLs of problem solving are no exception. Recently, primary school children's overconfidence was shown to diminish when they practised problem solving after studying worked examples. The…

  11. Accuracy Evaluation of a CE-Marked Glucometer System for Self-Monitoring of Blood Glucose With Three Reagent Lots Following ISO 15197:2013.

    PubMed

    Hehmke, Bernd; Berg, Sabine; Salzsieder, Eckhard

    2017-05-01

    Continuous standardized verification of the accuracy of blood glucose meter systems for self-monitoring after their introduction into the market is an important clinical tool to assure reliable performance of subsequently released lots of strips. Moreover, such published verification studies permit comparison of different blood glucose monitoring systems and, thus, are increasingly involved in the process of evidence-based purchase decision making.

  12. Improved imputation accuracy in Hispanic/Latino populations with larger and more diverse reference panels: applications in the Hispanic Community Health Study/Study of Latinos (HCHS/SOL)

    PubMed Central

    Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.

    2016-01-01

    Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520
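The accuracy assessment described above, correlating imputed allelic dosage with observed genotypes, can be sketched as a squared Pearson correlation. The dosage values below are invented for illustration; they are not HCHS/SOL data:

```python
# Imputation accuracy as the squared correlation (r^2) between imputed
# allelic dosages and observed hard genotype calls at a masked variant.

def dosage_r2(imputed, observed):
    n = len(imputed)
    mx, my = sum(imputed) / n, sum(observed) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(imputed, observed))
    vx = sum((x - mx) ** 2 for x in imputed)
    vy = sum((y - my) ** 2 for y in observed)
    return cov ** 2 / (vx * vy)

imputed  = [0.1, 0.9, 1.8, 0.2, 1.1, 2.0]    # fractional dosages in [0, 2]
observed = [0,   1,   2,   0,   1,   2]       # observed genotype calls
print(f"r^2 = {dosage_r2(imputed, observed):.3f}")
```

An r^2 near 1 indicates the imputed dosages track the observed genotypes closely, which is the quantity the study compares between Phase 1 and Phase 3 reference panels.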

  13. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

    Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart with a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the ‘Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with accuracy of optotype size ranging from 4.4–39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limit of agreement −0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion At the time of the study, we did not identify a Snellen visual acuity app that could predict a patient's standard Snellen visual acuity within one line. There was considerable variability in the optotype accuracy of apps. Further validation is required for assessment of acuity in patients with severe vision impairment. PMID:25931170
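The logMAR scale on which the two charts are compared can be computed directly from a Snellen fraction. A minimal sketch (a 6-m chart is assumed; one Snellen line corresponds to roughly 0.1 logMAR):

```python
# Converting a Snellen fraction to logMAR:
# logMAR = log10(denominator / numerator).

import math

def snellen_to_logmar(numerator, denominator):
    return math.log10(denominator / numerator)

print(snellen_to_logmar(6, 6))    # 6/6 vision -> 0.0 logMAR
print(snellen_to_logmar(6, 18))   # 6/18 -> ~0.48 logMAR
```

On this scale, the study's 0.276 logMAR mean difference in the worse-than-6/18 subgroup is close to three 0.1-logMAR lines.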

  14. Accuracy of WAAS-Enabled GPS-RF Warning Signals When Crossing a Terrestrial Geofence

    PubMed Central

    Grayson, Lindsay M.; Keefe, Robert F.; Tinkham, Wade T.; Eitel, Jan U. H.; Saralecos, Jarred D.; Smith, Alistair M. S.; Zimbelman, Eloise G.

    2016-01-01

    Geofences are virtual boundaries based on geographic coordinates. When combined with global positioning system (GPS), or more generally global navigation satellite system (GNSS) transmitters, geofences provide a powerful tool for monitoring the location and movements of objects of interest through proximity alarms. However, the accuracy of geofence alarms in GNSS-radio frequency (GNSS-RF) transmitter-receiver systems has not been tested. To test this, a cart with a GNSS-RF locator was run on a straight path in a balanced factorial experiment with three levels of cart speed, three angles of geofence intersection, three receiver distances from the track, and three replicates. Locator speed, receiver distance and geofence intersection angle all affected geofence alarm accuracy in an analysis of variance (p = 0.013, p = 2.58 × 10−8, and p = 0.0006, respectively), as did all treatment interactions (p < 0.0001). Slower locator speed, acute geofence intersection angle, and closest receiver distance were associated with reduced accuracy of geofence alerts. PMID:27322287
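In planar coordinates, the alarm logic described above reduces to testing whether the segment between two consecutive GNSS fixes crosses a geofence edge. A minimal sketch; the fix positions and fence geometry are invented, and projected (e.g., UTM) coordinates are assumed rather than raw latitude/longitude:

```python
# Geofence-crossing detection: alarm when the segment between two
# consecutive GNSS fixes strictly crosses a geofence edge.

def _ccw(a, b, c):
    # Signed area of triangle a-b-c: >0 counter-clockwise, <0 clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

fence_edge = ((0.0, 0.0), (0.0, 100.0))          # north-south geofence line
fix_prev, fix_now = (-5.0, 40.0), (3.0, 42.0)    # consecutive GNSS fixes
print(segments_cross(fix_prev, fix_now, *fence_edge))  # True -> raise alarm
```

Position error in either fix shifts the detected crossing, which is one way the locator speed and geometry effects reported above can degrade alarm accuracy.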

  16. Increasing Deception Detection Accuracy with Strategic Questioning

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.

    2010-01-01

    One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…

  17. Detecting Diseases in Medical Prescriptions Using Data Mining Tools and Combining Techniques.

    PubMed

    Teimouri, Mehdi; Farzadfar, Farshad; Soudi Alamdari, Mahsa; Hashemi-Meshkini, Amir; Adibi Alamdari, Parisa; Rezaei-Darzi, Ehsan; Varmaghani, Mehdi; Zeynalabedini, Aysan

    2016-01-01

    Data about the prevalence of communicable and non-communicable diseases, as one of the most important categories of epidemiological data, is used for interpreting the health status of communities. This study aims to calculate the prevalence of outpatient diseases through the characterization of outpatient prescriptions. The data used in this study were collected from 1412 prescriptions for various types of diseases, from which we focused on the identification of ten diseases. In this study, data mining tools are used to identify the diseases for which prescriptions were written. In order to evaluate the performance of these methods, we compare the results with a naïve method. Then, combining methods are used to improve the results. Results showed that the Support Vector Machine, with an accuracy of 95.32%, performed better than the other methods. The naïve method, with an accuracy of 67.71%, was about 20% worse than the nearest-neighbor method, which had the lowest accuracy among the classification algorithms. The results indicate that the implementation of data mining algorithms gave good performance in the characterization of outpatient diseases. These results can help to choose appropriate methods for the classification of prescriptions at larger scales.
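The comparison above, a learned classifier against a naïve baseline scored by accuracy, can be sketched in a few lines. The toy two-feature data and labels below are invented for illustration; they are not the prescription dataset, and the nearest-neighbor rule stands in for the study's classifiers:

```python
# A 1-nearest-neighbor classifier versus a naive majority-class baseline,
# both scored by accuracy on held-out points (toy data).

from collections import Counter

def nearest_neighbor_predict(train, labels, x):
    dists = [sum((a - b) ** 2 for a, b in zip(t, x)) for t in train]
    return labels[dists.index(min(dists))]

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

train = [(0, 0), (0, 1), (1, 0), (5, 5), (6, 5)]
labels = ["cold", "cold", "cold", "flu", "flu"]
test = [(1, 0), (5, 6), (0, 2), (6, 6)]
truth = ["cold", "flu", "cold", "flu"]

nn_preds = [nearest_neighbor_predict(train, labels, x) for x in test]
naive = Counter(labels).most_common(1)[0][0]   # always predict the majority class
print(accuracy(nn_preds, truth), accuracy([naive] * len(test), truth))
```

The gap between the two accuracies mirrors the study's comparison of its classifiers against the naïve method.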

  18. High Accuracy Liquid Propellant Slosh Predictions Using an Integrated CFD and Controls Analysis Interface

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  19. The methodological quality of three foundational law enforcement drug influence evaluation validation studies

    PubMed Central

    2013-01-01

    Background A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect’s driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Results Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. Conclusion The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice. PMID:24188398

  20. Measuring Sleep: Accuracy, Sensitivity, and Specificity of Wrist Actigraphy Compared to Polysomnography

    PubMed Central

    Marino, Miguel; Li, Yi; Rueschman, Michael N.; Winkelman, J. W.; Ellenbogen, J. M.; Solet, J. M.; Dulin, Hilary; Berkman, Lisa F.; Buxton, Orfeu M.

    2013-01-01

    Objectives: We validated actigraphy for detecting sleep and wakefulness versus polysomnography (PSG). Design: Actigraphy and polysomnography were simultaneously collected during sleep laboratory admissions. All studies involved 8.5 h time in bed, except for sleep restriction studies. Epochs (30-sec; n = 232,849) were characterized for sensitivity (actigraphy = sleep when PSG = sleep), specificity (actigraphy = wake when PSG = wake), and accuracy (total proportion correct); the amount of wakefulness after sleep onset (WASO) was also assessed. A generalized estimating equation (GEE) model included age, gender, insomnia diagnosis, and daytime/nighttime sleep timing factors. Setting: Controlled sleep laboratory conditions. Participants: Young and older adults, healthy or chronic primary insomniac (PI) patients, and daytime sleep of 23 night-workers (n = 77, age 35.0 ± 12.5, 30F, mean nights = 3.2). Interventions: N/A. Measurements and Results: Overall, sensitivity (0.965) and accuracy (0.863) were high, whereas specificity (0.329) was low; each was only slightly modified by gender, insomnia, day/night sleep timing (magnitude of change < 0.04). Increasing age slightly reduced specificity. Mean WASO/night was 49.1 min by PSG compared to 36.8 min/night by actigraphy (β = 0.81; CI = 0.42, 1.21), unbiased when WASO < 30 min/night, and overestimated when WASO > 30 min/night. Conclusions: This validation quantifies strengths and weaknesses of actigraphy as a tool measuring sleep in clinical and population studies. Overall, the participant-specific accuracy is relatively high, and for most participants, above 80%. We validate this finding across multiple nights and a variety of adults across much of the young to midlife years, in both men and women, in those with and without insomnia, and in 77 participants. We conclude that actigraphy is overall a useful and valid means for estimating total sleep time and wakefulness after sleep onset in field and workplace studies, with
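The WASO measure compared above can be computed the same way from either PSG or actigraphy epoch scores: count wake epochs after the first sleep epoch. A minimal sketch with an invented night of 30-second epochs:

```python
# Wakefulness after sleep onset (WASO) from 30-second epoch scores
# ('S' = sleep, 'W' = wake). Epoch labels are invented.

def waso_minutes(epochs, epoch_sec=30):
    """Minutes scored wake after the first sleep epoch."""
    try:
        onset = epochs.index("S")
    except ValueError:
        return 0.0            # never fell asleep: WASO undefined, report 0
    wake_epochs = epochs[onset:].count("W")
    return wake_epochs * epoch_sec / 60.0

night = ["W", "W", "S", "S", "W", "S", "W", "W", "S", "S"]
print(waso_minutes(night))    # 3 wake epochs after onset -> 1.5 minutes
```

The study's bias arises because actigraphy scores some of these post-onset wake epochs as sleep, undercounting WASO when true WASO is large.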

  1. The accuracy of pulse oximetry in emergency department patients with severe sepsis and septic shock: a retrospective cohort study.

    PubMed

    Wilson, Ben J; Cowan, Hamish J; Lord, Jason A; Zuege, Dan J; Zygun, David A

    2010-05-05

    Pulse oximetry is routinely used to continuously and noninvasively monitor arterial oxygen saturation (SaO2) in critically ill patients. Although pulse oximeter oxygen saturation (SpO2) has been studied in several patient populations, including the critically ill, its accuracy has never been studied in emergency department (ED) patients with severe sepsis and septic shock. Sepsis results in characteristic microcirculatory derangements that could theoretically affect pulse oximeter accuracy. The purposes of the present study were twofold: 1) to determine the accuracy of pulse oximetry relative to SaO2 obtained from ABG in ED patients with severe sepsis and septic shock, and 2) to assess the impact of specific physiologic factors on this accuracy. This analysis consisted of a retrospective cohort of 88 consecutive ED patients with severe sepsis who had a simultaneous arterial blood gas and an SpO2 value recorded. Adult ICU patients that were admitted from any Calgary Health Region adult ED with a pre-specified, sepsis-related admission diagnosis between October 1, 2005 and September 30, 2006, were identified. Accuracy (SpO2 - SaO2) was analyzed by the method of Bland and Altman. The effects of hypoxemia, acidosis, hyperlactatemia, anemia, and the use of vasoactive drugs on bias were determined. The cohort consisted of 88 subjects, with a mean age of 57 years (19 - 89). The mean difference (SpO2 - SaO2) was 2.75% and the standard deviation of the differences was 3.1%. Subgroup analysis demonstrated that hypoxemia (SaO2 < 90) significantly affected pulse oximeter accuracy. The mean difference was 4.9% in hypoxemic patients and 1.89% in non-hypoxemic patients (p < 0.004). In 50% (11/22) of cases in which SpO2 was in the 90-93% range the SaO2 was <90%. Though pulse oximeter accuracy was not affected by acidosis, hyperlactatemia, anemia or vasoactive drugs, these factors worsened precision. Pulse oximetry overestimates ABG-determined SaO2 by a mean of 2.75% in
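The "method of Bland and Altman" used above reports the bias as the mean of the paired differences (SpO2 - SaO2) and 95% limits of agreement as bias ± 1.96 × SD of the differences. A minimal sketch; the paired saturation values are invented, not the study cohort:

```python
# Bland-Altman bias and 95% limits of agreement for paired measurements.

import math

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

spo2 = [97, 95, 92, 99, 90, 94]   # pulse oximeter readings (invented)
sao2 = [94, 93, 90, 95, 86, 92]   # simultaneous ABG values (invented)
bias, (lo, hi) = bland_altman(spo2, sao2)
print(f"bias={bias:.2f}%  95% limits of agreement=({lo:.2f}, {hi:.2f})")
```

A positive bias, as in the study's 2.75%, means the oximeter reads higher than the ABG reference on average.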

  2. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
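The inverse-variance-weighted least-squares fit that the paper finds equivalent to the MLE can be sketched in closed form. In the slope method the log range-corrected signal is linear in range, S(r) = intercept - 2σr, so the extinction coefficient σ is -slope/2. The signal values and noise variances below are synthetic (here noise-free, so the fit is exact):

```python
# Inverse-variance-weighted least-squares line fit for the lidar slope
# method: fit S(r) = intercept + slope * r, then extinction = -slope / 2.

def weighted_linefit(r, s, var):
    w = [1.0 / v for v in var]               # inverse-variance weights
    sw = sum(w)
    mr = sum(wi * ri for wi, ri in zip(w, r)) / sw
    ms = sum(wi * si for wi, si in zip(w, s)) / sw
    slope = (sum(wi * (ri - mr) * (si - ms) for wi, ri, si in zip(w, r, s))
             / sum(wi * (ri - mr) ** 2 for wi, ri in zip(w, r)))
    return slope, ms - slope * mr

ranges = [0.5, 1.0, 1.5, 2.0, 2.5]          # km
signal = [1.9, 1.8, 1.7, 1.6, 1.5]          # noise-free: S = 2.0 - 0.2 * r
variances = [0.01, 0.01, 0.04, 0.04, 0.09]  # noisier at long range
slope, intercept = weighted_linefit(ranges, signal, variances)
print(f"extinction = {-slope / 2:.3f} per km, intercept = {intercept:.3f}")
```

With real noisy data, the weights down-weight the far-range samples, which is what makes this scheme more accurate than the unweighted fits the paper evaluates.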

  3. Performance characteristics of five triage tools for major incidents involving traumatic injuries to children.

    PubMed

    Price, C L; Brace-McDonnell, S J; Stallard, N; Bleetman, A; Maconochie, I; Perkins, G D

    2016-05-01

    Context Triage tools are an essential component of the emergency response to a major incident. Although fortunately rare, mass casualty incidents involving children are possible, which mandates reliable triage tools to determine the priority of treatment. To determine the performance characteristics of five major incident triage tools amongst paediatric casualties who have sustained traumatic injuries. Retrospective observational cohort study using data from 31,292 patients aged less than 16 years who sustained a traumatic injury. Data were obtained from the UK Trauma Audit and Research Network (TARN) database. Interventions Statistical evaluation of five triage tools (JumpSTART, START, CareFlight, Paediatric Triage Tape/Sieve and Triage Sort) to predict death or severe traumatic injury (injury severity score >15). Main outcome measures Performance characteristics of triage tools (sensitivity, specificity and level of agreement between triage tools) to identify patients at high risk of death or severe injury. Of the 31,292 cases, 1029 died (3.3%), 6842 (21.9%) had major trauma (defined by an injury severity score >15) and 14,711 (47%) were aged 8 years or younger. There was variation in the performance accuracy of the tools to predict major trauma or death (sensitivities ranging between 36.4 and 96.2%; specificities 66.0-89.8%). Performance characteristics varied with the age of the child. CareFlight had the best overall performance at predicting death, with the following sensitivity and specificity (95% CI) respectively: 95.3% (93.8-96.8) and 80.4% (80.0-80.9). JumpSTART was superior for the triaging of children under 8 years; sensitivity and specificity (95% CI) respectively: 86.3% (83.1-89.5) and 84.8% (84.2-85.5). The triage tools were generally better at identifying patients who would die than those with non-fatal severe injury. This statistical evaluation has demonstrated variability in the accuracy of triage tools at predicting outcomes for children who

  4. Cadastral Database Positional Accuracy Improvement

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates to the absolute position in a specific coordinate system and to the relation with neighbouring features. With the growth of spatial technologies, especially Geographical Information System (GIS) and Global Navigation Satellite System (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry; the improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly suitable for positional accuracy improvement of legacy spatial datasets.

  5. A HTML5 open source tool to conduct studies based on Libet’s clock paradigm

    PubMed Central

    Garaizar, Pablo; Cubillas, Carmelo P.; Matute, Helena

    2016-01-01

    Libet’s clock is a well-known procedure in experiments in psychology and neuroscience. Examples of its use include experiments exploring the subjective sense of agency, action-effect binding, and subjective timing of conscious decisions and perceptions. However, the technical details of the apparatus used to conduct these types of experiments are complex, and are rarely explained in sufficient detail as to guarantee an exact replication of the procedure. With this in mind, we developed Labclock Web, a web tool designed to conduct online and offline experiments using Libet’s clock. After describing its technical features, we explain how to configure specific experiments using this tool. Its degree of accuracy and precision in the presentation of stimuli has been technically validated, including the use of two cognitive experiments conducted with voluntary participants who performed the experiment both in our laboratory and via the Internet. Labclock Web is distributed without charge under a free software license (GPLv3) since one of our main objectives is to facilitate the replication of experiments and hence the advancement of knowledge in this area. PMID:27623167

  6. Diagnostic accuracy and measurement sensitivity of digital models for orthodontic purposes: A systematic review.

    PubMed

    Rossini, Gabriele; Parrini, Simone; Castroflorio, Tommaso; Deregibus, Andrea; Debernardi, Cesare L

    2016-02-01

    Our objective was to assess the accuracy, validity, and reliability of measurements obtained from virtual dental study models compared with those obtained from plaster models. PubMed, PubMed Central, National Library of Medicine Medline, Embase, Cochrane Central Register of Controlled Clinical trials, Web of Knowledge, Scopus, Google Scholar, and LILACs were searched from January 2000 to November 2014. A grading system described by the Swedish Council on Technology Assessment in Health Care and the Cochrane tool for risk of bias assessment were used to rate the methodologic quality of the articles. Thirty-five relevant articles were selected. The methodologic quality was high. No significant differences were observed for most of the studies in all the measured parameters, with the exception of the American Board of Orthodontics Objective Grading System. Digital models are as reliable as traditional plaster models, with high accuracy, reliability, and reproducibility. Landmark identification, rather than the measuring device or the software, appears to be the greatest limitation. Furthermore, with their advantages in terms of cost, time, and space required, digital models could be considered the new gold standard in current practice. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  7. Point-of-care wound visioning technology: Reproducibility and accuracy of a wound measurement app

    PubMed Central

    Anderson, John A. E.; Evans, Robyn; Woo, Kevin; Beland, Benjamin; Sasseville, Denis; Moreau, Linda

    2017-01-01

    Background Current wound assessment practices are lacking on several measures. For example, the most common method for measuring wound size is using a ruler, which has been demonstrated to be crude and inaccurate. An increase in periwound temperature is a classic sign of infection but skin temperature is not always measured during wound assessments. To address this, we have developed a smartphone application that enables non-contact wound surface area and temperature measurements. Here we evaluate the inter-rater reliability and accuracy of this novel point-of-care wound assessment tool. Methods and findings The wounds of 87 patients were measured using the Swift Wound app and a ruler. The skin surface temperature of 37 patients was also measured using an infrared FLIR™ camera integrated with the Swift Wound app and using the clinically accepted reference thermometer Exergen DermaTemp 1001. Accuracy measurements were determined by assessing differences in surface area measurements of 15 plastic wounds between a digital planimeter of known accuracy and the Swift Wound app. To evaluate the impact of training on the reproducibility of the Swift Wound app measurements, three novice raters with no wound care training, measured the length, width and area of 12 plastic model wounds using the app. High inter-rater reliabilities (ICC = 0.97–1.00) and high accuracies were obtained using the Swift Wound app across raters of different levels of training in wound care. The ruler method also yielded reliable wound measurements (ICC = 0.92–0.97), albeit lower than that of the Swift Wound app. Furthermore, there was no statistical difference between the temperature differences measured using the infrared camera and the clinically tested reference thermometer. Conclusions The Swift Wound app provides highly reliable and accurate wound measurements. The FLIR™ infrared camera integrated into the Swift Wound app provides skin temperature readings equivalent to the clinically

  8. Back to Anatomy: Improving Landmarking Accuracy of Clinical Procedures Using a Novel Approach to Procedural Teaching.

    PubMed

    Zeller, Michelle; Cristancho, Sayra; Mangel, Joy; Goldszmidt, Mark

    2015-06-01

    Many believe that knowledge of anatomy is essential for performing clinical procedures; however, unlike their surgical counterparts, internal medicine (IM) programs rarely incorporate anatomy review into procedural teaching. This study tested the hypothesis that an educational intervention focused on teaching relevant surface and underlying anatomy would result in improved bone marrow procedure landmarking accuracy. This was a preintervention-postintervention prospective study on landmarking accuracy of consenting IM residents attending their mandatory academic half-day. The intervention included an interactive video and visualization exercise; the video was developed specifically to teach the relevant underlying anatomy and includes views of live volunteers, cadavers, and skeletons. Thirty-one IM residents participated. At pretest, 48% (15/31) of residents landmarked accurately. Inaccuracy of pretest landmarking varied widely (n = 16, mean 20.06 mm; standard deviation 30.03 mm). At posttest, 74% (23/31) of residents accurately performed the procedure. McNemar test revealed a nonsignificant trend toward increased performance at posttest (P = 0.076; unadjusted odds for discordant pairs 3; 95% confidence interval 0.97-9.3). The Wilcoxon signed rank test demonstrated a significant difference between pre- and posttest accuracy in the 16 residents who were inaccurate at pretest (P = 0.004). No association was detected between participant baseline characteristics and pretest accuracy. This study demonstrates that residents who were initially inaccurate were able to significantly improve their landmarking skills by interacting with an educational tool emphasizing the relation between the surface and underlying anatomy. Our results support the use of basic anatomy in teaching bone marrow procedures. Results also support the proper use of video as an effective means for incorporating anatomy teaching around procedural skills.
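The McNemar test used above compares paired pre/post accuracy using only the discordant pairs (residents whose result changed). A two-sided exact version is a binomial test of b successes out of b + c trials with p = 0.5. The discordant counts below are hypothetical, chosen only to match a discordant-pair odds ratio of 3 like the one reported; they are not the study's exact split:

```python
# Exact (binomial) two-sided McNemar test on discordant pair counts.

from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value for discordant counts b and c."""
    n = b + c
    k = min(b, c)
    # One-tailed binomial probability of a result at least this extreme...
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    # ...doubled for a two-sided test, capped at 1.
    return min(1.0, 2 * tail)

# e.g. 9 residents improved and 3 declined (odds ratio 3, hypothetical split)
print(f"p = {mcnemar_exact(9, 3):.4f}")   # p = 0.1460
```

Note the reported p = 0.076 may come from a different (e.g., asymptotic or one-sided) variant of the test; the sketch shows only the exact two-sided computation.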

  9. A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services

    NASA Astrophysics Data System (ADS)

    Malinowski, Marcin; Kwiecień, Janusz

    2016-12-01

    Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It may be an alternative to differential measurements, where maintaining a connection with a single RTK station or a regional network of RTN reference stations is necessary; this situation is especially common in areas with poorly developed ground-station infrastructure. Much of the research conducted so far on the PPP technique has focused on processing entire-day observation sessions. This paper, however, presents a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent online services that compute PPP solutions: Automatic Precise Positioning Service (APPS), Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). The results indicate that measurements of at least two hours allow an absolute position to be determined with an accuracy of 2-4 cm. The impact of simultaneously positioning the three points of a test network on the horizontal distances and relative height differences between the measured triangle vertices was also evaluated. Distances and relative height differences between the points of the triangular test network, measured with a Leica TDRA6000 laser station, were adopted as references. The analyses show that measurement sessions of at least two hours can determine horizontal distances or height differences with an accuracy of 1-2 cm. Rapid products used in PPP calculations achieved coordinate accuracies close to those obtained with Final products.

  10. Confirmatory Tests for the Diagnosis of Primary Aldosteronism: A Prospective Diagnostic Accuracy Study.

    PubMed

    Song, Ying; Yang, Shumin; He, Wenwen; Hu, Jinbo; Cheng, Qingfeng; Wang, Yue; Luo, Ting; Ma, Linqiang; Zhen, Qianna; Zhang, Suhua; Mei, Mei; Wang, Zhihong; Qing, Hua; Bruemmer, Dennis; Peng, Bin; Li, Qifu

    2018-01-01

    The diagnosis of primary aldosteronism typically requires at least one confirmatory test. The fludrocortisone suppression test is generally accepted as a reliable confirmatory test, but it is cumbersome. Evidence from accuracy studies of the saline infusion test (SIT) and the captopril challenge test (CCT) has provided conflicting results. This prospective study aimed to evaluate the diagnostic accuracy of the SIT and CCT using the fludrocortisone suppression test as the reference standard. One hundred thirty-five patients diagnosed with primary aldosteronism and 101 patients diagnosed with essential hypertension who completed the 3 confirmatory tests were included in the diagnostic accuracy analysis. The areas under the receiver operating characteristic (ROC) curves of the CCT and SIT were 0.96 (95% confidence interval [CI], 0.92-0.98) and 0.96 (95% CI, 0.92-0.98), respectively, using post-test plasma aldosterone concentration (PAC) for diagnosis. However, the area under the ROC curve of the CCT decreased to 0.71 (95% CI, 0.65-0.77) when the PAC suppression percentage was used to diagnose primary aldosteronism. The optimal cutoff of PAC post-CCT was set at 11 ng/dL, resulting in a sensitivity of 0.90 (95% CI, 0.84-0.95) and a specificity of 0.90 (95% CI, 0.83-0.95), which were not significantly different from those of the SIT (with PAC post-SIT set at 8 ng/dL; sensitivity: 0.85 [95% CI, 0.78-0.91], P=0.192; specificity: 0.92 [95% CI, 0.85-0.97], P=0.551). In conclusion, both the CCT and SIT are accurate alternatives to the more complex fludrocortisone suppression test. Because the CCT is safe and much easier to perform, it may serve as a more feasible alternative. When interpreting the results of the CCT, PAC post-CCT is highly recommended. © 2017 American Heart Association, Inc.
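
    The reported AUC values have a direct probabilistic reading: the area under the ROC curve equals the probability that a randomly chosen case shows a higher post-test PAC than a randomly chosen control. The sketch below illustrates that equivalence and cutoff-based sensitivity/specificity; the PAC values are made up for illustration, not the study's data.

```python
def auc_mann_whitney(cases, controls):
    """AUC equals P(random case value > random control value), counting ties as 1/2."""
    wins = ties = 0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(cases) * len(controls))

def sens_spec_at_cutoff(cases, controls, cutoff):
    """Call a result 'positive' when the value is at or above the cutoff."""
    sensitivity = sum(v >= cutoff for v in cases) / len(cases)
    specificity = sum(v < cutoff for v in controls) / len(controls)
    return sensitivity, specificity

# Hypothetical post-CCT PAC values in ng/dL -- illustrative only.
pa_patients = [15.2, 12.8, 22.4, 9.5, 18.1, 30.2, 11.0, 14.7]  # primary aldosteronism
eh_patients = [6.1, 8.9, 5.4, 10.2, 7.3, 4.8, 9.9, 6.6]        # essential hypertension

auc = auc_mann_whitney(pa_patients, eh_patients)
sens, spec = sens_spec_at_cutoff(pa_patients, eh_patients, cutoff=11.0)
```

    Sweeping the cutoff trades sensitivity against specificity; the study's choice of 11 ng/dL post-CCT is the point that balanced the two at about 0.90 each.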

  11. A reference dataset for deformable image registration spatial accuracy evaluation using the COPDgene study archive

    NASA Astrophysics Data System (ADS)

    Castillo, Richard; Castillo, Edward; Fuentes, David; Ahmad, Moiz; Wood, Abbie M.; Ludwig, Michelle S.; Guerrero, Thomas

    2013-05-01

    Landmark point-pairs provide a strategy to assess deformable image registration (DIR) accuracy in terms of the spatial registration of the underlying anatomy depicted in medical images. In this study, we propose to augment a publicly available database (www.dir-lab.com) of medical images with large sets of manually identified anatomic feature pairs between breath-hold computed tomography (BH-CT) images for DIR spatial accuracy evaluation. Ten BH-CT image pairs were randomly selected from the COPDgene study cases. Each patient had received CT imaging of the entire thorax in the supine position at one-fourth dose normal expiration and maximum effort full dose inspiration. Using dedicated in-house software, an imaging expert manually identified large sets of anatomic feature pairs between images. Estimates of inter- and intra-observer spatial variation in feature localization were determined by repeat measurements of multiple observers over subsets of randomly selected features. In total, 7298 anatomic landmark features were manually paired between the 10 sets of images; the number of feature pairs per case ranged from 447 to 1172. Average 3D Euclidean landmark displacements varied substantially among cases, ranging from 12.29 (SD: 6.39) to 30.90 (SD: 14.05) mm. Repeat registration of uniformly sampled subsets of 150 landmarks for each case yielded estimates of observer localization error, which ranged on average from 0.58 (SD: 0.87) to 1.06 (SD: 2.38) mm per case. The additions to the online web database (www.dir-lab.com) described in this work will broaden the applicability of the reference data, providing a freely available common dataset for targeted critical evaluation of DIR spatial accuracy performance in multiple clinical settings. Estimates of observer variance in feature localization suggest consistent spatial accuracy for all observers across both the four-dimensional CT and COPDgene patient cohorts.
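
    The displacement statistics above are plain 3D Euclidean distances between corresponding landmark coordinates, summarized as a mean and standard deviation. A minimal sketch with hypothetical coordinates (not the dir-lab data):

```python
import math

def displacement_stats(point_pairs):
    """Mean and sample SD of 3D Euclidean distances between landmark point-pairs."""
    dists = [math.dist(p, q) for p, q in point_pairs]
    mean = sum(dists) / len(dists)
    sd = math.sqrt(sum((d - mean) ** 2 for d in dists) / (len(dists) - 1))
    return mean, sd

# Hypothetical expiration/inspiration landmark coordinates in mm.
pairs = [
    ((10.0, 20.0, 30.0), (12.0, 24.0, 26.0)),
    ((55.0, 60.0, 12.0), (50.0, 58.0, 20.0)),
    ((31.0, 14.0, 70.0), (33.0, 10.0, 66.0)),
]
mean_mm, sd_mm = displacement_stats(pairs)
```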

  12. The validity and accuracy of MRI arthrogram in the assessment of painful articular disorders of the hip.

    PubMed

    Rajeev, Aysha; Tuinebreijer, Wim; Mohamed, Abdalla; Newby, Mike

    2018-01-01

    The assessment of a patient with chronic hip pain can be challenging, and the differential diagnosis of intra-articular pathology causing hip pain is diverse. It includes conditions such as osteoarthritis, fracture, avascular necrosis, synovitis, loose bodies, labral tears, articular pathology and femoro-acetabular impingement. Magnetic resonance imaging (MRI) arthrography of the hip is now widely used for the diagnosis of articular pathology of the hip. A retrospective analysis of 113 patients who had an MRI arthrogram and subsequently underwent hip arthroscopy was included in the study. The MRI arthrogram was performed using gadolinium injection and reported by a single radiologist. The findings were then compared with those found at arthroscopy. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy and 95% confidence intervals were calculated for each pathology. Labral tear: sensitivity 84% (74.3-90.5), specificity 64% (40.7-82.8), PPV 91% (82.1-95.8), NPV 48% (29.5-67.5), accuracy 80%. Delamination: sensitivity 7% (0.8-22.1), specificity 98% (91.6-99.7), PPV 50% (6.8-93.2), NPV 74% (65.1-82.2), accuracy 39%. Chondral changes: sensitivity 25% (13.3-38.9), specificity 83% (71.3-91.1), PPV 52% (30.6-73.2), NPV 59% (48.0-69.2), accuracy 58%. Femoro-acetabular impingement (CAM deformity): sensitivity 34% (19.6-51.4), specificity 83% (72.2-90.4), PPV 50% (29.9-70.1), NPV 71% (60.6-80.5), accuracy 66%. Synovitis: sensitivity 11% (2.3-28.2), specificity 99% (93.6-100), PPV 75% (19.4-99.4), NPV 77% (68.1-84.6), accuracy 77%. We conclude that MRI arthrogram is a useful investigative tool for detecting labral tears and is also helpful in the diagnosis of femoro-acetabular impingement. However, for chondral changes, defects and cartilage delamination, its sensitivity and accuracy are low.
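
    All five reported metrics derive from one 2x2 table per pathology, with arthroscopy as the reference standard. A sketch of the arithmetic, using hypothetical counts chosen only for illustration (they total 113 patients, but they are not the study's raw table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy metrics from a 2x2 table against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical labral-tear counts: 74 true positives, 9 false positives,
# 14 false negatives, 16 true negatives.
labral = diagnostic_metrics(tp=74, fp=9, fn=14, tn=16)
```

    With these counts, sensitivity is 74/88 ≈ 0.84 and accuracy (74+16)/113 ≈ 0.80, in the same range the abstract reports for labral tears.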

  13. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also the model's implementation in software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - using random effects. Model performance is compared in nine meta-analytic scenarios reflecting combinations of three meta-analysis sizes (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model estimate sensitivity and specificity with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma et al., instead shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should inform implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  14. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, currently proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked when comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all these aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests to 9 well-known mapping tools, namely Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST), using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests, while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify their needs in order to choose the tool that provides the best results. PMID:23758764

  15. Integrated Wind Power Planning Tool

    NASA Astrophysics Data System (ADS)

    Rosgaard, M. H.; Giebel, G.; Nielsen, T. S.; Hahmann, A.; Sørensen, P.; Madsen, H.

    2012-04-01

    This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, with the working title "Integrated Wind Power Planning Tool". The project commenced October 1, 2011, and its goal is to integrate a numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long-term power system planning for future wind farms as well as short-term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. With regard to the latter, one such simulation tool has been developed at the Wind Energy Division, Risø DTU, intended for long-term power system planning. As part of the PSO project, the inferior NWP model used at present will be replaced by the state-of-the-art Weather Research & Forecasting (WRF) model. Furthermore, the integrated simulation tool will be improved so it can simultaneously handle 10-50 times more turbines than the present ~300, and additional atmospheric parameters will be included in the model. The WRF data will also serve as input for a statistical short-term prediction model to be developed in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. This integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies and the prevailing weather conditions defined by the WRF output. The output from the integrated prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision-making problem to be solved.
The high resolution of the WRF results loaded into the integrated prediction model will ensure a high accuracy data basis is available for use in the decision making process of the Danish

  16. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  17. The diagnostic accuracy of the MyDiagnostick to detect atrial fibrillation in primary care

    PubMed Central

    2014-01-01

    Background Atrial fibrillation is very common in people aged 65 or older. The condition increases the risk of death, congestive heart failure and thromboembolic events. Many patients with atrial fibrillation are asymptomatic, and a cerebrovascular accident (CVA) is often the first clinical presentation. Guidelines concerning the prevention of CVA recommend monitoring the heart rate in patients aged 65 or older. Recently, the MyDiagnostick (Applied Biomedical Systems BV, Maastricht, The Netherlands) was introduced as a new screening tool which might serve as an alternative to the less accurate pulse palpation. This study was designed to explore the diagnostic accuracy of the MyDiagnostick for the detection of atrial fibrillation. Methods A phase II diagnostic accuracy study in a convenience sample of 191 subjects recruited in primary care. The majority of participants were patients with a known history of atrial fibrillation (n = 161). Readings of the MyDiagnostick were compared with electrocardiographic recordings. Sensitivity and specificity, with their 95% confidence intervals, were calculated using 2x2 tables. Results A prevalence of 54% for an atrial fibrillation rhythm was found in the study population at the time of the study. A combination of three measurements with the MyDiagnostick for each patient showed a sensitivity of 94% (95% CI 87-98) and a specificity of 93% (95% CI 85-97). Conclusion The MyDiagnostick is an easy-to-use device that showed good diagnostic accuracy, with high sensitivity and specificity for atrial fibrillation in a convenience sample in primary care. Future research is needed to determine the place of the MyDiagnostick in possible screening or case-finding strategies for atrial fibrillation. PMID:24913608
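
    Confidence intervals such as "94% (95% CI 87-98)" are computed from the 2x2 counts with a binomial interval; the Wilson score interval is one common choice (the paper does not state which method it used). A sketch under that assumption:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion, e.g. sensitivity from a 2x2 table."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical counts consistent with the reported 94% sensitivity:
# 97 detected episodes out of 103 AF episodes.
lo, hi = wilson_ci(successes=97, n=103)
```

    For these counts the interval is roughly (0.88, 0.97), close to the reported 87-98 range.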

  18. Accuracy of the Broselow Tape in South Sudan, "The Hungriest Place on Earth".

    PubMed

    Clark, Melissa C; Lewis, Roger J; Fleischman, Ross J; Ogunniyi, Adedamola A; Patel, Dipesh S; Donaldson, Ross I

    2016-01-01

    The Broselow tape is a length-based tool used for the rapid estimation of pediatric weight and was developed to reduce dosage-related errors during emergencies. This study seeks to assess the accuracy of the Broselow tape and age-based formulas in predicting weights of South Sudanese children of varying nutritional status. This was a retrospective, cross-sectional study using data from existing acute malnutrition screening programs for children less than 5 years of age in South Sudan. Using anthropometric measurements, actual weights were compared with estimated weights from the Broselow tape and three age-based formulas. Mid-upper arm circumference was used to determine if each child was malnourished. Broselow accuracy was assessed by the percentage of measured weights falling into the same color zone as the predicted weight. For each method, accuracy was assessed by mean percentage error and percentage of predicted weights falling within 10% of actual weight. All data were analyzed by nutritional status subgroup. Only 10.7% of malnourished and 26.6% of nonmalnourished children had their actual weight fall within the Broselow color zone corresponding to their length. The Broselow method overestimated weight by a mean of 26.6% in malnourished children and 16.6% in nonmalnourished children (p < 0.001). Age-based formulas also overestimated weight, with mean errors ranging from 16.2% over actual weight (Advanced Pediatric Life Support in nonmalnourished children) to 70.9% over actual (Best Guess in severely malnourished children). The Broselow tape and age-based formulas selected for comparison were all markedly inaccurate in both the nonmalnourished and the malnourished populations studied, worsening with increasing malnourishment. Additional studies should explore appropriate methods of weight and dosage estimation for populations of low- and low-to-middle-income countries and regions with a high prevalence of malnutrition. © 2015 by the Society for Academic
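
    The two summary statistics used here, mean percentage error and the share of estimates falling within 10% of actual weight, are straightforward to compute; overestimation shows up as a positive mean error. A sketch with hypothetical weights (not the South Sudan screening data):

```python
def weight_estimation_stats(actual_kg, predicted_kg):
    """Signed mean percentage error and fraction of estimates within 10% of actual."""
    errors = [(p - a) / a for a, p in zip(actual_kg, predicted_kg)]
    mpe = sum(errors) / len(errors)
    within_10pct = sum(abs(e) <= 0.10 for e in errors) / len(errors)
    return mpe, within_10pct

# Hypothetical actual vs tape-estimated weights in kg.
actual    = [10.0, 12.0, 8.0, 15.0, 9.0]
estimated = [12.6, 12.5, 10.1, 15.9, 11.8]
mpe, within_10pct = weight_estimation_stats(actual, estimated)
```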

  19. Diagnostic accuracy of lymphoma established by fine-needle aspiration cytological biopsy

    NASA Astrophysics Data System (ADS)

    Delyuzar; Amir, Z.; Suryadi, D.

    2018-03-01

    Based on Globocan data from 2012, an estimated 14,495 Indonesians suffer from lymphoma, both Hodgkin’s lymphoma and non-Hodgkin’s lymphoma. Some specialties still doubt the accuracy of the cytological diagnosis obtained by fine needle aspiration biopsy. This study is a diagnostic test with a cross-sectional analytic design to assess the cytological diagnostic accuracy of fine needle aspiration in lymphoma. It was conducted in the Department of Anatomical Pathology, Faculty of Medicine USU, at Haji Adam Malik Hospital, Dr. Pirngadi Hospital, and a private clinic in Medan. Fine needle aspiration biopsy of peripheral lymph nodes was performed and the aspirate stained with Giemsa; when a cytological diagnosis of lymphoma was obtained, it was confirmed by histopathological examination. Cytological and histopathological findings were compared in a diagnostic test analysis and assessed for sensitivity and specificity. Cytological diagnosis of lymphoma showed 93.33% sensitivity and 92.31% specificity when confirmed by histopathological examination; the positive and negative predictive values were 96.55% and 85.71%, respectively. In conclusion, the cytology of fine needle aspiration biopsy is accurate enough to be used as a diagnostic tool, so fine needle aspiration biopsy examination is advisable when establishing a lymphoma diagnosis.

  20. Accuracy of implant surgery with surgical guide by inexperienced clinicians: an in vitro study

    PubMed Central

    Tanaka, Hideaki; Sasaki, Masanori; Ichimaru, Eiji; Naito, Yasushi; Matsushita, Yasuyuki; Koyano, Kiyoshi; Nakamura, Seiji

    2015-01-01

    Abstract Implant surgery with a surgical guide has been introduced with the aim of improving implant position. The surgery might be considered easy even for inexperienced clinicians because of the simplicity of its steps; however, residual risks remain that can result in postoperative complications. The aim of this study was to assess the accuracy of implant surgery with a surgical guide performed by inexperienced clinicians in vitro. After preoperative computed tomographies (CTs) of five artificial models of unilateral free‐end edentulism with scan templates, five surgical guides were fabricated from the templates. Following virtual planning, 10 implants were placed in the 45 and 47 regions by five residents without any placement experience. All drillings and placements were performed using the surgical guides. After postoperative CTs, deviations between virtual and actual implant positions were determined by overlaying the pre- and postoperative CT data. The angle displacement of the implant axis in the 47 region was significantly larger than that in the 45 region (P = 0.031). The 3D offset of the implant base in the 47 region was significantly larger than that in the 45 region (P = 0.002). For the distal/apical directions, displacements of the base in the 47 region were significantly larger than those in the 45 region (P = 0.004 and P = 0.003, respectively). The 3D offset of the implant tip in the 47 region was significantly larger than that in the 45 region (P = 0.003). For the distal/apical directions, displacements of the tip in the 47 region were significantly larger than those in the 45 region (P = 0.002 and P = 0.003, respectively). Within the limitations of this in vitro study, these accuracy data for implant surgery with a surgical guide should be informative for further studies, because in vitro studies should be performed in advance of retro/prospective studies to avoid unnecessary burden on patients. A comparison of the accuracy in this in vitro model between by

  1. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., measurement accuracy, and cut-off. 53.53 Section 53.53 Protection of Environment ENVIRONMENTAL PROTECTION..., measurement accuracy, and cut-off. (a) Overview. This test procedure is designed to evaluate a candidate... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The...

  2. SU-F-J-160: Clinical Evaluation of Targeting Accuracy in Radiosurgery Using Tractography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juh, R; Han, J; Kim, C

    Purpose: Focal radiosurgery is a common treatment modality for trigeminal neuralgia (TN), a neuropathic facial pain condition. Assessment of treatment effectiveness is primarily clinical, given the paucity of investigational tools to assess trigeminal nerve changes. The efficiency of radiosurgery is related to its highly precise targeting. We assessed clinically the targeting accuracy of radiosurgery with Gamma Knife. We hypothesized that trigeminal tractography provides more information than 2D MR imaging, allowing detection of unique, focal changes in the target area after radiosurgery. Methods: Sixteen TN patients (2 females, 4 males, average age 65.3 years) treated with Gamma Knife radiosurgery, 40 Gy/50% isodose line, underwent 1.5-Tesla MR trigeminal nerve tractography. Target accuracy was assessed from the deviation of the target coordinates from the center of enhancement on post-treatment MRI. The radiation dose delivered at the borders of contrast enhancement was evaluated. Results: The median deviation of the coordinates between the intended target and the center of contrast enhancement was within 1 mm. The radiation doses fitting within the borders of the contrast enhancement of the target ranged from 37.5 to 40 Gy. Trigeminal tractography accurately detected the radiosurgical target. Radiosurgery resulted in a 47% drop in FA values at the target with no significant change in FA outside the target, suggesting that radiosurgery primarily affects myelin. Tractography was more sensitive, since FA changes were detected regardless of trigeminal nerve enhancement. Conclusion: The median deviation found in clinical assessment of Gamma Knife treatment for TN is low and compatible with its high rate of efficiency. DTI parameters accurately detect the effects of focal radiosurgery on the trigeminal nerve, serving as an in vivo imaging tool to study TN. This study is a proof of principle for further assessment of DTI parameters to understand the pathophysiology of TN and

  3. Investigating the spatial accuracy of CBCT-guided cranial radiosurgery: A phantom end-to-end test study.

    PubMed

    Calvo-Ortega, Juan-Francisco; Hermida-López, Marcelino; Moragues-Femenía, Sandra; Pozo-Massó, Miquel; Casals-Farran, Joan

    2017-03-01

    To evaluate the spatial accuracy of frameless cone-beam computed tomography (CBCT)-guided cranial stereotactic radiosurgery (SRS) using an end-to-end (E2E) phantom test methodology. Five clinical SRS plans were mapped to an acrylic phantom containing a radiochromic film. The resulting phantom-based plans (E2E plans) were delivered four times. The phantom was set up on the treatment table with intentional misalignments, and CBCT imaging was used to align it prior to E2E plan delivery. Comparisons (global gamma analysis) of the planned and delivered dose to the film were performed using commercial triple-channel film dosimetry software. The distance-to-agreement necessary to achieve a 95% gamma passing rate (DTA95) for a fixed 3% dose difference provided an estimate of the spatial accuracy of CBCT-guided SRS. Systematic (Σ) and random (σ) error components, as well as 95% confidence levels, were derived for the DTA95 metric. The overall systematic spatial accuracy averaged over all tests was 1.4 mm (SD: 0.2 mm), with a corresponding 95% confidence level of 1.8 mm. The systematic (Σ) and random (σ) spatial components of the accuracy derived from the E2E tests were 0.2 mm and 0.8 mm, respectively. The E2E methodology used in this study allowed an estimation of the spatial accuracy of our CBCT-guided SRS procedure. Consequently, a PTV margin of 2.0 mm is currently used in our department. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
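
    The gamma analysis referenced here combines a dose-difference criterion (3%) with a distance-to-agreement (DTA) criterion into a single pass/fail index per point; a point passes when gamma <= 1. A minimal 1D sketch under those assumptions (the commercial software performs this in 2D on the film plane; the profiles below are invented, not the study's film data):

```python
import math

def gamma_values(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=1.0):
    """Global 1D gamma: for each reference point, the minimum over measured points
    of sqrt((dose difference / tolerance)^2 + (spatial distance / DTA)^2).
    The dose tolerance is a fraction of the global maximum reference dose."""
    d_norm = dose_tol * max(ref)
    out = []
    for i, dr in enumerate(ref):
        out.append(min(
            math.sqrt(((dm - dr) / d_norm) ** 2
                      + ((j - i) * spacing_mm / dta_mm) ** 2)
            for j, dm in enumerate(meas)
        ))
    return out

def passing_rate(gammas):
    """Fraction of points with gamma <= 1."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

# Hypothetical dose profiles sampled every 0.5 mm.
ref = [0.2, 0.5, 0.9, 1.0, 0.9, 0.5, 0.2]
rate_identical = passing_rate(gamma_values(ref, ref, spacing_mm=0.5))
rate_hot = passing_rate(gamma_values(ref, [d * 1.2 for d in ref], spacing_mm=0.5))
```

    An identical measured profile passes everywhere, while a uniform 20% overdose exceeds the 3% tolerance faster than the DTA search can compensate, so its passing rate drops below 100%.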

  4. BurnCase 3D software validation study: Burn size measurement accuracy and inter-rater reliability.

    PubMed

    Parvizi, Daryousch; Giretzlehner, Michael; Wurzer, Paul; Klein, Limor Dinur; Shoham, Yaron; Bohanon, Fredrick J; Haller, Herbert L; Tuca, Alexandru; Branski, Ludwik K; Lumenta, David B; Herndon, David N; Kamolz, Lars-P

    2016-03-01

    The aim of this study was to compare the accuracy of burn size estimation using the computer-assisted software BurnCase 3D (RISC Software GmbH, Hagenberg, Austria) with that of a 2D scan, considered to represent the actual burn size. Thirty artificial burn areas were pre-planned and prepared on three mannequins (one child, one female, and one male). Five trained physicians (raters) were asked to assess the size of all wound areas using BurnCase 3D software. The results were then compared with the real wound areas, as determined by 2D planimetry imaging. To examine inter-rater reliability, we performed an intraclass correlation analysis with a 95% confidence interval. The mean wound area estimations of the five raters using BurnCase 3D were in total 20.7±0.9% for the child, 27.2±1.5% for the female and 16.5±0.1% for the male mannequin. Our analysis showed relative overestimations of 0.4%, 2.8% and 1.5% for the child, female and male mannequins respectively, compared to the 2D scan. The intraclass correlation between the single raters for the mean percentage of the artificial burn areas was 98.6%. There was also a high intraclass correlation between the single raters and the 2D scan. BurnCase 3D is a valid and reliable tool for the determination of total body surface area burned in standard models. Further clinical studies including different pediatric and overweight adult mannequins are warranted. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  5. Social Networking Tools and Teacher Education Learning Communities: A Case Study

    ERIC Educational Resources Information Center

    Poulin, Michael T.

    2014-01-01

    Social networking tools have become an integral part of a pre-service teacher's educational experience. As a result, the educational value of social networking tools in teacher preparation programs must be examined. The specific problem addressed in this study is that the role of social networking tools in teacher education learning communities…

  6. BioLemmatizer: a lemmatization tool for morphological processing of biomedical text

    PubMed Central

    2012-01-01

    Background The wide variety of morphological variants of domain-specific technical terms contributes to the complexity of performing natural language processing of the scientific literature related to molecular biology. For morphological analysis of these texts, lemmatization has been actively applied in recent biomedical research. Results In this work, we developed a domain-specific lemmatization tool, BioLemmatizer, for the morphological analysis of biomedical literature. The tool focuses on the inflectional morphology of English and is based on the general English lemmatization tool MorphAdorner. The BioLemmatizer is further tailored to the biological domain through incorporation of several published lexical resources. It retrieves lemmas based on the use of a word lexicon, and defines a set of rules that transform a word into a lemma if it is not encountered in the lexicon. An innovative aspect of the BioLemmatizer is the use of a hierarchical strategy for searching the lexicon, which enables the discovery of the correct lemma even if the input Part-of-Speech information is inaccurate. The BioLemmatizer achieves an accuracy of 97.5% in lemmatizing an evaluation set prepared from the CRAFT corpus, a collection of full-text biomedical articles, and an accuracy of 97.6% on the LLL05 corpus. The contribution of the BioLemmatizer to accuracy improvement of a practical information extraction task is further demonstrated when it is used as a component in a biomedical text mining system. Conclusions The BioLemmatizer outperforms other tools when compared with eight existing lemmatizers. The BioLemmatizer is released as open source software and can be downloaded from http://biolemmatizer.sourceforge.net. PMID:22464129
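
    The lookup strategy described, exact lexicon match first, then a POS-agnostic lexicon pass, then suffix-transformation rules, can be sketched as follows. The lexicon entries and rules below are invented for illustration; they are not BioLemmatizer's actual lexical resources.

```python
# Hypothetical (word, POS) -> lemma lexicon.
LEXICON = {
    ("analyses", "NNS"): "analysis",
    ("bound", "VBD"): "bind",
    ("mice", "NNS"): "mouse",
}

# (suffix, replacement) rules, tried in order when the lexicon has no entry.
RULES = [("ies", "y"), ("sses", "ss"), ("s", "")]

def lemmatize(word, pos):
    # 1. Exact lexicon lookup using word + Part-of-Speech.
    if (word, pos) in LEXICON:
        return LEXICON[(word, pos)]
    # 2. Hierarchical fallback: lexicon lookup ignoring POS,
    #    so an inaccurate tag can still yield the correct lemma.
    for (w, _), lemma in LEXICON.items():
        if w == word:
            return lemma
    # 3. Rule-based suffix transformation for words absent from the lexicon.
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            return word[: -len(suffix)] + replacement
    return word
```

    Step 2 is what makes the strategy robust to tagging errors: "mice" tagged as a verb still resolves to "mouse" before any suffix rule can mangle it.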

  7. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" system (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower face) PA of each facial scanner. The 3D accuracy obtained for facial deformities was 0.58±0.11 mm for stereophotography and 0.57±0.07 mm for the structured light facial scanner. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinical use.

  8. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107

  9. Dietary Adherence Monitoring Tool for Free-living, Controlled Feeding Studies

    USDA-ARS?s Scientific Manuscript database

    Objective: To devise a dietary adherence monitoring tool for use in controlled human feeding trials involving free-living study participants. Methods: A scoring tool was devised to measure and track dietary adherence for an 8-wk randomized trial evaluating the effects of two different dietary patter...

  10. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    NASA Astrophysics Data System (ADS)

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-05-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.

  11. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp z-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
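The "arbitrary frequency resolution" idea can be sketched by evaluating the finite Fourier transform at any desired frequencies by direct summation. This shows only the end result: the method in the record gains accuracy through cubic interpolation of the samples and speed through the chirp z-transform, both of which are omitted here, and the signal is a made-up test case.

```python
import numpy as np

def finite_fourier(x, dt, freqs):
    """Approximate the finite Fourier transform of sampled data at
    arbitrary frequencies by rectangle-rule summation:
    X(f) ~ sum_k x_k * exp(-2*pi*i*f*t_k) * dt."""
    t = np.arange(len(x)) * dt
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
                     for f in freqs])

# A 5 Hz sine sampled at 100 Hz should peak near f = 5
fs, dur = 100.0, 2.0
t = np.arange(0.0, dur, 1.0 / fs)
x = np.sin(2.0 * np.pi * 5.0 * t)
freqs = np.linspace(4.0, 6.0, 41)   # 0.05 Hz grid, independent of fs
X = finite_fourier(x, 1.0 / fs, freqs)
print(freqs[np.argmax(np.abs(X))])
```

Unlike the FFT, the frequency grid here is decoupled from the record length, which is what makes fine resolution around a feature of interest possible; the chirp z-transform computes the same values in O(N log N).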

  12. Development of Grammatical Accuracy in English-Speaking Children with Cochlear Implants: A Longitudinal Study

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; Spencer, Linda J.

    2017-01-01

    Purpose: We sought to evaluate the development of grammatical accuracy in English-speaking children with cochlear implants (CIs) over a 3-year span. Method: Ten children who received CIs before age 30 months participated in this study at 3, 4, and 5 years postimplantation. For the purpose of comparison, 10 children each at ages 3, 4, and 5 years…

  13. Photogrammetric accuracy measurements of head holder systems used for fractionated radiotherapy.

    PubMed

    Menke, M; Hirschfeld, F; Mack, T; Pastyr, O; Sturm, V; Schlegel, W

    1994-07-30

    We describe how stereo photogrammetry can be used to determine immobilization and repositioning accuracies of head holder systems used for fractionated radiotherapy of intracranial lesions. The apparatus consists of two video cameras controlled by a personal computer and a bite-block-based landmark system. Position and spatial orientation of the landmarks are monitored by the cameras and processed for the real-time calculation of a target point's actual position relative to its initializing position. The target's position is assumed to be invariant with respect to the landmark system. We performed two series of 30 correlated head motion measurements on two test persons. One of the series was done with a thermoplastic device, and the other with a cast device developed for stereotactic treatment at the German Cancer Research Center. Immobilization and repositioning accuracies were determined with respect to a target point situated near the base of the skull. The repositioning accuracies were described in terms of the distributions of the mean displacements of the single motion measurements. Movements of the target on the order of 0.05 mm caused by breathing could be detected with a maximum resolution in time of 12 ms. The data derived from the investigation of the two test persons indicated similar immobilization accuracies for the two devices, but the repositioning errors were larger for the thermoplastic device than for the cast device. Apart from this, we found that for the thermoplastic mask the lateral repositioning error depended on the order in which the mask was closed. The photogrammetric apparatus is a versatile tool for accuracy measurements of head holder devices used for fractionated radiotherapy.

  14. SU-E-E-02: An Excel-Based Study Tool for ABR-Style Exams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cline, K; Stanley, D; Defoor, D

    2015-06-15

    Purpose: As the landscape of learning and testing shifts toward a computer-based environment, a replacement for paper-based methods of studying is desirable. Using Microsoft Excel, a study tool was developed that allows the user to populate multiple-choice questions and then generate an interactive quiz session to answer them. Methods: The code for the tool was written using Microsoft Excel Visual Basic for Applications with the intent that this tool could be implemented by any institution with Excel. The base tool is a template with a setup macro, which builds out the structure based on the user's input. Once the framework is built, the user can input sets of multiple-choice questions, answer choices, and even add figures. The tool can be run in random-question or sequential-question mode for single or multiple courses of study. The interactive session allows the user to select answer choices and immediate feedback is provided. Once the user is finished studying, the tool records the day's progress by reporting progress statistics useful for trending. Results: Six doctoral students at UTHSCSA have used this tool for the past two months to study for their qualifying exam, which is similar in format and content to the American Board of Radiology (ABR) Therapeutic Part II exam. The students collaborated to create a repository of questions, met weekly to go over these questions, and then used the tool to prepare for their exam. Conclusion: The study tool has provided an effective and efficient way for students to collaborate and be held accountable for exam preparation. The ease of use and familiarity of Excel are important factors for the tool's use. There are software packages to create similar question banks, but this study tool has no additional cost for those who already have Excel. The study tool will be made openly available.

  15. The remote sensing of aquatic macrophytes Part 1: Color-infrared aerial photography as a tool for identification and mapping of littoral vegetation. Part 2: Aerial photography as a quantitative tool for the investigation of aquatic ecosystems. [Lake Wingra, Wisconsin

    NASA Technical Reports Server (NTRS)

    Gustafson, T. D.; Adams, M. S.

    1973-01-01

    Research was initiated to use aerial photography as an investigative tool in studies that are part of an intensive aquatic ecosystem research effort at Lake Wingra, Madison, Wisconsin. It is anticipated that photographic techniques would supply information about the growth and distribution of littoral macrophytes with efficiency and accuracy greater than conventional methods.

  16. English Verb Accuracy of Bilingual Cantonese-English Preschoolers.

    PubMed

    Rezzonico, Stefano; Goldberg, Ahuva; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi

    2017-07-26

    Knowledge of verb development in typically developing bilingual preschoolers may inform clinicians about verb accuracy rates during the 1st 2 years of English instruction. This study aimed to investigate tensed verb accuracy in 2 assessment contexts in 4- and 5-year-old Cantonese-English bilingual preschoolers. The sample included 47 Cantonese-English bilinguals enrolled in English preschools. Half of the children were in their 1st 4 months of English language exposure, and half had completed 1 year and 4 months of exposure to English. Data were obtained from the Test of Early Grammatical Impairment (Rice & Wexler, 2001) and from a narrative generated in English. By the 2nd year of formal exposure to English, children in the present study approximated 33% accuracy of tensed verbs in a formal testing context versus 61% in a narrative context. The use of the English verb BE approximated mastery. Predictors of English third-person singular verb accuracy were task, grade, English expressive vocabulary, and lemma frequency. Verb tense accuracy was low across both groups, but a precocious mastery of BE was observed. The results of the present study suggest that speech-language pathologists may consider, in addition to an elicitation task, evaluating the use of verbs during narratives in bilingual Cantonese-English children.

  17. Dissociating Appraisals of Accuracy and Recollection in Autobiographical Remembering

    ERIC Educational Resources Information Center

    Scoboria, Alan; Pascal, Lisa

    2016-01-01

    Recent studies of metamemory appraisals implicated in autobiographical remembering have established distinct roles for judgments of occurrence, recollection, and accuracy for past events. In studies involving everyday remembering, measures of recollection and accuracy correlate highly (>.85). Thus although their measures are structurally…

  18. Podcasts as Tools in Introductory Environmental Studies

    PubMed Central

    Vatovec, Christine; Balser, Teri

    2009-01-01

    Technological tools have increasingly become a part of the college classroom, often appealing to teachers because of their potential to increase student engagement with course materials. Podcasts in particular have gained popularity as tools to better inform students by providing access to lectures outside of the classroom. In this paper, we argue that educators should expand course materials to include prepublished podcasts to engage students with both course topics and a broader skill set for evaluating readily available media. We present a pre- and postassignment survey evaluation assessing student preferences for using podcasts and the ability of a podcast assignment to support learning objectives in an introductory environmental studies course. Overall, students reported that the podcasts were useful tools for learning, easy to use, and increased their understanding of course topics. However, students also provided insightful comments on visual versus aural learning styles, leading us to recommend assigning video podcasts or providing text-based transcripts along with audio podcasts. A qualitative analysis of survey data provides evidence that the podcast assignment supported the course learning objective for students to demonstrate critical evaluation of media messages. Finally, we provide recommendations for selecting published podcasts and designing podcast assignments. PMID:23653686

  19. Accuracy assessment of minimum control points for UAV photography and georeferencing

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Procopiou, E.; Stavrou, G.; Gregoriou, M.

    2013-08-01

    In recent years, Autonomous Unmanned Aerial Vehicles (AUAVs) have become popular among researchers across disciplines because they combine many advantages. One major application is monitoring and mapping. Their ability to fly beyond line of sight autonomously, collecting data over large areas whenever and wherever needed, makes them an excellent platform for monitoring hazardous areas or disasters. In both cases rapid mapping is needed while human access is not always possible. Indeed, current automatic processing of aerial photos using photogrammetry and computer vision algorithms allows for rapid orthophotomap production and Digital Surface Model (DSM) generation, as tools for monitoring and damage assessment. In such cases, control point measurement using GPS is either impossible, time consuming, or costly. This work investigates the accuracies that can be attained using few or no control points over areas of one square kilometer, in two test sites: a typical block and a corridor survey. On-board GPS data logged during the AUAV's flight are used for direct georeferencing, while ground check points are used for evaluation. In addition, various control point layouts are tested using bundle adjustment for accuracy evaluation. Results indicate that it is possible to use on-board single-frequency GPS for direct georeferencing in cases of disaster management, areas without easy access, or even featureless areas. Due to the large number of tie points in the bundle adjustment, horizontal accuracy requirements can be met with a rather small number of control points, but vertical accuracy requirements may not.

  20. Influence of Pedometer Position on Pedometer Accuracy at Various Walking Speeds: A Comparative Study

    PubMed Central

    Lovis, Christian

    2016-01-01

    Background Demographic growth in conjunction with the rise of chronic diseases is increasing the pressure on health care systems in most OECD countries. Physical activity is known to be an essential factor in improving or maintaining good health. Walking is especially recommended, as it is an activity that can easily be performed by most people without constraints. Pedometers have been extensively used as an incentive to motivate people to become more active. However, a recognized problem with these devices is their diminishing accuracy associated with decreased walking speed. The arrival on the consumer market of new devices, worn indifferently either at the waist, wrist, or as a necklace, gives rise to new questions regarding their accuracy at these different positions. Objective Our objective was to assess the performance of 4 pedometers (iHealth activity monitor, Withings Pulse O2, Misfit Shine, and Garmin vívofit) and compare their accuracy according to their position worn, and at various walking speeds. Methods We conducted this study in a controlled environment with 21 healthy adults required to walk 100 m at 3 different paces (0.4 m/s, 0.6 m/s, and 0.8 m/s) regulated by means of a string attached between their legs at the level of their ankles and a metronome ticking the cadence. To obtain baseline values, we asked the participants to walk 200 m at their own pace. Results A decrease of accuracy was positively correlated with reduced speed for all pedometers (12% mean error at self-selected pace, 27% mean error at 0.8 m/s, 52% mean error at 0.6 m/s, and 76% mean error at 0.4 m/s). Although the position of the pedometer on the person did not significantly influence its accuracy, some interesting tendencies can be highlighted in 2 settings: (1) positioning the pedometer at the waist at a speed greater than 0.8 m/s or as a necklace at preferred speed tended to produce lower mean errors than at the wrist position; and (2) at a slow speed (0.4 m/s), pedometers

  1. Upgrade Summer Severe Weather Tool

    NASA Technical Reports Server (NTRS)

    Watson, Leela

    2011-01-01

    The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, apply statistical logistic regression analysis to the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed the possibility of providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.

  2. Using the red/yellow/green discharge tool to improve the timeliness of hospital discharges.

    PubMed

    Mathews, Kusum S; Corso, Philip; Bacon, Sandra; Jenq, Grace Y

    2014-06-01

    As part of Yale-New Haven Hospital (Connecticut)'s Safe Patient Flow Initiative, the physician leadership developed the Red/Yellow/Green (RYG) Discharge Tool, an electronic medical record-based prompt to identify likelihood of patients' next-day discharge: green (very likely), yellow (possibly), and red (unlikely). The tool's purpose was to enhance communication with nursing/care coordination and trigger earlier discharge steps for patients identified as "green" or "yellow." Data on discharge assignments, discharge dates/times, and team designation were collected for all adult medicine patients discharged in October-December 2009 (Study Period 1) and October-December 2011 (Study Period 2), between which the tool's placement changed from the sign-out note to the daily progress note. In Study Period 1, 75.9% of the patients had discharge assignments, compared with 90.8% in Period 2 (p < .001). The overall 11 A.M. discharge rate improved from 10.4% to 21.2% from 2007 to 2011. "Green" patients were more likely to be discharged before 11 A.M. than "yellow" or "red" patients (p < .001). Patients with RYG assignments discharged by 11 A.M. had a lower length of stay than those without assignments and did not have an associated increased risk of readmission. Discharge prediction accuracy worsened after the change in placement, decreasing from 75.1% to 59.1% for "green" patients (p < .001), and from 34.5% to 29.2% (p < .001) for "yellow" patients. In both periods, hospitalists were more accurate than house staff in discharge predictions, suggesting that education and/or experience may contribute to discharge assignment. The RYG Discharge Tool helped facilitate earlier discharges, but accuracy depends on placement in daily work flow and experience.

  3. Who Should Mark What? A Study of Factors Affecting Marking Accuracy in a Biology Examination

    ERIC Educational Resources Information Center

    Suto, Irenka; Nadas, Rita; Bell, John

    2011-01-01

    Accurate marking is crucial to the reliability and validity of public examinations, in England and internationally. Factors contributing to accuracy have been conceptualised as affecting either marking task demands or markers' personal expertise. The aim of this empirical study was to develop this conceptualisation through investigating the…

  4. Researchermap: a tool for visualizing author locations using Google maps.

    PubMed

    Rastegar-Mojarad, Majid; Bales, Michael E; Yu, Hong

    2013-01-01

    We present ResearcherMap, a tool to visualize the locations of authors of scholarly papers. In response to a query, the system returns a map of author locations. To develop the system, we first populated a database of author locations, geocoding institution locations for all available institutional affiliation data in our database. The database includes all authors of Medline papers from 1990 to 2012. We conducted a formative heuristic usability evaluation of the system and measured the system's accuracy and performance. The system identifies the correct address with 97.5% accuracy.

  5. A CT-based software tool for evaluating compensator quality in passively scattered proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zhang, Lifei; Dong, Lei; Sahoo, Narayan; Gillin, Michael T.; Zhu, X. Ronald

    2010-11-01

    We have developed a quantitative computed tomography (CT)-based quality assurance (QA) tool for evaluating the accuracy of manufactured compensators used in passively scattered proton therapy. The thickness of a manufactured compensator was measured from its CT images and compared with the planned thickness defined by the treatment planning system. The difference between the measured and planned thicknesses was calculated with use of the Euclidean distance transformation and the kd-tree search method. Compensator accuracy was evaluated by examining several parameters including mean distance, maximum distance, global thickness error and central axis shifts. Two rectangular phantoms were used to validate the performance of the QA tool. Nine patients and 20 compensators were included in this study. We found that mean distances, global thickness errors and central axis shifts were all within 1 mm for all compensators studied, with maximum distances ranging from 1.1 to 3.8 mm. Although all compensators passed manual verification at selected points, about 5% of the pixels still had maximum distances of >2 mm, most of which correlated with large depth gradients. The correlation between the mean depth gradient of the compensator and the percentage of pixels with mean distance <1 mm is -0.93 with p < 0.001, which suggests that the mean depth gradient is a good indicator of compensator complexity. These results demonstrate that the CT-based compensator QA tool can be used to quantitatively evaluate manufactured compensators.
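The kd-tree comparison step described above can be sketched as follows: for every point sampled on the measured (CT-derived) compensator surface, find the nearest point on the planned surface and summarize the deviations. The tolerance values and all point data below are illustrative assumptions, not the paper's criteria.

```python
import numpy as np
from scipy.spatial import cKDTree

def compensator_qa(measured_pts, planned_pts, tol_mean=1.0, tol_max=2.0):
    """Nearest-neighbor deviation summary between two surface samplings.
    Tolerances are in mm and purely illustrative."""
    tree = cKDTree(planned_pts)             # kd-tree over planned surface
    dists, _ = tree.query(measured_pts)     # nearest-neighbor distances
    report = {"mean_mm": float(dists.mean()),
              "max_mm": float(dists.max())}
    report["pass"] = (report["mean_mm"] <= tol_mean
                      and report["max_mm"] <= tol_max)
    return report

# Synthetic surfaces: the measured one is offset by 0.3 mm per axis
planned = np.random.default_rng(0).uniform(0.0, 50.0, size=(2000, 3))
measured = planned + 0.3
print(compensator_qa(measured, planned))
```

Summarizing the full distance distribution rather than spot-checking points is what lets the tool flag the roughly 5% of pixels with >2 mm deviations that manual verification missed.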

  6. Accuracy of magnetic resonance enterography in the preoperative assessment of patients with Crohn's disease of the small bowel.

    PubMed

    Pous-Serrano, S; Frasson, M; Palasí Giménez, R; Sanchez-Jordá, G; Pamies-Guilabert, J; Llavador Ros, M; Nos Mateu, P; Garcia-Granero, E

    2017-05-01

    To assess the accuracy of magnetic resonance enterography in predicting the extension, location and characteristics of the small bowel segments affected by Crohn's disease. This is a prospective study including a consecutive series of 38 patients with Crohn's disease of the small bowel who underwent surgery at a specialized colorectal unit of a tertiary hospital. Preoperative magnetic resonance enterography was performed in all patients, following a homogeneous protocol, within the 3 months prior to surgery. A thorough exploration of the small bowel was performed during the surgical procedure; calibration spheres were used at the discretion of the surgeon. The accuracy of magnetic resonance enterography in detecting areas affected by Crohn's disease in the small bowel was assessed. The findings of magnetic resonance enterography were compared with surgical and pathological findings. Thirty-eight patients with 81 lesions were included in the study. During surgery, 12 lesions (14.8%) that were not described on magnetic resonance enterography were found. Seven of these were detected exclusively by the use of calibration spheres, passing unnoticed at surgical exploration. Magnetic resonance enterography had 90% accuracy in detecting the location of the stenosis (75.0% sensitivity, 95.7% specificity). Magnetic resonance enterography did not precisely diagnose the presence of an inflammatory phlegmon (accuracy 46.2%), but it was more accurate in detecting abscesses or fistulas (accuracy 89.9% and 98.6%, respectively). Magnetic resonance enterography is a useful tool in the preoperative assessment of patients with Crohn's disease. However, a thorough intra-operative exploration of the entire small bowel is still necessary. Colorectal Disease © 2017 The Association of Coloproctology of Great Britain and Ireland.

  7. Creation of a simple natural language processing tool to support an imaging utilization quality dashboard.

    PubMed

    Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo

    2017-05-01

    Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g. radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming, manual review of radiology reports or the use of complex and/or proprietary natural language processing software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source natural language processing tool to classify radiology reports with high accuracy and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http
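A rule-based report classifier of the kind described above, together with the sensitivity/specificity calculation used to validate it, can be sketched in a few lines. The phrase lists and reports below are invented for illustration; the actual tool's five configurable parameters and phrase sets are not reproduced here.

```python
import re

# Hypothetical phrase lists -- assumptions, not the paper's parameters
POSITIVE = [r"acute (deep venous |pulmonary )?thromb", r"\bpulmonary embol"]
NEGATION = [r"no evidence of", r"negative for"]

def classify_report(text):
    """A report is called VTE-positive if any positive phrase appears
    on a line with no negation phrase."""
    for line in text.lower().splitlines():
        if (any(re.search(p, line) for p in POSITIVE)
                and not any(re.search(n, line) for n in NEGATION)):
            return True
    return False

def sens_spec(predictions, truth):
    """Sensitivity and specificity against a reviewer gold standard."""
    tp = sum(p and t for p, t in zip(predictions, truth))
    tn = sum(not p and not t for p, t in zip(predictions, truth))
    fn = sum(not p and t for p, t in zip(predictions, truth))
    fp = sum(p and not t for p, t in zip(predictions, truth))
    return tp / (tp + fn), tn / (tn + fp)

reports = ["Acute thrombus in the right popliteal vein.",
           "No evidence of acute pulmonary embolism.",
           "Negative for acute deep venous thrombosis."]
truth = [True, False, False]
preds = [classify_report(r) for r in reports]
print(sens_spec(preds, truth))  # (1.0, 1.0)
```

Per-provider diagnostic yield then follows directly: the fraction of a provider's ordered studies classified positive.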

  8. Accuracy of Recalled Body Weight – A Study with 20-years of Follow-up

    PubMed Central

    Dahl, Anna K; Reynolds, Chandra A

    2013-01-01

    Objective Weight changes may be an important indicator of an ongoing pathological process. Retrospective self-report might be the only possibility to capture prior weight. The objective of the study was to evaluate the accuracy of retrospective recall of body weight in old age and factors that might predict accuracy. Design and Methods In 2007, 646 participants (mean age, 71.6 years) of the Swedish Adoption/Twin Study of Aging (SATSA) answered questions about their present weight and how much they weighed 20 years ago. Of these, 436 had self-reported their weight twenty years earlier and among these 134 had also had their weight assessed at this time point. Results Weight recalled retrospectively over twenty years underestimated the prior assessed weight by −1.89 ± 5.9 kg and underestimated prior self-reported weight by −0.55 ± 5.2 kg. Moreover, 82.4% of the sample were accurate within 10%, and 45.8% were accurate within 5% of their prior assessed weights; similarly, 84.2% and 58.0% were accurate within 10% and 5%, respectively, for prior self-reported weight. Higher current body mass index and a preference for reporting weights ending in zero or five were associated with an underestimation of prior weight, while greater weight change over the 20 years and low Mini-Mental State Examination (MMSE) scores (<25) led to an overestimation of prior weight. Conclusions Recalled weight comes close to the assessed population mean, but at the individual level there is large variation. The accuracy is affected by current BMI, changes in weight, end-digit preferences, and current cognitive ability. Recalled weight should be used with caution. PMID:23913738
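The accuracy statistics reported above (mean signed error and the share of respondents within ±5% and ±10% of the measured weight) are simple to compute. The sketch below uses made-up sample values, not the SATSA data.

```python
def recall_accuracy(recalled, actual, tolerances=(0.05, 0.10)):
    """Mean signed error (recalled minus actual, kg) and the fraction
    of respondents within each relative tolerance of the actual weight."""
    errors = [r - a for r, a in zip(recalled, actual)]
    out = {"mean_error_kg": sum(errors) / len(errors)}
    for tol in tolerances:
        within = sum(abs(r - a) <= tol * a
                     for r, a in zip(recalled, actual))
        out[f"within_{round(tol * 100)}pct"] = within / len(actual)
    return out

recalled = [70.0, 82.0, 64.0, 90.0]  # weights recalled 20 years later (kg)
actual = [72.5, 80.0, 70.0, 91.0]    # weights measured 20 years earlier (kg)
print(recall_accuracy(recalled, actual))
```

A negative mean error, as here, corresponds to the underestimation pattern the study reports at the population level.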

  9. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  10. Limits to the Evaluation of the Accuracy of Continuous Glucose Monitoring Systems by Clinical Trials.

    PubMed

    Schrangl, Patrick; Reiterer, Florian; Heinemann, Lutz; Freckmann, Guido; Del Re, Luigi

    2018-05-18

    Systems for continuous glucose monitoring (CGM) are evolving quickly, and the data obtained are expected to become the basis for clinical decisions for many patients with diabetes in the near future. However, this requires that their analytical accuracy is sufficient. This accuracy is usually determined in clinical studies by comparing the data obtained by the given CGM system with blood glucose (BG) point measurements made with a so-called reference method. The latter is assumed to indicate the correct value of the target quantity. Unfortunately, due to the nature of the clinical trials and the approach used, such a comparison is subject to several effects which may lead to misleading results. While some reasons for the differences between the values obtained with CGM and BG point measurements are relatively well known (e.g., measurement in different body compartments), others related to the clinical study protocols are less visible, but also quite important. In this review, we present a general picture of the topic as well as tools that allow one to correct, or at least estimate, the uncertainty of measures of CGM system performance.

  11. The diagnostic test accuracy of magnetic resonance imaging, magnetic resonance arthrography and computer tomography in the detection of chondral lesions of the hip.

    PubMed

    Smith, Toby O; Simpson, Michael; Ejindu, Vivian; Hing, Caroline B

    2013-04-01

    The purpose of this study was to assess the diagnostic test accuracy of magnetic resonance imaging (MRI), magnetic resonance arthrography (MRA) and multidetector arrays in CT arthrography (MDCT) for assessing chondral lesions in the hip joint. A review of the published and unpublished literature databases was performed to identify all studies reporting the diagnostic test accuracy (sensitivity/specificity) of MRI, MRA or MDCT for the assessment of adults with chondral (cartilage) lesions of the hip with surgical comparison (arthroscopic or open) as the reference test. All included studies were reviewed using the quality assessment of diagnostic accuracy studies appraisal tool. Pooled sensitivity, specificity, likelihood ratios and diagnostic odds ratios were calculated with 95 % confidence intervals using a random-effects meta-analysis for MRI, MRA and MDCT imaging. Eighteen studies satisfied the eligibility criteria. These included 648 hips from 637 patients. MRI indicated a pooled sensitivity of 0.59 (95 % CI: 0.49-0.70) and specificity of 0.94 (95 % CI: 0.90-0.97), and MRA sensitivity and specificity values were 0.62 (95 % CI: 0.57-0.66) and 0.86 (95 % CI: 0.83-0.89), respectively. The diagnostic test accuracy for the detection of hip joint cartilage lesions is currently superior for MRI compared with MRA. There were insufficient data to perform meta-analysis for MDCT or CTA protocols. Based on the current limited diagnostic test accuracy of the use of magnetic resonance or CT, arthroscopy remains the most accurate method of assessing chondral lesions in the hip joint.

  12. Accuracy of digital impressions of multiple dental implants: an in vitro study.

    PubMed

    Vandeweghe, Stefan; Vervack, Valentin; Dierens, Melissa; De Bruyn, Hugo

    2017-06-01

    Studies have demonstrated that the accuracy of intra-oral scanners is comparable to that of conventional impressions for most indications. However, little is known about their applicability for taking impressions of multiple implants. The aim of this study was to evaluate the accuracy of four intra-oral scanners when used for implant impressions in the edentulous jaw. An acrylic mandibular cast containing six external connection implants (regions 36, 34, 32, 42, 44 and 46) with PEEK scanbodies was scanned using four intra-oral scanners: the Lava C.O.S., the 3M True Definition, the Cerec Omnicam and the 3Shape Trios. Each model was scanned 10 times with every intra-oral scanner. As a reference, a highly accurate laboratory scanner (104i, Imetric, Courgenay, Switzerland) was used. The scans were imported into metrology software (Geomagic Qualify 12) for analysis. Accuracy was measured in terms of trueness (comparing test and reference) and precision (determining the deviation between different test scans). The Mann-Whitney U-test and Wilcoxon signed rank test were used to detect statistically significant differences in trueness and precision, respectively. The mean trueness was 0.112 mm for Lava COS, 0.035 mm for 3M TrueDef, 0.028 mm for Trios and 0.061 mm for Cerec Omnicam. There was no statistically significant difference between 3M TrueDef and Trios (P = 0.262). Cerec Omnicam was less accurate than 3M TrueDef (P = 0.013) and Trios (P = 0.005), but more accurate compared to Lava COS (P = 0.007). Lava COS was also less accurate compared to 3M TrueDef (P = 0.005) and Trios (P = 0.005). The mean precision was 0.066 mm for Lava COS, 0.030 mm for 3M TrueDef, 0.033 mm for Trios and 0.059 mm for Cerec Omnicam. There was no statistically significant difference between 3M TrueDef and Trios (P = 0.119). Cerec Omnicam was less accurate compared to 3M TrueDef (P < 0.001) and Trios (P < 0.001), but no difference was found with Lava COS (P = 0.169). Lava COS was also
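    Trueness and precision as defined in this abstract are distinct comparisons: trueness measures how far each test scan deviates from the reference, precision how far repeated test scans deviate from each other. A simplified sketch, with scans reduced to lists of corresponding inter-implant distances (the values are hypothetical, not the study's data):

```python
from itertools import combinations
from statistics import mean

def trueness(scans, reference):
    """Trueness: mean absolute deviation of each test scan from the reference."""
    return mean(mean(abs(s - r) for s, r in zip(scan, reference)) for scan in scans)

def precision(scans):
    """Precision: mean absolute deviation between every pair of test scans."""
    return mean(mean(abs(a - b) for a, b in zip(s1, s2))
                for s1, s2 in combinations(scans, 2))

# Hypothetical inter-implant distances (mm): reference scanner vs. three repeat scans
ref = [11.00, 22.00, 33.00]
scans = [[11.02, 22.05, 33.01],
         [10.98, 22.03, 33.04],
         [11.01, 21.97, 33.02]]
print(round(trueness(scans, ref), 3), round(precision(scans), 3))
```

    In the study itself the deviations come from 3D surface comparisons in metrology software rather than scalar distances, but the trueness/precision distinction is the same.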

  13. Textbook-Bundled Metacognitive Tools: A Study of LearnSmart's Efficacy in General Chemistry

    ERIC Educational Resources Information Center

    Thadani, Vandana; Bouvier-Brown, Nicole C.

    2016-01-01

    College textbook publishers increasingly bundle sophisticated technology-based study tools with their texts. These tools appear promising, but empirical work on their efficacy is needed. We examined whether LearnSmart, a study tool bundled with McGraw-Hill's textbook "Chemistry" (Chang & Goldsby, 2013), improved learning in an…

  14. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade, ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools; as the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 +/- 0.16 mm throughout the imaging volume of 55 degrees x 27 degrees x 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.
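    The reported 0.36 +/- 0.16 mm figure is a mean and standard deviation of 3D position error. A small sketch of that summary statistic (the coordinates below are hypothetical):

```python
from math import dist
from statistics import mean, stdev

# Hypothetical ground-truth vs. reconstructed sensor positions (mm)
truth   = [(0.0, 0.0, 50.0), (10.0, 5.0, 80.0), (-20.0, 8.0, 120.0)]
tracked = [(0.2, 0.1, 50.2), (10.3, 5.1, 79.8), (-20.1, 8.2, 120.4)]

# Euclidean error per position, summarized as mean +/- standard deviation
errors = [dist(p, q) for p, q in zip(tracked, truth)]
print(f"{mean(errors):.2f} +/- {stdev(errors):.2f} mm")
```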

  15. Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-07-01

    Guided wave tomography is a promising tool for accurately determining the remaining wall thickness of corrosion damage, which is among the major concerns for many industries. The Full Waveform Inversion (FWI) algorithm is an attractive guided wave tomography method, which uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures using simulations as well as experiments. It was shown that this algorithm can obtain a resolution of around 0.7 wavelengths for defects with smooth depth variations from acoustic modeling data, and about 1.5-2 wavelengths from elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, compared with conventional algorithms based on the Born approximation.

  16. Response Latency as a Predictor of the Accuracy of Children's Reports

    ERIC Educational Resources Information Center

    Ackerman, Rakefet; Koriat, Asher

    2011-01-01

    Researchers have explored various diagnostic cues to the accuracy of information provided by child eyewitnesses. Previous studies indicated that children's confidence in their reports predicts the relative accuracy of these reports, and that the confidence-accuracy relationship generally improves as children grow older. In this study, we examined…

  17. Study on effect of tool electrodes on surface finish during electrical discharge machining of Nitinol

    NASA Astrophysics Data System (ADS)

    Sahu, Anshuman Kumar; Chatterjee, Suman; Nayak, Praveen Kumar; Sankar Mahapatra, Siba

    2018-03-01

    Electrical discharge machining (EDM) is a non-traditional machining process which is widely used for difficult-to-machine materials. The EDM process can produce complex, intricately shaped components made of difficult-to-machine materials, and is largely applied in the aerospace, biomedical, and die and mold making industries. To meet the required applications, EDMed components need to possess high accuracy and excellent surface finish. In this work, the EDM process is performed using Nitinol as the workpiece material and AlSiMg prepared by selective laser sintering (SLS) as the tool electrode, along with conventional copper and graphite electrodes. SLS is a rapid prototyping (RP) method to produce complex metallic parts by an additive manufacturing (AM) process. Experiments have been carried out varying different process parameters such as open circuit voltage (V), discharge current (Ip), duty cycle (τ), pulse-on time (Ton) and tool material. Surface roughness parameters, namely average roughness (Ra), maximum height of the profile (Rt) and average height of the profile (Rz), are measured using a surface roughness measuring instrument (Talysurf). To reduce the number of experiments, a design of experiments (DOE) approach, Taguchi's L27 orthogonal array, has been chosen. The surface properties of the EDMed specimens are optimized by the desirability function approach and the best parametric setting is reported for the EDM process. Type of tool is the most significant parameter, followed by the interaction of tool type and duty cycle, duty cycle, discharge current and voltage. Better surface finish of EDMed specimens can be obtained with low values of voltage (V), discharge current (Ip), duty cycle (τ) and pulse-on time (Ton), along with the use of the AlSiMg RP electrode.

  18. The prevalence of depression and the accuracy of depression screening tools in migraine patients.

    PubMed

    Amoozegar, Farnaz; Patten, Scott B; Becker, Werner J; Bulloch, Andrew G M; Fiest, Kirsten M; Davenport, W Jeptha; Carroll, Christopher R; Jette, Nathalie

    2017-09-01

    Migraine and depression are common comorbid conditions. The purpose of this study was to assess how well the Patient Health Questionnaire (PHQ-9) and the Hospital Anxiety and Depression Scale (HADS) perform as depression screening tools in patients with migraine. Three hundred consecutive migraine patients were recruited from a large headache center. The PHQ-9 and HADS were self-administered and validated against the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders-IV, a gold standard for the diagnosis of depression. Sensitivity, specificity, positive predictive value, negative predictive value and receiver operating characteristic curves were calculated for the PHQ-9 and HADS. At the traditional cut-point of 10, the PHQ-9 demonstrated 82.0% sensitivity and 79.9% specificity. At a cut-point of 8, the HADS demonstrated 86.5% sensitivity and specificity. The PHQ-9 algorithm performed poorly (53.8% sensitivity, 94.9% specificity). The point prevalence of depression in this study was 25.0% (95% CI 19.0-31.0), and 17.0% of patients had untreated depression. In this study, the PHQ-9 and HADS performed well in migraine patients attending a headache clinic, but optimal cut-points to screen for depression vary depending on the goals of the assessment. Also, migraine patients attending a headache clinic have a high prevalence of depression and many are inadequately treated. Future studies are needed to confirm these findings and to evaluate the impact of depression screening. Copyright © 2017 Elsevier Inc. All rights reserved.
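    Validating a screening questionnaire against a gold-standard interview, as described above, reduces at each candidate cut-point to a 2×2 table. A sketch with hypothetical scores and diagnoses (not the study's data):

```python
def screening_metrics(scores, diagnosed, cutoff):
    """Sensitivity, specificity, PPV and NPV of a questionnaire score at a
    cut-point, against a gold-standard diagnosis (True = depressed)."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, diagnosed))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, diagnosed))
    fn = sum(s < cutoff and d for s, d in zip(scores, diagnosed))
    tn = sum(s < cutoff and not d for s, d in zip(scores, diagnosed))
    return {"sens": tp / (tp + fn), "spec": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Hypothetical questionnaire scores and interview diagnoses for eight patients
scores    = [3, 12, 9, 15, 5, 11, 7, 14]
diagnosed = [False, True, False, True, False, True, False, False]
m = screening_metrics(scores, diagnosed, cutoff=10)
print(m)
```

    Sweeping the cutoff and plotting sensitivity against 1 - specificity yields the receiver operating characteristic curve used to choose an optimal cut-point.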

  19. Software project management tools in global software development: a systematic mapping study.

    PubMed

    Chadli, Saad Yasser; Idri, Ali; Ros, Joaquín Nicolás; Fernández-Alemán, José Luis; de Gea, Juan M Carrillo; Toval, Ambrosio

    2016-01-01

    Global software development (GSD), which is a growing trend in the software industry, is characterized by a highly distributed environment. Performing software project management (SPM) in such conditions implies the need to overcome new limitations resulting from cultural, temporal and geographic separation. The aim of this research is to discover and classify the various tools mentioned in the literature that provide GSD project managers with support, and to identify in what way they support group interaction. A systematic mapping study was performed by means of automatic searches in five sources. We then synthesized the extracted data and present the results of this study. A total of 102 tools were identified as being used in SPM activities in GSD. We classified these tools according to the software life cycle process on which they focus and how they support the 3C collaboration model (communication, coordination and cooperation). The majority of the tools found are standalone tools (77%). A small number of platforms (8%) also offer a set of interacting tools that cover the software development lifecycle. The results also indicate that SPM areas in GSD are not adequately supported by corresponding tools and deserve more attention from tool builders.

  20. Genome-wide association study and accuracy of genomic prediction for teat number in Duroc pigs using genotyping-by-sequencing.

    PubMed

    Tan, Cheng; Wu, Zhenfang; Ren, Jiangli; Huang, Zhuolin; Liu, Dewu; He, Xiaoyan; Prakapenka, Dzianis; Zhang, Ran; Li, Ning; Da, Yang; Hu, Xiaoxiang

    2017-03-29

    The number of teats in pigs is related to a sow's ability to rear piglets to weaning age. Several studies have identified genes and genomic regions that affect teat number in swine but few common results were reported. The objective of this study was to identify genetic factors that affect teat number in pigs, evaluate the accuracy of genomic prediction, and evaluate the contribution of significant genes and genomic regions to genomic broad-sense heritability and prediction accuracy using 41,108 autosomal single nucleotide polymorphisms (SNPs) from genotyping-by-sequencing on 2936 Duroc boars. Narrow-sense heritability and dominance heritability of teat number estimated by genomic restricted maximum likelihood were 0.365 ± 0.030 and 0.035 ± 0.019, respectively. The accuracy of genomic predictions, calculated as the average correlation between the genomic best linear unbiased prediction and phenotype in a tenfold validation study, was 0.437 ± 0.064 for the model with additive and dominance effects and 0.435 ± 0.064 for the model with additive effects only. Genome-wide association studies (GWAS) using three methods of analysis identified 85 significant SNP effects for teat number on chromosomes 1, 6, 7, 10, 11, 12 and 14. The region between 102.9 and 106.0 Mb on chromosome 7, which was reported in several studies, had the most significant SNP effects in or near the PTGR2, FAM161B, LIN52, VRTN, FCF1, AREL1 and LRRC74A genes. This region accounted for 10.0% of the genomic additive heritability and 8.0% of the accuracy of prediction. The second most significant chromosome region not reported by previous GWAS was the region between 77.7 and 79.7 Mb on chromosome 11, where SNPs in the FGF14 gene had the most significant effect and accounted for 5.1% of the genomic additive heritability and 5.2% of the accuracy of prediction. The 85 significant SNPs accounted for 28.5 to 28.8% of the genomic additive heritability and 35.8 to 36.8% of the accuracy of
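    The prediction accuracy reported in this abstract is the correlation between genomic predictions and observed phenotypes in held-out validation folds. A minimal sketch of that correlation for one hypothetical fold (values illustrative only):

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation between predicted values and observed phenotypes."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Hypothetical held-out fold: predicted vs. observed teat numbers for five boars
pred = [13.2, 14.1, 13.8, 15.0, 14.4]
obs  = [13, 14, 14, 16, 14]
print(round(pearson(pred, obs), 3))
```

    Averaging this correlation over the ten folds gives the reported accuracy of genomic prediction.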

  1. GAPIT: genome association and prediction integrated tool.

    PubMed

    Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu

    2012-09-15

    Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.

  2. Continuous electroencephalography predicts delayed cerebral ischemia after subarachnoid hemorrhage: A prospective study of diagnostic accuracy.

    PubMed

    Rosenthal, Eric S; Biswal, Siddharth; Zafar, Sahar F; O'Connor, Kathryn L; Bechek, Sophia; Shenoy, Apeksha V; Boyle, Emily J; Shafi, Mouhsin M; Gilmore, Emily J; Foreman, Brandon P; Gaspard, Nicolas; Leslie-Mazwi, Thabele M; Rosand, Jonathan; Hoch, Daniel B; Ayata, Cenk; Cash, Sydney S; Cole, Andrew J; Patel, Aman B; Westover, M Brandon

    2018-04-16

    Delayed cerebral ischemia (DCI) is a common, disabling complication of subarachnoid hemorrhage (SAH). Preventing DCI is a key focus of neurocritical care, but interventions carry risk and cannot be applied indiscriminately. Although retrospective studies have identified continuous electroencephalographic (cEEG) measures associated with DCI, no study has characterized the accuracy of cEEG with sufficient rigor to justify using it to triage patients to interventions or clinical trials. We therefore prospectively assessed the accuracy of cEEG for predicting DCI, following the Standards for Reporting Diagnostic Accuracy Studies. We prospectively performed cEEG in nontraumatic, high-grade SAH patients at a single institution. The index test consisted of clinical neurophysiologists prospectively reporting prespecified EEG alarms: (1) decreasing relative alpha variability, (2) decreasing alpha-delta ratio, (3) worsening focal slowing, or (4) late appearing epileptiform abnormalities. The diagnostic reference standard was DCI determined by blinded, adjudicated review. Primary outcome measures were sensitivity and specificity of cEEG for subsequent DCI, determined by multistate survival analysis, adjusted for baseline risk. One hundred three of 227 consecutive patients were eligible and underwent cEEG monitoring (7.7-day mean duration). EEG alarms occurred in 96.2% of patients with and 19.6% without subsequent DCI (1.9-day median latency, interquartile range = 0.9-4.1). Among alarm subtypes, late onset epileptiform abnormalities had the highest predictive value. Prespecified EEG findings predicted DCI among patients with low (91% sensitivity, 83% specificity) and high (95% sensitivity, 77% specificity) baseline risk. cEEG accurately predicts DCI following SAH and may help target therapies to patients at highest risk of secondary brain injury. Ann Neurol 2018. © 2018 American Neurological Association.

  3. Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.

    2018-05-01

    Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of the normal heights between the reference point cloud and the tested planes, together with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
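    The inspection described rests on statistics of normal (perpendicular) distances between ALS points and the modeled roof planes. A sketch of that distance computation for one plane, with hypothetical points (the paper's own statistical tests are more elaborate):

```python
from math import sqrt
from statistics import mean, stdev

def plane_distances(points, plane):
    """Signed normal distances from 3D points to a plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    norm = sqrt(a * a + b * b + c * c)
    return [(a * x + b * y + c * z + d) / norm for x, y, z in points]

# Hypothetical ALS points checked against a horizontal roof plane z = 10
pts = [(1.0, 2.0, 10.05), (3.0, 1.0, 9.92), (2.0, 4.0, 10.10)]
dists = plane_distances(pts, (0.0, 0.0, 1.0, -10.0))
print(round(mean(dists), 3), round(stdev(dists), 3))
```

    A large mean or spread of these distances for a roof plane would flag that plane as failing the accuracy requirement.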

  4. Clinical use of the Surgeon General's "My Family Health Portrait" (MFHP) tool: opinions of future health care providers.

    PubMed

    Owens, Kailey M; Marvin, Monica L; Gelehrter, Thomas D; Ruffin, Mack T; Uhlmann, Wendy R

    2011-10-01

    This study examined medical students' and house officers' opinions about the Surgeon General's "My Family Health Portrait" (MFHP) tool. Participants used the tool and were surveyed about tool mechanics, potential clinical uses, and barriers. None of the 97 participants had previously used this tool. The average time to enter a family history was 15 min (range 3 to 45 min). Participants agreed or strongly agreed that the MFHP tool is understandable (98%), easy to use (93%), and suitable for general public use (84%). Sixty-seven percent would encourage their patients to use the tool; 39% would ensure staff assistance. Participants would use the tool to identify patients at increased risk for disease (86%), record family history in the medical chart (84%), recommend preventive health behaviors (80%), and refer to genetics services (72%). Concerns about use of the tool included patient access, information accuracy, technical challenges, and the need for physician education on interpreting family history information.

  5. Enhancement of accuracy in shape sensing of surgical needles using optical frequency domain reflectometry in optical fibers.

    PubMed

    Parent, Francois; Loranger, Sebastien; Mandal, Koushik Kanti; Iezzi, Victor Lambin; Lapointe, Jerome; Boisvert, Jean-Sébastien; Baiad, Mohamed Diaa; Kadoury, Samuel; Kashyap, Raman

    2017-04-01

    We demonstrate a novel approach to enhance the precision of surgical needle shape tracking based on distributed strain sensing using optical frequency domain reflectometry (OFDR). The precision enhancement is provided by using optical fibers with high scattering properties. Shape tracking of surgical tools using the strain sensing properties of optical fibers has seen increased attention in recent years. Most of the investigations made in this field use fiber Bragg gratings (FBG), which can be used as discrete or quasi-distributed strain sensors. By using a truly distributed sensing approach (OFDR), preliminary results show that the attainable accuracy is comparable to accuracies reported in the literature using FBG sensors for tracking applications (~1 mm). We propose a technique that enhanced our accuracy by 47% using UV-exposed fibers, which have higher light scattering compared to unexposed standard single mode fibers. Improving the experimental setup will enhance the accuracy provided by shape tracking using OFDR and will contribute significantly to clinical applications.

  6. Assessment of accuracy and recognition of three-dimensional computerized forensic craniofacial reconstruction.

    PubMed

    Miranda, Geraldo Elias; Wilkinson, Caroline; Roughley, Mark; Beaini, Thiago Leite; Melani, Rodolfo Francisco Haltenhoff

    2018-01-01

    Facial reconstruction is a technique that aims to reproduce individual facial characteristics based on interpretation of the skull, with the objective of recognition leading to identification. The aim of this paper was to evaluate the accuracy and recognition level of three-dimensional (3D) computerized forensic craniofacial reconstruction (CCFR) performed in a blind test on open-source software using computed tomography (CT) data from live subjects. Four CCFRs were produced by one of the researchers, who was provided with information concerning the age, sex, and ethnic group of each subject. The CCFRs were produced using Blender® with 3D models obtained from the CT data and templates from the MakeHuman® program. The evaluation of accuracy was carried out in CloudCompare, by geometric comparison of the CCFR to the subject 3D face model (obtained from the CT data). Recognition was evaluated using the Picasa® recognition tool with a standardized frontal photograph, images of the subject CT face model and the CCFR. Soft-tissue depths and the nose, ears and mouth were based on published data, observing Brazilian facial parameters. The results were presented from all the points that form the CCFR model, with an average for each comparison between 63% and 74% within a distance -2.5 ≤ x ≤ 2.5 mm from the skin surface. The average distances were 1.66 to 0.33 mm, and greater distances were observed around the eyes, cheeks, mental and zygomatic regions. Two of the four CCFRs were correctly matched by the Picasa® tool. Free software programs are capable of producing 3D CCFRs with plausible levels of accuracy and recognition and therefore indicate their value for use in forensic applications.

  8. Improving coding accuracy in an academic practice.

    PubMed

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest, in a military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small-group case review, and large-group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the two intervention periods, both in aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small-group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
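    The paired t test used here compares each provider's accuracy before and after the intervention on the same subjects. A minimal sketch of the t statistic (values hypothetical; the p-value lookup against the t distribution is omitted):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for before/after measurements
    on the same providers."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Hypothetical per-provider coding accuracy (%) before and after the curriculum
pre  = [20, 31, 25, 28, 22, 30]
post = [21, 29, 26, 30, 21, 32]
t, df = paired_t(pre, post)
print(round(t, 2), df)
```

    With 25 participants the study's degrees of freedom would be 24, matching the reported t(24)=-0.127.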

  9. Overview of the Development for a Suite of Low-Thrust Trajectory Analysis Tools

    NASA Technical Reports Server (NTRS)

    Kos, Larry D.; Polsgrove, Tara; Hopkins, Randall; Thomas, Dan; Sims, Jon A.

    2006-01-01

    A NASA intercenter team has developed a suite of low-thrust trajectory analysis tools to make a significant improvement in three major facets of low-thrust trajectory and mission analysis. These are: 1) ease of use, 2) ability to more robustly converge to solutions, and 3) higher-fidelity modeling and accuracy of results. Due mostly to the short duration of the development, the team concluded that a suite of tools was preferable to one integrated tool. This tool suite, the tools' characteristics, and their applicability are described. Trajectory analysts can read this paper and determine which tool is most appropriate for their problem.

  10. A literature review of anthropometric studies of school students for ergonomics purposes: Are accuracy, precision and reliability being considered?

    PubMed

    Bravo, G; Bragança, S; Arezes, P M; Molenbroek, J F M; Castellucci, H I

    2018-05-22

    Despite offering many benefits, the direct manual anthropometric measurement method can be problematic due to its vulnerability to measurement errors. The purpose of this literature review was to determine whether or not currently published anthropometric studies of school children, related to ergonomics, mentioned or evaluated the variables of precision, reliability or accuracy in the direct manual measurement method. Two bibliographic databases, and the bibliographic references of all the selected papers, were used to find relevant published papers in the fields considered in this study. Forty-six (46) studies met the criteria previously defined for this literature review. However, only ten (10) studies mentioned at least one of the analyzed variables, and none evaluated all of them. Only reliability was assessed, by three papers. Moreover, with regard to the factors that affect precision, reliability and accuracy, the reviewed papers presented large differences. This was particularly clear in the instruments used for the measurements, which were not consistent across the studies. Additionally, it was also clear that there was a lack of information regarding the evaluators' training and the procedures for anthropometric data collection, which are assumed to be the most important issues affecting precision, reliability and accuracy. Based on the review of the literature, it was possible to conclude that the considered anthropometric studies had not focused on the analysis of precision, reliability and accuracy of manual measurement methods. Hence, with the aim of avoiding measurement errors and misleading data, anthropometric studies should put more effort and care into testing measurement error and defining the procedures used to collect anthropometric data.
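    One common way anthropometric studies quantify measurement error, when they do, is the technical error of measurement (TEM) between repeated trials: TEM = sqrt(sum of squared inter-trial differences / 2n). A sketch with hypothetical values:

```python
from math import sqrt

def tem(trial1, trial2):
    """Technical error of measurement for two repeated measurements of n subjects:
    TEM = sqrt(sum((x1 - x2)**2) / (2 * n))."""
    n = len(trial1)
    return sqrt(sum((a - b) ** 2 for a, b in zip(trial1, trial2)) / (2 * n))

# Hypothetical repeated stature measurements (cm) of five students
t1 = [142.0, 150.5, 138.2, 160.1, 145.3]
t2 = [142.4, 150.1, 138.5, 160.0, 145.9]
e = tem(t1, t2)
print(round(e, 3), "cm")                              # absolute TEM
print(round(100 * e / (sum(t1 + t2) / 10), 2), "%")   # relative TEM (%TEM)
```

    Relative TEM (the TEM as a percentage of the grand mean) allows intra- and inter-observer error to be compared across dimensions of different sizes.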

  11. Critical brain regions for tool-related and imitative actions: a componential analysis

    PubMed Central

    Shapiro, Allison D.; Coslett, H. Branch

    2014-01-01

    Numerous functional neuroimaging studies suggest that widespread bilateral parietal, temporal, and frontal regions are involved in tool-related and pantomimed gesture performance, but the role of these regions in specific aspects of gestural tasks remains unclear. In the largest prospective study of apraxia-related lesions to date, we performed voxel-based lesion–symptom mapping with data from 71 left hemisphere stroke participants to assess the critical neural substrates of three types of actions: gestures produced in response to viewed tools, imitation of tool-specific gestures demonstrated by the examiner, and imitation of meaningless gestures. Thus, two of the three gesture types were tool-related, and two of the three were imitative, enabling pairwise comparisons designed to highlight commonalities and differences. Gestures were scored separately for postural (hand/arm positioning) and kinematic (amplitude/timing) accuracy. Lesioned voxels in the left posterior temporal gyrus were significantly associated with lower scores on the posture component for both of the tool-related gesture tasks. Poor performance on the kinematic component of all three gesture tasks was significantly associated with lesions in left inferior parietal and frontal regions. These data enable us to propose a componential neuroanatomic model of action that delineates the specific components required for different gestural action tasks. Thus, visual posture information and kinematic capacities are differentially critical to the three types of actions studied here: the kinematic aspect is particularly critical for imitation of meaningless movement, the capacity for tool-action posture representations is particularly necessary for pantomimed gestures to the sight of tools, and both capacities inform imitation of tool-related movements. These distinctions enable us to advance traditional accounts of apraxia. PMID:24776969

  12. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting the machine accuracy. The error values can also be used to update the error compensation table at the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to that of well-established techniques. In this paper the method is compared with the laser interferometric system with regard to reliability, cost and time efficiency.
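    The comparison described above reduces, per axis, to fitting the probed machine-tool positions against the CMM-calibrated hole positions to extract offset and scale errors. A minimal sketch of that fit, with invented positions and error values (not data from the paper):

    ```python
    # Hypothetical sketch: estimating one linear axis error of a machine tool by
    # comparing probed hole positions against CMM-calibrated values.

    def axis_error_fit(cmm_pos, mt_pos):
        """Least-squares fit mt = offset + slope * cmm along one axis.

        Returns (offset, scale_error), where scale_error is slope - 1; the
        per-position residuals could feed a CNC compensation table.
        """
        n = len(cmm_pos)
        mean_c = sum(cmm_pos) / n
        mean_m = sum(mt_pos) / n
        cov = sum((c - mean_c) * (m - mean_m) for c, m in zip(cmm_pos, mt_pos))
        var = sum((c - mean_c) ** 2 for c in cmm_pos)
        slope = cov / var
        offset = mean_m - slope * mean_c
        return offset, slope - 1.0  # scale error relative to ideal slope of 1

    # Calibrated hole positions (mm) and positions probed on the machine tool,
    # simulated with a 2 um offset and a +10 ppm scale error.
    cmm = [0.0, 50.0, 100.0, 150.0, 200.0]
    mt = [0.002 + x * 1.00001 for x in cmm]

    offset, scale = axis_error_fit(cmm, mt)
    print(round(offset, 6), round(scale, 8))
    ```

    The same fit, repeated per axis and per error component, recovers the geometric errors that the laser interferometer would otherwise measure directly.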

  13. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, 
Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
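    The correction scheme validated above is multiplicative: each raw sedimentation coefficient is rescaled by per-instrument factors from the external time, temperature, and radial-magnification references. A toy sketch with invented factor values (not the study's data) showing how such corrections shrink the instrument-to-instrument spread:

    ```python
    # Illustrative sketch: applying multiplicative calibration-correction factors
    # to raw sedimentation coefficients, as in the multi-laboratory study.
    # Raw s-values and factors below are invented for demonstration.

    import statistics

    def correct_s(s_raw, f_time, f_temp, f_radial):
        """Combine independent multiplicative calibration corrections."""
        return s_raw * f_time * f_temp * f_radial

    # (raw s-value in S, time, temperature, radial factors) for three instruments
    raw = [
        (4.05, 1.03, 1.02, 1.01),
        (4.30, 1.00, 1.00, 1.00),
        (4.60, 0.97, 0.98, 0.99),
    ]

    uncorrected = [r[0] for r in raw]
    corrected = [correct_s(*r) for r in raw]

    # After correction the values cluster near a common mean and the
    # standard deviation drops, mirroring the study's 6-fold reduction.
    print(statistics.stdev(corrected) < statistics.stdev(uncorrected))
    ```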

  14. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. 
W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164

  15. Assessment of the accuracy of pharmacy students' compounded solutions using vapor pressure osmometry.

    PubMed

    Kolling, William M; McPherson, Timothy B

    2013-04-12

    OBJECTIVE. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students' compounding skills. DESIGN. Students calculated the theoretical osmolality (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. ASSESSMENT. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. CONCLUSIONS. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians.
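    The pre-laboratory calculation described above amounts to an ideal-osmolality estimate compared against the osmometer reading. A hedged sketch, with illustrative values not taken from the article:

    ```python
    # Sketch of the theoretical-vs-measured osmolality comparison. Ignores
    # osmotic coefficients (ideal dissociation); values are illustrative.

    def theoretical_osmolality(molality_mol_per_kg, particles_per_formula):
        """Ideal osmolality in mmol/kg for a fully dissociating solute."""
        return molality_mol_per_kg * particles_per_formula * 1000.0

    def percent_error(measured, theoretical):
        return abs(measured - theoretical) / theoretical * 100.0

    # 0.9% w/v NaCl is roughly 0.154 mol/kg; NaCl yields 2 particles.
    theory = theoretical_osmolality(0.154, 2)   # ideal value, mmol/kg
    measured = 286.0                            # hypothetical osmometer reading

    print(round(theory), round(percent_error(measured, theory), 1))
    ```

    A large percent error would flag a weighing or dilution mistake, which is the formative feedback the instrument provides.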

  16. SU-E-J-34: Clinical Evaluation of Targeting Accuracy and Tractography Delineation of Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juh, R; Suh, T; Kim, Y

    2014-06-01

    Purpose: Focal radiosurgery is a common treatment modality for trigeminal neuralgia (TN), a neuropathic facial pain condition. Assessment of treatment effectiveness is primarily clinical, given the paucity of investigational tools to assess trigeminal nerve changes. The efficiency of radiosurgery is related to its highly precise targeting. We assessed clinically the targeting accuracy of radiosurgery with Gamma Knife. We hypothesized that trigeminal tractography provides more information than 2D-MR imaging, allowing detection of unique, focal changes in the target area after radiosurgery. Methods: Sixteen TN patients (2 females, 4 male, average age 65.3 years) treated with Gamma Knife radiosurgery, 40 Gy/50% isodose line, underwent 1.5 Tesla MR trigeminal nerve imaging. Target accuracy was assessed from the deviation of the coordinates of the target compared with the center of enhancement on post-treatment MRI. The radiation dose delivered at the borders of contrast enhancement was evaluated. Results: The median deviation of the coordinates between the intended target and the center of contrast enhancement was within 1 mm. The radiation doses fitting within the borders of the contrast enhancement at the target ranged from 37.5 to 40 Gy. Trigeminal tractography accurately detected the radiosurgical target. Radiosurgery resulted in a 47% drop in FA values at the target with no significant change in FA outside the target, suggesting that radiosurgery primarily affects myelin. Tractography was more sensitive, since FA changes were detected regardless of trigeminal nerve enhancement. Conclusion: The median deviation found in clinical assessment of Gamma Knife treatment for TN is low and compatible with its high rate of efficiency. DTI parameters accurately detect the effects of focal radiosurgery on the trigeminal nerve, serving as an in vivo imaging tool to study TN. This study is a proof of principle for further assessment of DTI parameters to understand the pathophysiology of TN and treatment.
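    The targeting-accuracy metric above is simply the 3D Euclidean distance between the planned stereotactic target and the center of post-treatment contrast enhancement. A minimal sketch with invented coordinates:

    ```python
    # Sketch of the coordinate-deviation check; frame coordinates are
    # illustrative, not patient data.

    import math

    def target_deviation(planned, observed):
        """Euclidean distance (mm) between planned and observed target centers."""
        return math.dist(planned, observed)

    planned = (100.0, 95.0, 110.0)    # planned stereotactic target, mm
    observed = (100.4, 94.7, 110.3)   # center of enhancement on post MRI

    dev = target_deviation(planned, observed)
    print(dev < 1.0)  # within the sub-millimeter median deviation reported
    ```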

  17. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
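    The two measures contrasted above can be computed from per-item (judgment, outcome) data: Goodman-Kruskal gamma from concordant and discordant pairs, and a discrimination index as the hit rate on known items minus the false-alarm rate on unknown items. A sketch with invented data (the exact index definitions used by the study may differ):

    ```python
    # Illustrative computation of gamma and a discrimination index for
    # knowledge-monitoring accuracy. Data are invented.

    from itertools import combinations

    def gamma(judgments, outcomes):
        """Goodman-Kruskal gamma: (C - D) / (C + D) over item pairs."""
        c = d = 0
        for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
            s = (j1 - j2) * (o1 - o2)
            if s > 0:
                c += 1
            elif s < 0:
                d += 1
        return (c - d) / (c + d)

    def discrimination(judged_known, actually_known):
        """Hit rate on known items minus false-alarm rate on unknown items."""
        hits = sum(1 for j, k in zip(judged_known, actually_known) if j and k)
        fas = sum(1 for j, k in zip(judged_known, actually_known) if j and not k)
        n_known = sum(actually_known)
        n_unknown = len(actually_known) - n_known
        return hits / n_known - fas / n_unknown

    judgments = [0.9, 0.8, 0.6, 0.4, 0.2]   # confidence item is known
    outcomes = [1, 1, 0, 1, 0]              # actual test performance

    print(round(gamma(judgments, outcomes), 2))
    ```

    Because gamma ignores tied pairs, two students with different hit and false-alarm profiles can share a gamma value, which is one reason the discrimination and bias indexes add predictive variance.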

  18. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy.

    PubMed

    Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad

    2017-01-01

    Quarter vehicle model is the simplest representation of a vehicle that belongs to lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, as much as its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth being added to quarter vehicle model, as the inclusion of these details changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details.
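    The base model discussed above is the classic 2-degree-of-freedom quarter car: a sprung mass on a spring-damper suspension over an unsprung mass on a tire spring. A self-contained sketch with generic textbook-style parameters (not values from the study), simulating the response to a step road input:

    ```python
    # Illustrative 2-DOF quarter vehicle model integrated with semi-implicit
    # Euler steps. Parameter values are generic assumptions.

    def simulate_quarter_car(road, dt=1e-4, steps=20000,
                             ms=300.0, mu=40.0,        # sprung/unsprung mass, kg
                             ks=20000.0, kt=180000.0,  # suspension/tire stiffness, N/m
                             cs=1200.0):               # suspension damping, N s/m
        """Return final sprung-mass displacement for road profile road(t)."""
        zs = zu = vzs = vzu = 0.0   # displacements and velocities
        for i in range(steps):
            zr = road(i * dt)
            # suspension force acts between sprung and unsprung masses
            fs = ks * (zu - zs) + cs * (vzu - vzs)
            # tire modeled as a spring between unsprung mass and road
            ft = kt * (zr - zu)
            azs = fs / ms
            azu = (ft - fs) / mu
            vzs += azs * dt
            vzu += azu * dt
            zs += vzs * dt
            zu += vzu * dt
        return zs

    # 10 mm step bump: the sprung mass should settle toward the step height.
    zs_final = simulate_quarter_car(lambda t: 0.01 if t > 0.1 else 0.0)
    print(abs(zs_final - 0.01) < 0.005)
    ```

    The modeling details the study varies (road input richness, tire model, nonlinear stiffness and damping) would replace the constant `ks`, `kt`, `cs` terms and the `road` function in a sketch like this.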

  19. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy

    PubMed Central

    2017-01-01

    Quarter vehicle model is the simplest representation of a vehicle that belongs to lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, as much as its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth being added to quarter vehicle model, as the inclusion of these details changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details. PMID:28617819

  20. Understanding the delayed-keyword effect on metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.