ERIC Educational Resources Information Center
Wade, Ros; Corbett, Mark; Eastwood, Alison
2013-01-01
Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Debono, Josephine C, E-mail: josephine.debono@bci.org.au; Poulos, Ann E; Westmead Breast Cancer Institute, Westmead, New South Wales
The aim of this study was first to evaluate the quality of studies investigating the diagnostic accuracy of radiographers as mammogram screen-readers, and then to develop an adapted tool for determining the quality of screen-reading studies. A literature search was used to identify relevant studies, and a quality evaluation tool was constructed by combining the quality criteria of Whiting, Rutjes, Dinnes et al. and of Brealey and Westwood. This constructed tool was then applied to the studies and subsequently adapted specifically for use in evaluating quality in studies investigating the diagnostic accuracy of screen-readers. Eleven studies were identified and the constructed tool applied to evaluate quality. This evaluation resulted in the identification of quality issues with the studies, such as potential for bias, applicability of results, study conduct, reporting of the study and observer characteristics. An assessment of the applicability and relevance of the tool for this area of research resulted in adaptations to the criteria and the development of a tool specifically for evaluating diagnostic accuracy in screen-reading. This tool, with further refinement and rigorous validation, can make a significant contribution to promoting well-designed studies in this important area of research and practice.
Rice, Danielle B; Kloda, Lorie A; Shrier, Ian; Thombs, Brett D
2016-08-01
Meta-analyses that are conducted rigorously and reported completely and transparently can provide accurate evidence to inform the best possible healthcare decisions. Guideline makers have raised concerns about the utility of existing evidence on the diagnostic accuracy of depression screening tools. The objective of our study was to evaluate the transparency and completeness of reporting in meta-analyses of the diagnostic accuracy of depression screening tools, using the PRISMA tool adapted for diagnostic test accuracy meta-analyses. We searched MEDLINE and PsycINFO from January 1, 2005 through March 13, 2016 for recent meta-analyses in any language on the diagnostic accuracy of depression screening tools. Two reviewers independently assessed transparency in reporting using the PRISMA tool with appropriate adaptations made for studies of diagnostic test accuracy. We identified 21 eligible meta-analyses. Twelve of 21 meta-analyses complied with at least 50% of adapted PRISMA items. Of 30 adapted PRISMA items, 11 were fulfilled by ≥80% of included meta-analyses, 3 by 50-79% of meta-analyses, 7 by 25-45% of meta-analyses, and 9 by <25%. On average, post-PRISMA meta-analyses complied with 17 of 30 items compared to 13 of 30 items pre-PRISMA. Deficiencies in the transparency of reporting in meta-analyses of the diagnostic test accuracy of depression screening tools were identified. Authors, reviewers, and editors should adhere to the PRISMA statement to improve the reporting of meta-analyses of the diagnostic accuracy of depression screening tools. Copyright © 2016 Elsevier Inc. All rights reserved.
Hartling, Lisa; Bond, Kenneth; Santaguida, P Lina; Viswanathan, Meera; Dryden, Donna M
2011-08-01
To develop and test a study design classification tool. We contacted relevant organizations and individuals to identify tools used to classify study designs and ranked these using predefined criteria. The highest ranked tool was a design algorithm developed, but no longer advocated, by the Cochrane Non-Randomized Studies Methods Group; this was modified to include additional study designs and decision points. We developed a reference classification for 30 studies; 6 testers applied the tool to these studies. Interrater reliability (Fleiss' κ) and accuracy against the reference classification were assessed. The tool was further revised and retested. Initial reliability was fair among the testers (κ=0.26) and the reference standard raters (κ=0.33). Testing after revisions showed improved reliability (κ=0.45, moderate agreement) with improved, but still low, accuracy. The most common disagreements were whether the study design was experimental (5 of 15 studies) and whether there was a comparison of any kind (4 of 15 studies). Agreement was higher among testers who had completed graduate-level training than among those who had not. The moderate reliability and low accuracy may be due to lack of clarity and comprehensiveness in the tool, inadequate reporting of the studies, and variability in tester characteristics. The results may not be generalizable to all published studies, as the test studies were selected because they had posed challenges for previous reviewers with respect to their design classification. Application of such a tool should be accompanied by training, pilot testing, and context-specific decision rules. Copyright © 2011 Elsevier Inc. All rights reserved.
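For readers unfamiliar with the interrater statistic reported above, Fleiss' κ for more than two raters can be sketched as follows. This is a minimal illustration: the function name and the example ratings are invented, not taken from the study's data.

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for subjects each rated by the same number of raters.

    `ratings` is a list of per-subject lists of category labels,
    one label per rater (illustrative input, not the study's data).
    """
    n_subjects = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({c for row in ratings for c in row})
    # n_ij: how many raters assigned subject i to category j
    counts = [[Counter(row)[c] for c in categories] for row in ratings]
    # P_i: extent of rater agreement on subject i
    p_i = [(sum(x * x for x in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects
    # p_j: overall proportion of assignments going to category j
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)  # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

With such a function, κ=1 corresponds to perfect agreement and values near 0.26-0.45, as in the abstract, to fair-to-moderate agreement beyond chance.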
Reducing waste in evaluation studies on fall risk assessment tools for older people.
Meyer, Gabriele; Möhler, Ralph; Köpke, Sascha
2018-05-18
To critically appraise the recognition of methodological challenges in evaluation studies of assessment tools and nurses' clinical judgement for fall risk in older people, and to suggest how to reduce the corresponding research waste. Opinion paper and narrative review covering systematic reviews of studies assessing the diagnostic accuracy and impact of assessment tools and/or nurses' clinical judgement. Eighteen reviews published in the last 15 years were analysed. Only one reflects on potentially important factors that threaten the accuracy of assessments when verification is delayed, with fall events after a certain period serving as the reference: the natural course of fall risk, preventive measures, and the treatment paradox, in which accurate assessment leads to fall prevention, thereby influencing the reference standard and falsely indicating low diagnostic accuracy. Likewise, only one review mentions randomised controlled trials as the appropriate study design for investigating the impact of fall risk assessment tools on patient-important outcomes. To date, only one randomised controlled trial has addressed this question, showing no effect on falls and injuries. Instead of investigating the diagnostic accuracy of fall assessment tools, future research should focus on whether implementing fall assessment tools is effective at reducing falls and injuries. Copyright © 2018. Published by Elsevier Inc.
Dall, PM; Coulter, EH; Fitzsimons, CF; Skelton, DA; Chastin, SFM
2017-01-01
Objective Sedentary behaviour (SB) has distinct deleterious health outcomes, yet there is no consensus on best practice for measurement. This study aimed to identify the optimal self-report tool for population surveillance of SB, using a systematic framework. Design A framework, TAxonomy of Self-reported Sedentary behaviour Tools (TASST), consisting of four domains (type of assessment, recall period, temporal unit and assessment period), was developed based on a systematic inventory of existing tools. The inventory was achieved through a systematic review of studies reporting SB and tracing back to the original description. A systematic review of the accuracy and sensitivity to change of these tools was then mapped against TASST domains. Data sources Systematic searches were conducted via EBSCO, reference lists and expert opinion. Eligibility criteria for selecting studies The inventory included tools measuring SB in adults that could be self-completed at one sitting, and excluded tools measuring SB in specific populations or contexts. The systematic review included studies reporting on the accuracy against an objective measure of SB and/or sensitivity to change of a tool in the inventory. Results The systematic review initially identified 32 distinct tools (141 questions), which were used to develop the TASST framework. Twenty-two studies evaluated accuracy and/or sensitivity to change representing only eight taxa. Assessing SB as a sum of behaviours and using a previous day recall were the most promising features of existing tools. Accuracy was poor for all existing tools, with underestimation and overestimation of SB. There was a lack of evidence about sensitivity to change. 
Conclusions Despite the limited evidence, mapping existing SB tools onto the TASST framework has enabled informed recommendations to be made about the most promising features for a surveillance tool and has identified aspects on which future research and development of SB surveillance tools should focus. Trial registration number International prospective register of systematic reviews (PROSPERO)/CRD42014009851. PMID:28391233
Huysentruyt, Koen; Devreker, Thierry; Dejonckheere, Joachim; De Schepper, Jean; Vandenplas, Yvan; Cools, Filip
2015-08-01
The aim of the present study was to evaluate the predictive accuracy of screening tools for assessing nutritional risk in hospitalized children in developed countries. The study involved a systematic review of literature (MEDLINE, EMBASE, and Cochrane Central databases up to January 17, 2014) of studies on the diagnostic performance of pediatric nutritional screening tools. Methodological quality was assessed using a modified QUADAS tool. Sensitivity and specificity were calculated for each screening tool per validation method. A meta-analysis was performed to estimate the risk ratio of different screening result categories of being truly at nutritional risk. A total of 11 studies were included on ≥1 of the following screening tools: Pediatric Nutritional Risk Score, Screening Tool for the Assessment of Malnutrition in Paediatrics, Paediatric Yorkhill Malnutrition Score, and Screening Tool for Risk on Nutritional Status and Growth. Because of variation in reference standards, a direct comparison of the predictive accuracy of the screening tools was not possible. A meta-analysis was performed on 1629 children from 7 different studies. The risk ratio of being truly at nutritional risk was 0.349 (95% confidence interval [CI] 0.16-0.78) for children in the low versus moderate screening category and 0.292 (95% CI 0.19-0.44) in the moderate versus high screening category. There is insufficient evidence to choose 1 nutritional screening tool over another based on their predictive accuracy. The estimated risk of being at "true nutritional risk" increases with each category of screening test result. Each screening category should be linked to a specific course of action, although further research is needed.
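The risk ratios with 95% confidence intervals reported above follow the standard log-risk-ratio (Wald) construction. The sketch below shows that computation from a 2x2 table of counts; the function name and the counts in the test are hypothetical, not drawn from the review.

```python
import math

def risk_ratio_ci(a, b, c, d, z=1.96):
    """Risk ratio and Wald 95% CI from a 2x2 table.

    a/b: at-risk / not-at-risk children in group 1 (e.g. low screening category)
    c/d: at-risk / not-at-risk children in group 2 (e.g. moderate category)
    Counts are illustrative, not taken from the meta-analysis.
    """
    rr = (a / (a + b)) / (c / (c + d))
    # standard error of log(RR)
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

An interval that excludes 1, such as the review's 0.349 (0.16-0.78), indicates the two screening categories genuinely differ in the proportion of children truly at nutritional risk.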
McGovern, Aine; Pendlebury, Sarah T; Mishra, Nishant K; Fan, Yuhua; Quinn, Terence J
2016-02-01
Poststroke cognitive assessment can be performed using standardized questionnaires designed for family or care givers. We sought to describe the test accuracy of such informant-based assessments for diagnosis of dementia/multidomain cognitive impairment in stroke. We performed a systematic review using a sensitive search strategy across multidisciplinary electronic databases. We created summary test accuracy metrics and described reporting and quality using STARDdem and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tools, respectively. From 1432 titles, we included 11 studies. Ten papers used the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Four studies described IQCODE for diagnosis of poststroke dementia (n=1197); summary sensitivity: 0.81 (95% confidence interval, 0.60-0.93); summary specificity: 0.83 (95% confidence interval, 0.64-0.93). Five studies described IQCODE as a tool for predicting future dementia (n=837); summary sensitivity: 0.60 (95% confidence interval, 0.32-0.83); summary specificity: 0.97 (95% confidence interval, 0.70-1.00). All papers had issues with at least 1 aspect of study reporting or quality. There is a limited literature on informant cognitive assessments in stroke. As a diagnostic tool, IQCODE has test properties similar to other screening tools; as a prognostic tool, it is specific but insensitive. We found no papers describing the test accuracy of informant tests for diagnosis of prestroke cognitive decline, few papers on poststroke dementia, and all included papers had issues with potential bias. © 2015 American Heart Association, Inc.
A systematic review of the PTSD Checklist's diagnostic accuracy studies using QUADAS.
McDonald, Scott D; Brown, Whitney L; Benesek, John P; Calhoun, Patrick S
2015-09-01
Despite the popularity of the PTSD Checklist (PCL) as a clinical screening test, there has been no comprehensive quality review of studies evaluating its diagnostic accuracy. A systematic quality assessment of 22 diagnostic accuracy studies of the English-language PCL using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) assessment tool was conducted to examine (a) the quality of diagnostic accuracy studies of the PCL, and (b) whether quality has improved since the 2003 STAndards for the Reporting of Diagnostic accuracy studies (STARD) initiative regarding reporting guidelines for diagnostic accuracy studies. Three raters independently applied the QUADAS tool to each study, and a consensus among the 4 authors is reported. Findings indicated that although studies generally met standards in several quality areas, there is still room for improvement. Areas for improvement include establishing representativeness, adequately describing clinical and demographic characteristics of the sample, and presenting better descriptions of important aspects of test and reference standard execution. Only 2 studies met each of the 14 quality criteria. In addition, study quality has not appreciably improved since the publication of the STARD Statement in 2003. Recommendations for the improvement of diagnostic accuracy studies of the PCL are discussed. (c) 2015 APA, all rights reserved.
Håkonsen, Sasja Jul; Pedersen, Preben Ulrich; Bath-Hextall, Fiona; Kirkpatrick, Pamela
2015-05-15
Effective nutritional screening, nutritional care planning and nutritional support are essential in all settings, and there is no doubt that a health service seeking to increase safety and clinical effectiveness must take nutritional care seriously. Screening and early detection of malnutrition is crucial in identifying patients at nutritional risk. There is a high prevalence of malnutrition in hospitalized patients undergoing treatment for colorectal cancer. To synthesize the best available evidence regarding the diagnostic test accuracy of nutritional tools (sensitivity and specificity) used to identify malnutrition (specifically undernutrition) in patients with colorectal cancer (such as the Malnutrition Screening Tool and Nutritional Risk Index) compared to reference tests (such as the Subjective Global Assessment or Patient Generated Subjective Global Assessment). Patients with colorectal cancer requiring either (or all) surgery, chemotherapy and/or radiotherapy in secondary care. Focus of the review: The diagnostic test accuracy of validated assessment tools/instruments (such as the Malnutrition Screening Tool and Nutritional Risk Index) in the diagnosis of malnutrition (specifically under-nutrition) in patients with colorectal cancer, relative to reference tests (Subjective Global Assessment or Patient Generated Subjective Global Assessment). Types of studies: Diagnostic test accuracy studies regardless of study design. Studies published in English, German, Danish, Swedish and Norwegian were considered for inclusion in this review. Databases were searched from their inception to April 2014. Methodological quality was determined using the Quality Assessment of Diagnostic Accuracy Studies checklist. Data was collected using the data extraction form: the Standards for Reporting Studies of Diagnostic Accuracy checklist for the reporting of studies of diagnostic accuracy. 
The accuracy of diagnostic tests is presented in terms of sensitivity, specificity, and positive and negative predictive values. In addition, the positive likelihood ratio (sensitivity / [1 - specificity]) and negative likelihood ratio ([1 - sensitivity] / specificity) were also calculated and presented in this review, to provide information about the likelihood that a given test result would be expected when the target condition is present compared with the likelihood that the same result would be expected when the condition is absent. Not all trials reported true positive, true negative, false positive and false negative rates, so these rates were calculated from the data in the published papers. A two-by-two truth table was reconstructed for each study, and sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio and negative likelihood ratio were calculated for each study. A summary receiver operator characteristics curve was constructed to determine the relationship between sensitivity and specificity, and the area under this curve, which measures the usefulness of a test, was calculated. Meta-analysis was not considered appropriate, so data were synthesized in a narrative summary. One study evaluated the Malnutrition Screening Tool against the reference standard Patient-Generated Subjective Global Assessment (PG-SGA). The sensitivity was 56% and the specificity 84%. The positive likelihood ratio was 3.100, the negative likelihood ratio was 0.59, the diagnostic odds ratio (95% CI) was 5.20 (1.09-24.90), and the Area Under the Curve (AUC) represented only poor to fair diagnostic test accuracy. Two studies evaluated the diagnostic accuracy of the Malnutrition Universal Screening Tool (MUST) (index test) compared to both the Subjective Global Assessment (SGA) (reference standard) and the PG-SGA (reference standard) in patients with colorectal cancer.
In MUST vs SGA, the sensitivity of the tool was 96%, specificity 75%, LR+ 3.826, LR- 0.058, diagnostic OR (95% CI) 66.00 (6.61-659.24), and the AUC represented excellent diagnostic accuracy. In MUST vs PG-SGA, the sensitivity of the tool was 72%, specificity 48.9%, LR+ 1.382, LR- 0.579, diagnostic OR (95% CI) 2.39 (0.87-6.58), and the AUC indicated that the tool failed as a diagnostic test to identify patients with colorectal cancer at nutritional risk. The Nutrition Risk Index (NRI) compared to SGA had a sensitivity of 95.2%, specificity of 62.5%, LR+ 2.521, LR- 0.087, diagnostic OR (95% CI) 28.89 (6.93-120.40), and the AUC represented good diagnostic accuracy. For NRI vs PG-SGA, the sensitivity of the tool was 68%, specificity 64%, LR+ 1.947, LR- 0.487, and the AUC, with a diagnostic OR (95% CI) of 4.00 (1.23-13.01), indicated poor diagnostic test accuracy. There is no single, specific tool used to screen or assess the nutritional status of colorectal cancer patients. All tools showed varied diagnostic accuracy when compared to the reference standards SGA and PG-SGA. Hence clinical judgment, combined perhaps with the SGA or PG-SGA, should play a major role. The PG-SGA offers several advantages over the SGA tool: 1) the patient completes the medical history component, thereby decreasing the amount of time involved; 2) it contains more nutrition impact symptoms, which are important to the patient with cancer; and 3) it has a scoring system that allows patients to be triaged for nutritional intervention. Therefore, the PG-SGA could be used as a nutrition assessment tool, as it allows quick identification and prioritization of colorectal cancer patients with malnutrition in combination with other parameters. This systematic review highlights the need for further studies investigating the diagnostic accuracy of existing nutritional screening tools in the context of colorectal cancer patients.
If new screening tools are developed, they should be developed and validated in the specific clinical context and within the same patient population (colorectal cancer patients). The Joanna Briggs Institute.
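The sensitivity, specificity, predictive values and likelihood ratios reported throughout the review above all derive from a reconstructed two-by-two truth table, using the formulas LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A minimal sketch of those standard computations, with hypothetical counts in the usage example:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy metrics from a 2x2 truth table.

    tp/fn: index-test positives/negatives among truly malnourished patients
    fp/tn: index-test positives/negatives among well-nourished patients
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_plus = sens / (1 - spec)    # sensitivity / (1 - specificity)
    lr_minus = (1 - sens) / spec   # (1 - sensitivity) / specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": lr_plus,
        "lr_minus": lr_minus,
        "dor": lr_plus / lr_minus,  # diagnostic odds ratio
    }
```

For example, a hypothetical table with tp=8, fp=2, fn=2, tn=18 yields sensitivity 0.8, specificity 0.9, LR+ 8.0, and a diagnostic odds ratio of 36.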
Continuous Glucose Monitoring and Trend Accuracy
Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey
2014-01-01
Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy to interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple to use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and if required, performance table and TI metric. PMID:24876437
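The abstract above quantifies trend accuracy using the dot product of reference (BG) and sensor (SG) trend vectors. A rough sketch of that idea, representing each trend as a unit vector in the time-glucose plane and taking the dot product (1 for identical trends, approaching -1 for opposite trends), is below. This is an assumption-laden illustration of the principle, not the paper's exact Trend Compass computation.

```python
import math

def trend_agreement(bg_rate, sg_rate, dt=1.0):
    """Dot-product agreement between reference (BG) and sensor (SG) trends.

    bg_rate / sg_rate: glucose rate of change (e.g. mmol/L per hour)
    from the reference measurements and the CGM sensor, respectively.
    """
    v_bg = (dt, bg_rate)
    v_sg = (dt, sg_rate)
    dot = v_bg[0] * v_sg[0] + v_bg[1] * v_sg[1]
    # normalize so the result is the cosine of the angle between trends
    return dot / (math.hypot(*v_bg) * math.hypot(*v_sg))
```

A falling BG paired with a rising SG, the hazardous case highlighted in the abstract, produces a negative agreement score.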
Diagnostic Accuracy of Fall Risk Assessment Tools in People With Diabetic Peripheral Neuropathy
Pohl, Patricia S.; Mahnken, Jonathan D.; Kluding, Patricia M.
2012-01-01
Background Diabetic peripheral neuropathy affects nearly half of individuals with diabetes and leads to increased fall risk. Evidence addressing fall risk assessment for these individuals is lacking. Objective The purpose of this study was to identify which of 4 functional mobility fall risk assessment tools best discriminates, in people with diabetic peripheral neuropathy, between recurrent “fallers” and those who are not recurrent fallers. Design A cross-sectional study was conducted. Setting The study was conducted in a medical research university setting. Participants The participants were a convenience sample of 36 individuals between 40 and 65 years of age with diabetic peripheral neuropathy. Measurements Fall history was assessed retrospectively and was the criterion standard. Fall risk was assessed using the Functional Reach Test, the Timed “Up & Go” Test, the Berg Balance Scale, and the Dynamic Gait Index. Sensitivity, specificity, positive and negative likelihood ratios, and overall diagnostic accuracy were calculated for each fall risk assessment tool. Receiver operating characteristic curves were used to estimate modified cutoff scores for each fall risk assessment tool; indexes then were recalculated. Results Ten of the 36 participants were classified as recurrent fallers. When traditional cutoff scores were used, the Dynamic Gait Index and Functional Reach Test demonstrated the highest sensitivity at only 30%; the Dynamic Gait Index also demonstrated the highest overall diagnostic accuracy. When modified cutoff scores were used, all tools demonstrated improved sensitivity (80% or 90%). Overall diagnostic accuracy improved for all tests except the Functional Reach Test; the Timed “Up & Go” Test demonstrated the highest diagnostic accuracy at 88.9%. Limitations The small sample size and retrospective fall history assessment were limitations of the study. 
Conclusions Modified cutoff scores improved diagnostic accuracy for 3 of 4 fall risk assessment tools when testing people with diabetic peripheral neuropathy. PMID:22836004
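The modified cutoff scores above were estimated from receiver operating characteristic curves. One common criterion for choosing such a cutoff, assumed here purely for illustration, is maximizing Youden's J (sensitivity + specificity - 1) over candidate thresholds; the function name and the scores in the example are made up, not the study's data.

```python
def best_cutoff(scores_fallers, scores_nonfallers, higher_is_risk=False):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    By default, lower scores indicate higher fall risk (as with balance
    scales scored out of a maximum); set higher_is_risk=True otherwise.
    """
    candidates = sorted(set(scores_fallers) | set(scores_nonfallers))
    best = None
    for c in candidates:
        if higher_is_risk:
            tp = sum(s >= c for s in scores_fallers)
            tn = sum(s < c for s in scores_nonfallers)
        else:
            tp = sum(s <= c for s in scores_fallers)
            tn = sum(s > c for s in scores_nonfallers)
        sens = tp / len(scores_fallers)
        spec = tn / len(scores_nonfallers)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best  # (J, cutoff, sensitivity, specificity)
```

Shifting the cutoff trades specificity for sensitivity, which is how the study raised sensitivity from 30% to 80-90% with modified scores.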
Kane, Greg
2013-11-04
A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect's driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice.
Identification of facilitators and barriers to residents' use of a clinical reasoning tool.
DiNardo, Deborah; Tilstra, Sarah; McNeil, Melissa; Follansbee, William; Zimmer, Shanta; Farris, Coreen; Barnato, Amber E
2018-03-28
While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among postgraduate year 3 (PGY-3) residents at a single internal medicine residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy of the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators and barriers to use of the tool. Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. The average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable but did not recognize its utility for use with simple cases, suggesting the presence of overconfidence bias. A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Callaghan, Michael E., E-mail: elspeth.raymond@health.sa.gov.au; Freemasons Foundation Centre for Men's Health, University of Adelaide; Urology Unit, Repatriation General Hospital, SA Health, Flinders Centre for Innovation in Cancer
Purpose: To identify, through a systematic review, all validated tools used for the prediction of patient-reported outcome measures (PROMs) in patients being treated with radiation therapy for prostate cancer, and provide a comparative summary of accuracy and generalizability. Methods and Materials: PubMed and EMBASE were searched from July 2007. Title/abstract screening, full text review, and critical appraisal were undertaken by 2 reviewers, whereas data extraction was performed by a single reviewer. Eligible articles had to provide a summary measure of accuracy and undertake internal or external validation. Tools were recommended for clinical implementation if they had been externally validated and found to have accuracy ≥70%. Results: The search strategy identified 3839 potential studies, of which 236 progressed to full text review and 22 were included. From these studies, 50 tools predicted gastrointestinal/rectal symptoms, 29 tools predicted genitourinary symptoms, 4 tools predicted erectile dysfunction, and no tools predicted quality of life. For patients treated with external beam radiation therapy, 3 tools could be recommended for the prediction of rectal toxicity, gastrointestinal toxicity, and erectile dysfunction. For patients treated with brachytherapy, 2 tools could be recommended for the prediction of urinary retention and erectile dysfunction. Conclusions: A large number of tools for the prediction of PROMs in prostate cancer patients treated with radiation therapy have been developed. Only a small minority are accurate and have been shown to be generalizable through external validation. This review provides an accessible catalogue of tools that are ready for clinical implementation as well as which should be prioritized for validation.
Qu, Y J; Yang, Z R; Sun, F; Zhan, S Y
2018-04-10
This paper introduces the Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2), covering its development and a comparison with the original QUADAS, and illustrates the application of QUADAS-2 to a published diagnostic accuracy study included in a systematic review and meta-analysis. QUADAS-2 represents a considerable improvement over the original tool. Confusing items included in QUADAS have been removed, and the quality assessment of the original study has been replaced by ratings of risk of bias and applicability. This is implemented through four main domains with minimal overlap, assessed by answering the signalling questions in each domain. Rating risk of bias and applicability as 'high', 'low' or 'unclear' is in line with the Cochrane risk-of-bias assessment for intervention studies and replaces the total quality score of QUADAS. QUADAS-2 is also applicable to diagnostic accuracy studies in which follow-up without prognosis is involved in the gold standard. It is useful for assessing the overall methodological quality of a study, despite being more time-consuming than the original QUADAS. However, QUADAS-2 still needs modification to apply to comparative studies of diagnostic accuracy, and we hope users will follow the updates and provide feedback online.
2013-01-01
Background A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect’s driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Results Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. Conclusion The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice. PMID:24188398
Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R
2015-05-13
Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). 
Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico tools. The combination of in silico tools with the best performance is gene-dependent. The in silico tools reported here may have some value in assessing variants in the KCNQ1 and KCNH2 genes, but caution should be taken when the analysis is applied to SCN5A gene variants.
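The summary statistics reported above (accuracy, sensitivity, specificity and MCC) all derive from a 2x2 confusion matrix with "pathogenic" as the positive class. A minimal sketch; the example counts are illustrative only, not the paper's confusion matrix:

```python
import math

def classifier_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and Matthews Correlation
    Coefficient from a 2x2 confusion matrix (pathogenic = positive)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of pathogenic variants caught
    specificity = tn / (tn + fp)   # fraction of benign variants caught
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Hypothetical counts for a 312-variant set (NOT taken from the paper):
acc, sens, spec, mcc = classifier_metrics(tp=260, tn=25, fp=4, fn=23)
```

MCC is the more informative summary here because the class balance is extreme (283 pathogenic vs. 29 benign), so raw accuracy can look high even when benign variants are routinely misclassified.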
NASA Astrophysics Data System (ADS)
Zhou, X.; Wang, G.; Yan, B.; Kearns, T.
2016-12-01
Terrestrial laser scanning (TLS) techniques have proven to be efficient tools for collecting three-dimensional, high-density, high-accuracy point clouds for coastal research and resource management. However, processing and presenting massive TLS data remains a challenge when targeting a large area at high resolution. This article introduces a workflow using shell-scripting techniques to chain together tools from the Generic Mapping Tools (GMT), the Geographic Resources Analysis Support System (GRASS), and other command-based open-source utilities for automating TLS data processing. TLS point clouds acquired in the beach and dune area near Freeport, Texas in May 2015 were used for the case study. Shell scripts for rotating the coordinate system, removing anomalous points, assessing data quality, generating high-accuracy bare-earth DEMs, and quantifying beach and sand dune features (shoreline, cross-dune section, dune ridge, toe, and volume) are presented. According to this investigation, the accuracy of the laser measurements (distance from the scanner to the targets) is within a couple of centimeters. However, the positional accuracy of TLS points with respect to a global coordinate system is about 5 cm, dominated by the accuracy of the GPS solutions used to position the scanner and reflector. The accuracy of the TLS-derived bare-earth DEM is primarily determined by the size of the grid cells and the roughness of the terrain surface. For this case study, a DEM with grid cells of 4 m x 1 m (shoreline by cross-shore) provides a suitable spatial resolution and accuracy for deriving major beach and dune features.
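One step of such a pipeline, gridding a point cloud into a bare-earth DEM, can be sketched as minimum-z binning (keeping the lowest return per cell as a ground surrogate). This is a simplifying assumption for illustration, not the GMT/GRASS tool chain the authors actually script:

```python
def grid_min_z(points, cell_dx=4.0, cell_dy=1.0):
    """Bin (x, y, z) points into cells of cell_dx by cell_dy metres and
    keep the minimum z per cell. Lowest-return-per-cell is a crude
    bare-earth heuristic: vegetation returns sit above the ground."""
    dem = {}
    for x, y, z in points:
        key = (int(x // cell_dx), int(y // cell_dy))  # cell indices
        if key not in dem or z < dem[key]:
            dem[key] = z
    return dem

# Three points: the first two fall in one 4 m x 1 m cell, the third in another.
pts = [(0.5, 0.2, 1.8), (0.9, 0.3, 1.2), (5.0, 0.1, 0.7)]
dem = grid_min_z(pts)  # {(0, 0): 1.2, (1, 0): 0.7}
```

The 4 m x 1 m default mirrors the cell size the abstract reports as a suitable trade-off between resolution and accuracy.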
Evaluation of the accuracy of GPS as a method of locating traffic collisions.
DOT National Transportation Integrated Search
2004-06-01
The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. : The analysis s...
Signal Detection Theory as a Tool for Successful Student Selection
ERIC Educational Resources Information Center
van Ooijen-van der Linden, Linda; van der Smagt, Maarten J.; Woertman, Liesbeth; te Pas, Susan F.
2017-01-01
Prediction accuracy of academic achievement for admission purposes requires adequate "sensitivity" and "specificity" of admission tools, yet the available information on the validity and predictive power of admission tools is largely based on studies using correlational and regression statistics. The goal of this study was to…
On aerodynamic wake analysis and its relation to total aerodynamic drag in a wind tunnel environment
NASA Astrophysics Data System (ADS)
Guterres, Rui M.
The present work was developed with the goal of advancing the state of the art in the application of three-dimensional wake data analysis to the quantification of aerodynamic drag on a body in a low speed wind tunnel environment. Analysis of the existing tools, their strengths and limitations is presented. Improvements to the existing analysis approaches were made. Software tools were developed to integrate the analysis into a practical tool. A comprehensive derivation of the equations needed for drag computations based on three dimensional separated wake data is developed. A set of complete steps ranging from the basic mathematical concept to the applicable engineering equations is presented. An extensive experimental study was conducted. Three representative body types were studied in varying ground effect conditions. A detailed qualitative wake analysis using wake imaging and two and three dimensional flow visualization was performed. Several significant features of the flow were identified and their relation to the total aerodynamic drag established. A comprehensive wake study of this type is shown to be in itself a powerful tool for the analysis of the wake aerodynamics and its relation to body drag. Quantitative wake analysis techniques were developed. Significant post processing and data conditioning tools and precision analysis were developed. The quality of the data is shown to be in direct correlation with the accuracy of the computed aerodynamic drag. Steps are taken to identify the sources of uncertainty. These are quantified when possible and the accuracy of the computed results is seen to significantly improve. When post processing alone does not resolve issues related to precision and accuracy, solutions are proposed. The improved quantitative wake analysis is applied to the wake data obtained. Guidelines are established that will lead to more successful implementation of these tools in future research programs. 
Close attention is paid to implementation issues that are of crucial importance for the accuracy of the results and that are not detailed in the literature. The impact of ground effect on the flows at hand is qualitatively and quantitatively studied. Its impact on the accuracy of the computations, as well as the incompatibility of wall drag with the theoretical model followed, is discussed. The newly developed quantitative analysis provides significantly increased accuracy: for the best cases, the aerodynamic drag coefficient is computed to within one percent of the balance-measured value.
75 FR 13289 - Agency Information Collection Request, 60-Day Public Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-19
[Fragment of the notice's burden-estimate table: CPPW Cost Study Tool, Program Director and Business Manager respondents, 45 respondents each.] ... performance of the agency's functions; (2) the accuracy of the estimated burden; (3) ways to enhance the ... Prevention to Work Cost Study Instrument--OMB No. 0990-NEW--Office of the Assistant Secretary for Planning...
ERIC Educational Resources Information Center
Igoe, D. P.; Parisi, A. V.; Wagner, S.
2017-01-01
Smartphones used as tools provide opportunities for the teaching of the concepts of accuracy and precision and the mathematical concept of arctan. The accuracy and precision of a trigonometric experiment using entirely mechanical tools is compared to one using electronic tools, such as a smartphone clinometer application and a laser pointer. This…
Fontela, Patricia Scolari; Pant Pai, Nitika; Schiller, Ian; Dendukuri, Nandini; Ramsay, Andrew; Pai, Madhukar
2009-11-13
Poor methodological quality and reporting are known concerns with diagnostic accuracy studies. In 2003, the QUADAS tool and the STARD standards were published for evaluating the quality and improving the reporting of diagnostic studies, respectively. However, it is unclear whether these tools have been applied to diagnostic studies of infectious diseases. We performed a systematic review on the methodological and reporting quality of diagnostic studies in TB, malaria and HIV. We identified diagnostic accuracy studies of commercial tests for TB, malaria and HIV through a systematic search of the literature using PubMed and EMBASE (2004-2006). Original studies that reported sensitivity and specificity data were included. Two reviewers independently extracted data on study characteristics and diagnostic accuracy, and used QUADAS and STARD to evaluate the quality of methods and reporting, respectively. Ninety (38%) of 238 articles met inclusion criteria. All studies had design deficiencies. Study quality indicators that were met in less than 25% of the studies included adequate description of withdrawals (6%) and reference test execution (10%), absence of index test review bias (19%) and reference test review bias (24%), and report of uninterpretable results (22%). In terms of quality of reporting, 9 STARD indicators were reported in less than 25% of the studies: methods for calculation and estimates of reproducibility (0%), adverse effects of the diagnostic tests (1%), estimates of diagnostic accuracy between subgroups (10%), distribution of severity of disease/other diagnoses (11%), number of eligible patients who did not participate in the study (14%), blinding of the test readers (16%), and description of the team executing the test and management of indeterminate/outlier results (both 17%). The use of STARD was not explicitly mentioned in any study. Only 22% of 46 journals that published the studies included in this review required authors to use STARD. 
Recently published diagnostic accuracy studies on commercial tests for TB, malaria and HIV have moderate to low quality and are poorly reported. The more frequent use of tools such as QUADAS and STARD may be necessary to improve the methodological and reporting quality of future diagnostic accuracy studies in infectious diseases.
The development of a quality appraisal tool for studies of diagnostic reliability (QAREL).
Lucas, Nicholas P; Macaskill, Petra; Irwig, Les; Bogduk, Nikolai
2010-08-01
In systematic reviews of the reliability of diagnostic tests, no quality assessment tool has been used consistently. The aim of this study was to develop a specific quality appraisal tool for studies of diagnostic reliability. Key principles for the quality of studies of diagnostic reliability were identified with reference to epidemiologic principles, existing quality appraisal checklists, and the Standards for Reporting of Diagnostic Accuracy (STARD) and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) resources. Specific items that encompassed each of the principles were developed. Experts in diagnostic research provided feedback on the items that were to form the appraisal tool. This process was iterative and continued until consensus among experts was reached. The Quality Appraisal of Reliability Studies (QAREL) checklist includes 11 items that explore seven principles. Items cover the spectrum of subjects, spectrum of examiners, examiner blinding, order effects of examination, suitability of the time interval among repeated measurements, appropriate test application and interpretation, and appropriate statistical analysis. QAREL has been developed as a specific quality appraisal tool for studies of diagnostic reliability. The reliability of this tool in different contexts needs to be evaluated. Copyright (c) 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, X.; Srinivasan, R.
2008-12-01
In this study, a user-friendly GIS tool was developed for evaluating and improving NEXRAD precipitation estimates using raingauge data. The tool can automatically read in raingauge and NEXRAD data, evaluate the accuracy of NEXRAD for each time unit, implement several geostatistical methods to improve the accuracy of NEXRAD using raingauge data, and output spatial precipitation maps for distributed hydrologic models. The geostatistical methods incorporated in the tool include Simple Kriging with varying local means, Kriging with External Drift, Regression Kriging, Co-Kriging, and a method newly developed by Li et al. (2008). The tool was applied in two test watersheds at hourly and daily temporal scales. Preliminary cross-validation results show that incorporating raingauge data to calibrate NEXRAD can markedly change the spatial pattern of NEXRAD and improve its accuracy. Using different geostatistical methods, the GIS tool was applied to produce long-term precipitation input for a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT). An animated video was generated to vividly illustrate the effect of different precipitation inputs on distributed hydrologic modeling. Currently, the GIS tool is implemented as an extension of SWAT, which is used as a water quantity and quality modeling tool by the USDA and EPA. The flexible, module-based design of the tool also makes it easy to adapt to other hydrologic models for hydrological modeling and water resources management.
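The cross-validation used to compare such interpolation methods is typically leave-one-out: each gauge is predicted from all the others and the errors are pooled. A minimal harness, with inverse-distance weighting standing in for the kriging variants (which require a full covariance model and are beyond a short sketch):

```python
import math

def idw(x, y, stations, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from gauges given
    as (sx, sy, value) triples. A placeholder interpolator; the study
    itself uses kriging variants."""
    num = den = 0.0
    for sx, sy, v in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v  # exact hit on a gauge location
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def loo_rmse(stations):
    """Leave-one-out cross-validation: predict each gauge from the rest
    and return the root-mean-square error."""
    errs = []
    for i, (sx, sy, v) in enumerate(stations):
        rest = stations[:i] + stations[i + 1:]
        errs.append(idw(sx, sy, rest) - v)
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

Swapping `idw` for each geostatistical method and comparing `loo_rmse` values is the shape of the comparison the abstract describes.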
2013-01-01
Background Early detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on the accuracy of tools proposed to identify abused children before their death, and to assess whether any were suited to screening. Methods We searched PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating the diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and accuracy parameters. Study quality was assessed using QUADAS criteria. Results A total of 2,280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even without considering the lack of a gold standard for the detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% for only three tests: the absence of scalp swelling to identify child victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%. Conclusions In 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children.
Identified tools were not suited to screening because of low sensitivity and because they identified abused children late, when they already showed serious consequences of maltreatment. Development of valid screening instruments is a prerequisite to considering screening programs. PMID:24314318
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. Inaccuracy in CNC machines can be caused by geometric errors, which are an important factor both during the manufacturing process and during the assembly phase, and which must be controlled to build high-accuracy machines. The accuracy of a three-axis vertical milling machine can be improved by identifying the geometric errors and their position parameters in the machine tool through mathematical modeling. The geometric error of the machine tool comprises twenty-one parameters: nine linear error parameters, nine angular error parameters and three perpendicularity (squareness) error parameters. The mathematical model expresses the alignment and angular errors of the components supporting machine motion, namely the linear guideways and linear drives. The purpose of this modeling approach is to identify geometric errors so that the model can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools illustrates the relationship between alignment error, position and angle along the linear guideways of three-axis vertical milling machines.
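Models of this kind usually express each axis's six errors (three translational, three angular) as a homogeneous transformation under the small-angle assumption, then compose the transforms of all axes. A sketch of one axis's transform, not the authors' exact formulation:

```python
def axis_error_htm(dx, dy, dz, ea, eb, ec):
    """4x4 homogeneous transform for one linear axis under the
    small-angle assumption: translational errors (dx, dy, dz) and
    angular errors (ea, eb, ec) about X, Y, Z respectively."""
    return [
        [1.0, -ec,  eb,  dx],
        [ ec, 1.0, -ea,  dy],
        [-eb,  ea, 1.0,  dz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply_htm(htm, point):
    """Map a 3-D point through the 4x4 transform."""
    v = (*point, 1.0)
    return tuple(sum(row[i] * v[i] for i in range(4)) for row in htm[:3])

# A pure 0.5 mm positioning error along X shifts the tool tip by 0.5 in x:
tip = apply_htm(axis_error_htm(0.5, 0.0, 0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

Chaining one such matrix per axis (plus the three squareness terms) yields the twenty-one-parameter model the abstract describes.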
Sentiment Analysis of Health Care Tweets: Review of the Methods Used.
Gohil, Sunir; Vuik, Sabine; Darzi, Ara
2018-04-23
Twitter is a microblogging service where users can send and read short 140-character messages called "tweets." Many unstructured, free-text tweets relating to health care are shared on Twitter, which is becoming a popular arena for health care research. Sentiment is a metric commonly used to investigate the positive or negative opinion within these messages. Exploring the methods used for sentiment analysis in Twitter health care research may allow us to better understand the options available for future research in this growing field. The first objective of this study was to understand which tools are available for sentiment analysis in Twitter health care research, by reviewing existing studies in this area and the methods they used. The second objective was to determine which methods work best in the health care setting, by analyzing how the methods were used to answer specific health care questions, how the tools were produced, and how their accuracy was analyzed. A review of the literature pertaining to Twitter and health care research was conducted, covering studies that used a quantitative method of sentiment analysis for the free-text messages (tweets). The study compared the types of tools used in each case and examined the methods for tool production, tool training, and analysis of accuracy. A total of 12 papers studying the quantitative measurement of sentiment in the health care setting were found. More than half of these studies produced tools specifically for their research, 4 used freely available open source tools, and 2 used commercially available software. Moreover, 4 of the 12 tools were trained using a smaller sample of the study's final data. On average, the sentiment methods were trained against 0.45% (2816/627,024) of the total sample data. Only 1 of the 12 papers commented on the accuracy of the tool used. Multiple methods are used for sentiment analysis of tweets in the health care setting.
These range from self-produced basic categorizations to more complex and expensive commercial software. The open source and commercial methods are developed on product reviews and generic social media messages. None of these methods have been extensively tested against a corpus of health care messages to check their accuracy. This study suggests that there is a need for an accurate and tested tool for sentiment analysis of tweets trained using a health care setting-specific corpus of manually annotated tweets first. ©Sunir Gohil, Sabine Vuik, Ara Darzi. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 23.04.2018.
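The "self-produced basic categorizations" at the simple end of that range are typically lexicon lookups. A toy sketch; the word lists are invented for illustration and are far smaller than any real sentiment lexicon:

```python
# Tiny illustrative lexicons (hypothetical, not from any published tool):
POS = {"good", "great", "relieved", "better", "thanks"}
NEG = {"bad", "pain", "worse", "sick", "tired"}

def sentiment(tweet):
    """Lexicon-based polarity: +1 per positive word, -1 per negative
    word, collapsed to -1 / 0 / +1. Real tools use trained classifiers
    or much larger, domain-specific lexicons."""
    words = tweet.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return (score > 0) - (score < 0)
```

The review's conclusion follows directly from sketches like this: a lexicon built from product reviews will miss health-specific usage ("positive" test results, "negative" biopsies), which is why a health-care-specific annotated corpus is needed.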
Dynamics of Complexity and Accuracy: A Longitudinal Case Study of Advanced Untutored Development
ERIC Educational Resources Information Center
Polat, Brittany; Kim, Youjin
2014-01-01
This longitudinal case study follows a dynamic systems approach to investigate an under-studied research area in second language acquisition, the development of complexity and accuracy for an advanced untutored learner of English. Using the analytical tools of dynamic systems theory (Verspoor et al. 2011) within the framework of complexity,…
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
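The algorithmic pipeline described above is a cascade: a lexicality judgment first (nonword vs. real word), then a relatedness check on real-word errors. A rough sketch; the lexicon and the similarity heuristic are stand-ins, not the SUBTLEXus norms or the paper's actual phonological criterion, and the semantic and mixed categories are omitted:

```python
# Stand-in lexicon; the paper uses SUBTLEXus frequency norms instead.
LEXICON = {"cat", "dog", "hat", "table", "chair"}

def phonologically_related(target, response):
    """Crude similarity heuristic (an assumption for this sketch):
    same initial letter, or at least half of the target's distinct
    letters appear in the response."""
    if not response:
        return False
    if target[0] == response[0]:
        return True
    shared = len(set(target) & set(response))
    return shared / len(set(target)) >= 0.5

def classify(target, response):
    """Cascade: nonword -> neologistic; phonologically related real
    word -> formal; otherwise unrelated."""
    if response not in LEXICON:
        return "neologistic"
    if phonologically_related(target, response):
        return "formal"
    return "unrelated"

label = classify("cat", "hat")  # real word sharing sounds with the target
```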
Synthetic Defects for Vibrothermography
NASA Astrophysics Data System (ADS)
Renshaw, Jeremy; Holland, Stephen D.; Thompson, R. Bruce; Eisenmann, David J.
2010-02-01
Synthetic defects are an important tool for characterizing the performance of nondestructive evaluation techniques. Viscous material-filled (VMF) synthetic defects were developed for use in vibrothermography (also known as sonic IR) as a tool to improve inspection accuracy and reliability. This paper describes how the heat-generation response of these VMF synthetic defects is similar to that of real defects. It also shows how VMF defects can be applied to improve inspection accuracy for complex industrial parts, and presents a study of their application to an aircraft engine stator vane.
A call for benchmarking transposable element annotation methods.
Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu
2015-01-01
DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks-that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.
NASA Astrophysics Data System (ADS)
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
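The AUC at the heart of ROC analysis has a direct probabilistic reading: it is the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased case, with ties counted as one half (the Mann-Whitney statistic). A minimal empirical estimator:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: the fraction of (diseased, non-diseased) pairs in
    which the diseased case scores higher, counting ties as 0.5.
    Equivalent to the Mann-Whitney U statistic, normalised."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives 1.0; a coin-flip test gives 0.5:
a = auc([0.9, 0.4], [0.5, 0.1])  # 3 of 4 pairs correctly ordered -> 0.75
```

This O(n*m) form is for exposition; production ROC software computes the same quantity from ranks and also supplies the variance estimates needed to compare correlated AUCs from multi-reader studies.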
Portnoy, Galina A; Haskell, Sally G; King, Matthew W; Maskin, Rachel; Gerber, Megan R; Iverson, Katherine M
2018-06-06
Veterans are at heightened risk for perpetrating intimate partner violence (IPV), yet there is limited evidence to inform practice and policy for the detection of IPV perpetration. The present study evaluated the accuracy and acceptability of a potential IPV perpetration screening tool for use with women veterans. A national sample of women veterans completed a 2016 web-based survey that included a modified 5-item Extended-Hurt/Insult/Threaten/Scream (Modified E-HITS) and the Revised Conflict Tactics Scales (CTS-2). Items also assessed women's perceptions of the acceptability and appropriateness of the Modified E-HITS questions for use in healthcare settings. Accuracy statistics, including sensitivity and specificity, were calculated using the CTS-2 as the reference standard. Primary measures included the Modified E-HITS (index test), the CTS-2 (reference standard), and items assessing acceptability. This study included 187 women, of whom 31 (16.6%) reported past-6-month IPV perpetration on the CTS-2. The Modified E-HITS demonstrated good overall accuracy (area under the curve, 0.86; 95% confidence interval, 0.78-0.94). In addition, the majority of women perceived the questions to be acceptable and appropriate. Findings demonstrate that the Modified E-HITS is promising as a low-burden tool for detecting IPV perpetration among women veterans. This tool may help the Veterans Health Administration and other health care providers detect IPV perpetration and offer appropriate referrals for comprehensive assessment and services. Published by Elsevier Inc.
Fall Risk Assessment Tools for Elderly Living in the Community: Can We Do Better?
Palumbo, Pierpaolo; Palmerini, Luca; Bandinelli, Stefania; Chiari, Lorenzo
2015-01-01
Falls are a common, serious threat to the health and self-confidence of the elderly. Assessment of fall risk is an important aspect of effective fall prevention programs. In order to test whether it is possible to outperform current prognostic tools for falls, we analyzed 1010 variables pertaining to mobility collected from 976 elderly subjects (InCHIANTI study). We trained and validated a data-driven model that issues probabilistic predictions about future falls. We benchmarked the model against other fall risk indicators: history of falls, gait speed, Short Physical Performance Battery (Guralnik et al. 1994), and the literature-based fall risk assessment tool FRAT-up (Cattelani et al. 2015). Parsimony in the number of variables included in a tool is often considered a proxy for ease of administration. We studied how constraints on the number of variables affect predictive accuracy. The proposed model and FRAT-up both attained the same discriminative ability; the area under the Receiver Operating Characteristic (ROC) curve (AUC) for multiple falls was 0.71. They outperformed the other risk scores, which reported AUCs for multiple falls between 0.64 and 0.65. Thus, it appears that both data-driven and literature-based approaches are better at estimating fall risk than commonly used fall risk indicators. The accuracy-parsimony analysis revealed that tools with a small number of predictors (~1-5) were suboptimal. Increasing the number of variables improved the predictive accuracy, reaching a plateau at ~20-30, which we can consider as the best trade-off between accuracy and parsimony. Obtaining the values of these ~20-30 variables does not compromise usability, since they are usually available in comprehensive geriatric assessments.
The Efficacy of Violence Prediction: A Meta-Analytic Comparison of Nine Risk Assessment Tools
ERIC Educational Resources Information Center
Yang, Min; Wong, Stephen C. P.; Coid, Jeremy
2010-01-01
Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their…
NASA Astrophysics Data System (ADS)
Maqbool, Fawad; Bambach, Markus
2017-10-01
Incremental sheet forming (ISF) is a manufacturing process most suitable for small-batch production of sheet metal parts. In ISF, a CNC-controlled tool moves over the sheet metal, following a specified contour to form a part of the desired geometry. This study focuses on one of the dominant process limitations associated with ISF, i.e., the limited geometrical accuracy. In this regard, a case study is performed which shows that increased geometrical accuracy of the formed part can be achieved by using stress-relief annealing before unclamping. To keep the tooling costs low, a modular die design consisting of a stiff metal frame and inserts made from inexpensive plastics (Sika®) was devised. After forming, the plastic inserts are removed. The metal frame supports the part during stress-relief annealing. Finite Element (FE) simulations of the manufacturing process are performed. Due to the residual stresses induced during forming, the geometry of the formed part, from both FE simulation and the actual manufacturing process, shows severe distortion upon unclamping the part. Stress-relief annealing of the formed part under partial constraints exerted by the tool frame shows that a part with high geometrical accuracy can be obtained.
Accuracy of a Screening Tool for Early Identification of Language Impairment
ERIC Educational Resources Information Center
Uilenburg, Noëlle; Wiefferink, Karin; Verkerk, Paul; van Denderen, Margot; van Schie, Carla; Oudesluys-Murphy, Ann-Marie
2018-01-01
Purpose: A screening tool called the "VTO Language Screening Instrument" (VTO-LSI) was developed to enable more uniform and earlier detection of language impairment. This report, consisting of 2 retrospective studies, focuses on the effects of using the VTO-LSI compared to regular detection procedures. Method: Study 1 retrospectively…
Evidence on the Effectiveness of Comprehensive Error Correction in Second Language Writing
ERIC Educational Resources Information Center
Van Beuningen, Catherine G.; De Jong, Nivja H.; Kuiken, Folkert
2012-01-01
This study investigated the effect of direct and indirect comprehensive corrective feedback (CF) on second language (L2) learners' written accuracy (N = 268). The study set out to explore the value of CF as a revising tool as well as its capacity to support long-term accuracy development. In addition, we tested Truscott's (e.g., 2001, 2007) claims…
Classification accuracy for stratification with remotely sensed data
Raymond L. Czaplewski; Paul L. Patterson
2003-01-01
Tools are developed that help specify the classification accuracy required from remotely sensed data. These tools are applied during the planning stage of a sample survey that will use poststratification, prestratification with proportional allocation, or double sampling for stratification. Accuracy standards are developed in terms of an "error matrix," which is...
Simulation of seagrass bed mapping by satellite images based on the radiative transfer model
NASA Astrophysics Data System (ADS)
Sagawa, Tatsuyuki; Komatsu, Teruhisa
2015-06-01
Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution. Therefore, their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environments such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons. Extensive data are required to generalise assessments of classification accuracy from case studies, which has proven difficult. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations led to the development of a model of water transparency and the mapping of depth limits, and indicated the possibility of seagrass density mapping under certain ideal conditions. The results show that modelling satellite images is useful in evaluating classification accuracy, and that remote sensing can be established as a reliable tool for seagrass bed monitoring.
de Ruiter, C. M.; van der Veer, C.; Leeflang, M. M. G.; Deborggraeve, S.; Lucas, C.
2014-01-01
Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. PMID:24829226
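Pooled sensitivities like the one reported above are typically produced by bivariate random-effects meta-analysis. As a much simpler illustration of the pooling idea only (not the method used in the review), here is a fixed-effect inverse-variance pooling of per-study sensitivities on the logit scale, with invented study counts:

```python
import math

# Simplified fixed-effect pooling of sensitivities on the logit scale.
# Real diagnostic meta-analyses typically use bivariate random-effects
# models; this sketch uses made-up counts (tp = true positives,
# fn = false negatives per study) purely to illustrate weighting.

def pooled_sensitivity(studies):
    num = den = 0.0
    for tp, fn in studies:
        p = tp / (tp + fn)
        logit = math.log(p / (1 - p))
        var = 1 / tp + 1 / fn          # approximate variance of the logit
        num += logit / var             # inverse-variance weighted sum
        den += 1 / var
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))

print(pooled_sensitivity([(90, 10), (45, 5), (190, 12)]))
```

Larger studies get more weight through their smaller logit variance, which is why a single small outlying study moves the pooled estimate little.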
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
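The overlap-merging idea underlying tools of this kind can be sketched naively: slide the reverse complement of read 2 against read 1 and accept the longest overlap with few mismatches. This is an illustrative toy, not BBMerge's actual algorithm, and the thresholds are arbitrary:

```python
# Toy sketch of overlap-based read merging (NOT BBMerge's algorithm):
# try overlaps from longest to shortest and accept the first one whose
# mismatch count is within tolerance. Thresholds are illustrative.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def merge_pair(r1, r2, min_overlap=8, max_mismatch=1):
    r2rc = revcomp(r2)
    for ov in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
        mism = sum(a != b for a, b in zip(r1[-ov:], r2rc[:ov]))
        if mism <= max_mismatch:
            return r1 + r2rc[ov:]      # merged read spanning the fragment
    return None                        # no acceptable overlap found

# Two 12-bp reads from opposite ends of a 14-bp fragment:
print(merge_pair("ACGGTTCAGGCT", "TTAGCCTGAACC"))
```

Production tools add quality-aware scoring and guard against spurious short overlaps in repetitive sequence, which this greedy sketch does not.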
NASA Astrophysics Data System (ADS)
Bratic, G.; Brovelli, M. A.; Molinari, M. E.
2018-04-01
The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed the computation and investigation of less known indexes such as the Ground Truth Index and the Classification Success Index. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to the researcher and user communities.
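Two of the indexes named above, overall accuracy and Cohen's kappa, can be computed directly from the confusion matrix. A minimal sketch (not the paper's tool) with a made-up 2-class matrix, rows being reference classes and columns classified classes:

```python
# Illustrative computation of overall accuracy and Cohen's kappa from a
# confusion matrix (rows = reference, columns = classification).
# The example matrix is invented, not from the datasets above.

def overall_accuracy(m):
    total = sum(sum(row) for row in m)
    return sum(m[i][i] for i in range(len(m))) / total

def cohens_kappa(m):
    n = sum(sum(row) for row in m)
    po = sum(m[i][i] for i in range(len(m))) / n          # observed agreement
    pe = sum(sum(m[i]) * sum(row[i] for row in m)         # chance agreement
             for i in range(len(m))) / n**2
    return (po - pe) / (1 - pe)

matrix = [[50, 5], [10, 35]]
print(overall_accuracy(matrix), cohens_kappa(matrix))
```

Kappa discounts the agreement expected by chance, which is why it is lower than overall accuracy whenever class marginals are imbalanced.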
NASA Astrophysics Data System (ADS)
Fedonin, O. N.; Petreshin, D. I.; Ageenko, A. V.
2018-03-01
In this article, the issue of increasing the accuracy of a CNC lathe by compensating for the machine's static and dynamic errors is investigated. An algorithm and a diagnostic system for a CNC machine tool are considered, which allow the errors of the machine to be determined for their compensation. The results of experimental studies on diagnosing and improving the accuracy of a CNC lathe are presented.
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
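The contrast drawn above between dead reckoning and a fitted model can be illustrated with a small numerical sketch: dead reckoning extrapolates at the last observed speed, while a polynomial fitted to recent track history can capture the deceleration typical of final approach. The track values below are synthetic, not from the study:

```python
import numpy as np

# Hedged illustration of trajectory extrapolation: constant-velocity dead
# reckoning vs. a quadratic polynomial fit to recent along-path positions.
# Synthetic decelerating track; units are seconds and nautical miles.

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])    # time history (s)
x = np.array([0.0, 1.40, 2.65, 3.75, 4.70])    # along-path distance (nmi)

def dead_reckoning(t_hist, x_hist, t_pred):
    v = (x_hist[-1] - x_hist[-2]) / (t_hist[-1] - t_hist[-2])
    return x_hist[-1] + v * (t_pred - t_hist[-1])

def polynomial_model(t_hist, x_hist, t_pred, degree=2):
    coeffs = np.polyfit(t_hist, x_hist, degree)  # least-squares fit
    return np.polyval(coeffs, t_pred)

# The polynomial predicts a shorter distance at t=60 s because it
# carries the observed deceleration forward; dead reckoning does not.
print(dead_reckoning(t, x, 60.0), polynomial_model(t, x, 60.0))
```

The study's models are more elaborate (and include a Fourier-based variant), but this is the basic reason a fitted model can beat dead reckoning at longer look-ahead times.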
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) data derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
Shahar, Suzana; Abdul Manaf, Zahara; Mohd Nordin, Nor Azlin; Susetyowati, Susetyowati
2017-01-01
Although nutritional screening and dietary monitoring in clinical settings are important, studies on related user satisfaction and cost benefit are still lacking. This study aimed to: (1) elucidate the cost of implementing a newly developed dietary monitoring tool, the Pictorial Dietary Assessment Tool (PDAT); and (2) investigate the accuracy of estimation and satisfaction of healthcare staff after the use of the PDAT. A cross-over intervention study was conducted among 132 hospitalized patients with diabetes. Cost and time for the implementation of PDAT in comparison to modified Comstock was estimated using the activity-based costing approach. Accuracy was expressed as the percentages of energy and protein estimates obtained by both methods that were within 15% and 30%, respectively, of those obtained by food weighing. Satisfaction of healthcare staff was measured using a standardized questionnaire. Time to complete the food intake recording of patients using PDAT (2.31 ± 0.70 min) was shorter than when modified Comstock (3.53 ± 1.27 min) was used (p < 0.001). Overall cost per patient was slightly higher for PDAT (United States Dollar [USD] 0.27 ± 0.02) than for modified Comstock (USD 0.26 ± 0.04; p < 0.05). The accuracy of energy intake estimated by modified Comstock was 10% lower than that of PDAT. There was poorer accuracy of protein intake estimated by modified Comstock (<40%) compared to that estimated by the PDAT (>71%) (p < 0.05). Mean user satisfaction of healthcare staff was significantly higher for PDAT than that for modified Comstock (p < 0.05). PDAT requires a shorter time to be completed and was rated better than modified Comstock. PMID:29283401
Lobchuk, Michelle; Halas, Gayle; West, Christina; Harder, Nicole; Tursunova, Zulfiya; Ramraj, Chantal
2016-11-01
Stressed family carers engage in health-risk behaviours that can lead to chronic illness. Innovative strategies are required to bolster empathic dialogue skills that impact nursing student confidence and sensitivity in meeting carers' wellness needs. To report on the development and evaluation of a promising empathy-related video-feedback intervention and its impact on student empathic accuracy on carer health risk behaviours. A pilot quasi-experimental design study with eight pairs of 3rd year undergraduate nursing students and carers. Students participated in perspective-taking instructional and practice sessions, and a 10-minute video-recorded dialogue with carers followed by a video-tagging task. Quantitative and qualitative approaches helped us to evaluate the recruitment protocol, capture participant responses to the intervention and study tools, and develop a tool to assess student empathic accuracy. The instructional and practice sessions increased student self-awareness of biases and interest in learning empathy by video-tagging feedback. Carers felt that students were 'non-judgmental', inquisitive, and helped them to 'gain new insights' that fostered ownership to change their health-risk behaviour. There was substantial Fleiss Kappa agreement among four raters across five dyads and 67 tagged instances. In general, students and carers evaluated the intervention favourably. The results suggest areas of improvement to the recruitment protocol, perspective-taking instructions, video-tagging task, and empathic accuracy tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
Analytical and Clinical Performance of Blood Glucose Monitors
Boren, Suzanne Austin; Clarke, William L.
2010-01-01
Background The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171
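One concrete form of the analytical accuracy criteria discussed in such reviews is the ISO 15197-style limit check. The sketch below follows the commonly cited 2013 criteria (within ±15 mg/dL of the reference when the reference is below 100 mg/dL, within ±15% otherwise); readers should verify the current standard's exact thresholds, and the readings here are made up:

```python
# Simplified sketch of an ISO 15197-style per-reading accuracy check.
# Thresholds follow the commonly cited 2013 criteria; confirm against
# the standard itself before real use. Data pairs are invented:
# (meter reading, laboratory reference), both in mg/dL.

def within_iso_limits(meter, reference):
    if reference < 100:
        return abs(meter - reference) <= 15        # absolute limit, low range
    return abs(meter - reference) <= 0.15 * reference  # relative limit

pairs = [(92, 85), (110, 100), (160, 195), (250, 240)]
passed = sum(within_iso_limits(m, r) for m, r in pairs)
print(f"{passed}/{len(pairs)} readings within limits")
```

As the review notes, such per-reading analytical checks say nothing by themselves about clinical impact; error grid analysis weights discrepancies by their treatment consequences instead.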
Foster, E; Matthews, J N S; Lloyd, J; Marshall, L; Mathers, J C; Nelson, M; Barton, K L; Wrieden, W L; Cornelissen, P; Harris, J; Adamson, A J
2008-01-01
A number of methods have been developed to assist subjects in providing an estimate of portion size but their application in improving portion size estimation by children has not been investigated systematically. The aim was to develop portion size assessment tools for use with children and to assess the accuracy of children's estimates of portion size using the tools. The tools were food photographs, food models and an interactive portion size assessment system (IPSAS). Children (n 201), aged 4-16 years, were supplied with known quantities of food to eat, in school. Food leftovers were weighed. Children estimated the amount of each food using each tool, 24 h after consuming the food. The age-specific portion sizes represented were based on portion sizes consumed by children in a national survey. Significant differences were found between the accuracy of estimates using the three tools. Children of all ages performed well using the IPSAS and food photographs. The accuracy and precision of estimates made using the food models were poor. For all tools, estimates of the amount of food served were more accurate than estimates of the amount consumed. Issues relating to reporting of foods left over which impact on estimates of the amounts of foods actually consumed require further study. The IPSAS has shown potential for assessment of dietary intake with children. Before practical application in assessment of dietary intake of children the tool would need to be expanded to cover a wider range of foods and to be validated in a 'real-life' situation.
Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J
2015-01-01
Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
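The precision effect described above can be illustrated with a conjugate normal-normal model (a deliberately simpler setting than the paper's mortality models): combining an informative prior with the data always yields a narrower posterior than a vague prior does, while the posterior mean is pulled toward the prior.

```python
# Hedged illustration (not the paper's models): normal-normal conjugate
# updating. Precisions (inverse variances) add, so an informative prior
# always shrinks the posterior variance; the numbers are invented.

def posterior(prior_mean, prior_var, data_mean, data_var):
    w = (1 / prior_var) / (1 / prior_var + 1 / data_var)  # prior weight
    post_mean = w * prior_mean + (1 - w) * data_mean
    post_var = 1 / (1 / prior_var + 1 / data_var)
    return post_mean, post_var

vague = posterior(0.0, 1e6, data_mean=-2.0, data_var=0.25)
informative = posterior(-1.5, 0.5, data_mean=-2.0, data_var=0.25)
print("vague:", vague, "informative:", informative)
```

Whether the pull on the posterior mean helps or hurts accuracy depends on how close the prior mean is to the truth, which matches the paper's finding that precision gains are systematic while accuracy effects are variable.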
Machine tools error characterization and compensation by on-line measurement of artifact
NASA Astrophysics Data System (ADS)
Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili
2009-11-01
Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. Volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out through a standard or an artifact having geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored on a computer for compensation purposes, to further run the manufacturing batch through compensated codes. This methodology was found to be quite effective for manufacturing high-precision components with greater dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through the deterministic manufacturing principle, and was found to be efficient and economical, although it is limited to the workspace or envelope surface of the measured artifact's geometry or profile.
Mirea, Oana; Pagourelias, Efstathios D; Duchenne, Jurgen; Bogaert, Jan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe
2018-01-01
The purpose of this study was to compare the accuracy of vendor-specific and independent strain analysis tools to detect regional myocardial function abnormality in a clinical setting. Speckle tracking echocardiography has been considered a promising tool for the quantitative assessment of regional myocardial function. However, the potential differences among speckle tracking software with regard to their accuracy in identifying regional abnormality has not been studied extensively. Sixty-three subjects (5 healthy volunteers and 58 patients) were examined with 7 different ultrasound machines during 5 days. All patients had experienced a previous myocardial infarction, which was characterized by cardiac magnetic resonance with late gadolinium enhancement. Segmental peak systolic (PS), end-systolic (ES) and post-systolic strain (PSS) measurements were obtained with 6 vendor-specific software tools and 2 independent strain analysis tools. Strain parameters were compared between fully scarred and scar-free segments. Receiver-operating characteristic curves testing the ability of strain parameters and derived indexes to discriminate between these segments were compared among vendors. The average strain values calculated for normal segments ranged from -15.1% to -20.7% for PS, -14.9% to -20.6% for ES, and -16.1% to -21.4% for PSS. Significantly lower values of strain (p < 0.05) were found in segments with transmural scar by all vendors, with values ranging from -7.4% to -11.1% for PS, -7.7% to -10.8% for ES, and -10.5% to -14.3% for PSS. Accuracy in identifying transmural scar ranged from acceptable to excellent (area under the curve 0.74 to 0.83 for PS and ES and 0.70 to 0.78 for PSS). Significant differences were found among vendors (p < 0.05). All vendors had a significantly lower accuracy to detect scars in the basal segments compared with scars in the apex (p < 0.05). The accuracy of identifying regional abnormality differs significantly among vendors. 
Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Near-infrared spectroscopy (NIRS) was recently applied to age-grade and differentiate laboratory reared Anopheles gambiae sensu strico and Anopheles arabiensis sibling species of Anopheles gambiae sensu lato. In this study, we report further on the accuracy of this tool in simultaneously estimating ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bono, M J; Hibbard, R L
2005-12-05
A tool holder was designed to facilitate the machining of precision meso-scale components with complex three-dimensional shapes with sub-µm accuracy on a four-axis lathe. A four-axis lathe incorporates a rotary table that allows the cutting tool to swivel with respect to the workpiece to enable the machining of complex workpiece forms, and accurately machining complex meso-scale parts often requires that the cutting tool be aligned precisely along the axis of rotation of the rotary table. The tool holder designed in this study has greatly simplified the process of setting the tool in the correct location with sub-µm precision. The tool holder adjusts the tool position using flexures that were designed using finite element analyses. Two flexures adjust the lateral position of the tool to align the center of the nose of the tool with the axis of rotation of the B-axis, and another flexure adjusts the height of the tool. The flexures are driven by manual micrometer adjusters, each of which provides a minimum increment of motion of 20 nm. This tool holder has simplified the process of setting a tool with sub-µm accuracy, and it has significantly reduced the time required to set a tool.
Triage tools for detecting cervical spine injury in pediatric trauma patients.
Slaar, Annelie; Fockens, M M; Wang, Junfeng; Maas, Mario; Wilson, David J; Goslings, J Carel; Schep, Niels Wl; van Rijn, Rick R
2017-12-07
Pediatric cervical spine injury (CSI) after blunt trauma is rare. Nonetheless, missing these injuries can have severe consequences. To prevent the overuse of radiographic imaging, two clinical decision tools have been developed: The National Emergency X-Radiography Utilization Study (NEXUS) criteria and the Canadian C-spine Rule (CCR). Both tools are proven to be accurate in deciding whether or not diagnostic imaging is needed in adults presenting for blunt trauma screening at the emergency department. However, little information is known about the accuracy of these triage tools in a pediatric population. To determine the diagnostic accuracy of the NEXUS criteria and the Canadian C-spine Rule in a pediatric population evaluated for CSI following blunt trauma. We searched the following databases to 24 February 2015: CENTRAL, MEDLINE, MEDLINE Non-Indexed and In-Process Citations, PubMed, Embase, Science Citation Index, ProQuest Dissertations & Theses Database, OpenGrey, ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects, the Health Technology Assessment, and the Aggressive Research Intelligence Facility. We included all retrospective and prospective studies involving children following blunt trauma that evaluated the accuracy of the NEXUS criteria, the Canadian C-spine Rule, or both. Plain radiography, computed tomography (CT) or magnetic resonance imaging (MRI) of the cervical spine, and follow-up were considered as adequate reference standards. Two review authors independently assessed the quality of included studies using the QUADAS-2 checklists. They extracted data on study design, patient characteristics, inclusion and exclusion criteria, clinical parameters, target condition, reference standard, and the diagnostic two-by-two table. 
We calculated and plotted sensitivity, specificity and negative predictive value in ROC space, and constructed forest plots for visual examination of variation in test accuracy. Three cohort studies were eligible for analysis, including 3380 patients; 96 children were diagnosed with CSI. One study evaluated the accuracy of both the Canadian C-spine Rule and the NEXUS criteria, and two studies evaluated the accuracy of the NEXUS criteria only. The studies were of moderate quality. Owing to the small number of included studies and their diverse outcomes, we could not compute a pooled estimate of diagnostic test accuracy. The sensitivity of the NEXUS criteria in the individual studies was 0.57 (95% confidence interval (CI) 0.18 to 0.90), 0.98 (95% CI 0.91 to 1.00) and 1.00 (95% CI 0.88 to 1.00). The specificity of the NEXUS criteria was 0.35 (95% CI 0.25 to 0.45), 0.54 (95% CI 0.45 to 0.62) and 0.20 (95% CI 0.18 to 0.21). For the Canadian C-spine Rule, the sensitivity was 0.86 (95% CI 0.42 to 1.00) and the specificity was 0.15 (95% CI 0.08 to 0.23). Because the data were sparse, we were unable to investigate heterogeneity. There are currently few studies assessing the diagnostic test accuracy of the NEXUS criteria and the CCR in children. At present, there is not enough evidence to determine the accuracy of the Canadian C-spine Rule for detecting CSI in pediatric trauma patients following blunt trauma. The confidence intervals for the sensitivity of the NEXUS criteria varied widely across the individual studies, with lower limits ranging from 0.18 to 0.91 and a total of four false-negative test results, meaning that if physicians use the NEXUS criteria in children, there is a chance of missing CSI.
Because missed CSI can have severe consequences, including significant morbidity, we consider the NEXUS criteria at best a guide to clinical assessment; current evidence does not support strict or protocolized adoption of the tool in pediatric trauma care. Moreover, it should be kept in mind that sensitivity differs across the individual studies and that their confidence intervals are wide. Our main conclusion is therefore that additional well-designed studies with large sample sizes are required to better evaluate the accuracy of the NEXUS criteria, the Canadian C-spine Rule, or both, in order to determine whether they are appropriate triage tools for clearance of the cervical spine in children following blunt trauma.
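The per-study sensitivities, specificities and negative predictive values reported above are computed from each study's diagnostic two-by-two table. As a minimal sketch (the counts below are invented for illustration, not the review's data, and the review's exact confidence-interval method is not stated here, so a Wilson score interval is assumed):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and NPV (with CIs) from a 2x2 table."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Hypothetical counts, not taken from any of the included studies:
result = diagnostic_accuracy(tp=55, fp=650, fn=1, tn=350)
```

A high sensitivity with a wide interval, as in the review's smallest study, simply reflects the small number of true CSI cases contributing to the numerator and denominator.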
de Ruiter, C M; van der Veer, C; Leeflang, M M G; Deborggraeve, S; Lucas, C; Adams, E R
2014-09-01
Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
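Published meta-analyses of diagnostic accuracy typically fit bivariate random-effects models; as a much simpler hedged sketch of where a pooled sensitivity figure comes from, the following pools per-study sensitivities by fixed-effect inverse-variance weighting on the logit scale (the study counts are invented, not drawn from the forty included studies):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling of sensitivity on the logit scale.

    studies: list of (true_positives, false_negatives) tuples.
    A 0.5 continuity correction guards against zero cells.
    """
    num = den = 0.0
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5        # continuity correction
        l = logit(tp / (tp + fn))
        var = 1 / tp + 1 / fn              # approximate variance of the logit
        w = 1 / var
        num += w * l
        den += w
    return inv_logit(num / den)

# Hypothetical per-study counts (illustrative only):
print(round(pooled_sensitivity([(90, 7), (45, 2), (120, 10)]), 3))
```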
Stott, Joshua; Scior, Katrina; Mandy, William; Charlesworth, Georgina
2017-01-01
Scores on cognitive screening tools for dementia are associated with premorbid IQ, and it has been suggested that screening scores should be adjusted accordingly. However, no study has examined whether premorbid IQ variation affects screening accuracy. To investigate whether the screening accuracy of a widely used cognitive screening tool for dementia, the Addenbrooke's Cognitive Examination-III (ACE-III), is improved by adjusting for premorbid IQ. 171 UK-based adults (96 memory service attendees diagnosed with dementia and 75 healthy volunteers over the age of 65 without subjective memory impairments) completed the ACE-III and the Test of Premorbid Functioning (TOPF). The difference in screening performance between the ACE-III alone and the ACE-III adjusted for the TOPF was assessed against a reference standard: the presence or absence of a diagnosis of dementia (Alzheimer's disease, vascular dementia, or others). Logistic regression and receiver operating characteristic (ROC) curve analyses indicated that the ACE-III has excellent screening accuracy (93% sensitivity, 94% specificity) in distinguishing those with and without a dementia diagnosis. Although ACE-III scores were associated with TOPF scores, TOPF scores may be affected by having dementia, and screening accuracy was not improved by accounting for premorbid IQ, age, or years of education. ACE-III screening accuracy is high, and screening performance is robust to variation in premorbid IQ, age, and years of education. Adjustment of ACE-III cut-offs for premorbid IQ is not recommended in clinical practice. The analytic strategy used here may be useful for assessing the impact of premorbid IQ on other screening tools.
Tool wear compensation scheme for DTM
NASA Astrophysics Data System (ADS)
Sandeep, K.; Rao, U. S.; Balasubramaniam, R.
2018-04-01
This paper aims to monitor tool wear in diamond turn machining (DTM), assess the effects of tool wear on the accuracy of the machined component, and develop a compensation methodology to enhance the size and shape accuracy of a hemispherical cup. A MATLAB program is used to find the change in the centre and radius of the tool with increasing tool wear. In practice, x-offsets are readjusted by the DTM operator to achieve the desired accuracy in the cup; the results of the theoretical model show that the changes in radius and z-offset are insignificant, whereas the x-offset is proportional to the tool wear, which is precisely what is assumed when resetting the tool offset. Since the tool profile could not be measured directly, the program was applied to the cup profile data: if no error is introduced by the slide and spindle of the DTM, any wear of the tool will be reflected in the cup profile. Because the cup data contain surface roughness, random noise similar to surface waviness is added. It is observed that surface roughness affects the fitted centre and radius, but the pattern of centre shift with increasing tool wear remains similar to the ideal condition, i.e. without surface roughness.
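The centre-and-radius fitting the abstract describes (implemented by the authors in MATLAB for the cup profile) can be sketched generically with a Kåsa least-squares circle fit; the profile data below are synthetic, with Gaussian noise standing in for surface roughness, and none of the numbers come from the paper:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: returns (xc, yc, r).

    Solves x^2 + y^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2) as a
    linear least-squares problem in (xc, yc, c).
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc**2 + yc**2)
    return xc, yc, r

# Synthetic cup profile: a circular arc plus "surface roughness" noise.
rng = np.random.default_rng(0)
theta = np.linspace(np.pi, 2 * np.pi, 200)      # lower half of the cup
x = 1.0 + 5.0 * np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = 2.0 + 5.0 * np.sin(theta) + rng.normal(0, 0.01, theta.size)
xc, yc, r = fit_circle(x, y)
```

Tracking how the fitted centre shifts between successive cup profiles is then a proxy for tool wear, mirroring the x-offset behaviour described above.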
Marufu, Takawira C; Mannings, Alexa; Moppett, Iain K
2015-12-01
Accurate peri-operative risk prediction is an essential element of clinical practice. Various risk stratification tools for assessing patients' risk of mortality or morbidity have been developed and applied in clinical practice over the years. This review aims to outline essential characteristics (predictive accuracy, objectivity, clinical utility) of currently available risk scoring tools for hip fracture patients. We searched eight databases; AMED, CINHAL, Clinical Trials.gov, Cochrane, DARE, EMBASE, MEDLINE and Web of Science for all relevant studies published until April 2015. We included published English language observational studies that considered the predictive accuracy of risk stratification tools for patients with fragility hip fracture. After removal of duplicates, 15,620 studies were screened. Twenty-nine papers met the inclusion criteria, evaluating 25 risk stratification tools. Risk stratification tools considered in more than two studies were; ASA, CCI, E-PASS, NHFS and O-POSSUM. All tools were moderately accurate and validated in multiple studies; however there are some limitations to consider. The E-PASS and O-POSSUM are comprehensive but complex, and require intraoperative data making them a challenge for use on patient bedside. The ASA, CCI and NHFS are simple, easy and inexpensive using routinely available preoperative data. Contrary to the ASA and CCI which has subjective variables in addition to other limitations, the NHFS variables are all objective. In the search for a simple and inexpensive, easy to calculate, objective and accurate tool, the NHFS may be the most appropriate of the currently available scores for hip fracture patients. However more studies need to be undertaken before it becomes a national hip fracture risk stratification or audit tool of choice. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Method and apparatus for characterizing and enhancing the dynamic performance of machine tools
Barkman, William E; Babelay, Jr., Edwin F
2013-12-17
Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include dynamic one axis positional accuracy of the machine tool, dynamic cross-axis stability of the machine tool, and dynamic multi-axis positional accuracy of the machine tool.
Zeng, Xiantao; Zhang, Yonggang; Kwong, Joey S W; Zhang, Chao; Li, Sheng; Sun, Feng; Niu, Yuming; Du, Liang
2015-02-01
To systematically review the methodological assessment tools for pre-clinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline. We searched PubMed, the Cochrane Handbook for Systematic Reviews of Interventions, Joanna Briggs Institute (JBI) Reviewers Manual, Centre for Reviews and Dissemination, Critical Appraisal Skills Programme (CASP), Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Clinical Excellence (NICE) up to May 20th, 2014. Two authors selected studies and extracted data; quantitative analysis was performed to summarize the characteristics of included tools. We included a total of 21 assessment tools for analysis. A number of tools were developed by academic organizations, and some were developed by only a small group of researchers. The JBI developed the highest number of methodological assessment tools, with CASP coming second. Tools for assessing the methodological quality of randomized controlled studies were most abundant. The Cochrane Collaboration's tool for assessing risk of bias is the best available tool for assessing RCTs. For cohort and case-control studies, we recommend the use of the Newcastle-Ottawa Scale. The Methodological Index for Non-Randomized Studies (MINORS) is an excellent tool for assessing non-randomized interventional studies, and the Agency for Healthcare Research and Quality (ARHQ) methodology checklist is applicable for cross-sectional studies. 
For diagnostic accuracy test studies, the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool is recommended; the SYstematic Review Centre for Laboratory animal Experimentation (SYRCLE) risk of bias tool is available for assessing animal studies; Assessment of Multiple Systematic Reviews (AMSTAR) is a measurement tool for systematic reviews/meta-analyses; an 18-item tool has been developed for appraising case series studies, and the Appraisal of Guidelines, Research and Evaluation (AGREE)-II instrument is widely used to evaluate clinical practice guidelines. We have successfully identified a variety of methodological assessment tools for different types of study design. However, further efforts in the development of critical appraisal tools are warranted, since there is currently a lack of such tools for other fields, e.g. genetic studies, and some existing tools (those for nested case-control studies and case reports, for example) need updating to be in line with current research practice and rigor. In addition, it is very important that all critical appraisal tools remain objective and that performance bias is effectively avoided. © 2015 Chinese Cochrane Center, West China Hospital of Sichuan University and Wiley Publishing Asia Pty Ltd.
Family-Based Benchmarking of Copy Number Variation Detection Software.
Nutsua, Marcel Elie; Fischer, Annegret; Nebel, Almut; Hofmann, Sylvia; Schreiber, Stefan; Krawczak, Michael; Nothnagel, Michael
2015-01-01
The analysis of structural variants, in particular of copy-number variations (CNVs), has proven valuable in unraveling the genetic basis of human diseases. Hence, a large number of algorithms have been developed for the detection of CNVs in SNP array signal intensity data. Using the European and African HapMap trio data, we undertook a comparative evaluation of six commonly used CNV detection software tools, namely Affymetrix Power Tools (APT), QuantiSNP, PennCNV, GLAD, R-gada and VEGA, and assessed their level of pair-wise prediction concordance. The tool-specific CNV prediction accuracy was assessed in silico by way of intra-familial validation. Software tools differed greatly in terms of the number and length of the CNVs predicted as well as the number of markers included in a CNV. All software tools predicted substantially more deletions than duplications. Intra-familial validation revealed consistently low levels of prediction accuracy as measured by the proportion of validated CNVs (34-60%). Moreover, up to 20% of apparent family-based validations were found to be due to chance alone. Software using Hidden Markov models (HMM) showed a trend to predict fewer CNVs than segmentation-based algorithms albeit with greater validity. PennCNV yielded the highest prediction accuracy (60.9%). Finally, the pairwise concordance of CNV prediction was found to vary widely with the software tools involved. We recommend HMM-based software, in particular PennCNV, rather than segmentation-based algorithms when validity is the primary concern of CNV detection. QuantiSNP may be used as an additional tool to detect sets of CNVs not detectable by the other tools. Our study also reemphasizes the need for laboratory-based validation, such as qPCR, of CNVs predicted in silico.
Oetting, Janna B
2018-04-05
Although the 5 studies presented within this clinical forum include children who differ widely in locality, language learning profile, and age, all were motivated by a desire to improve the accuracy with which developmental language disorder is identified within linguistically diverse schools. The purpose of this prologue is to introduce readers to a conceptual framework that unites the studies while also highlighting the approaches and methods each research team is pursuing to improve assessment outcomes within their respective linguistically diverse communities. A disorder-within-diversity framework is presented to replace previous difference-versus-disorder approaches. Then, the 5 studies within the forum are reviewed by clinical question, type of tool(s), and analytical approach. Across studies of different linguistically diverse groups, research teams are seeking answers to similar questions about child language screening and diagnostic practices, using similar analytical approaches to answer their questions, and finding promising results with tools focused on morphosyntax. More studies that are modeled after or designed to extend those in this forum are needed to improve the accuracy with which developmental language disorder is identified.
Systematic review of fall risk screening tools for older patients in acute hospitals.
Matarese, Maria; Ivziku, Dhurata; Bartolozzi, Francesco; Piredda, Michela; De Marinis, Maria Grazia
2015-06-01
To determine the most accurate fall risk screening tools for predicting falls among patients aged 65 years or older admitted to acute care hospitals. Falls represent a serious problem in older inpatients due to the potential physical, social, psychological and economic consequences. Older inpatients present with risk factors associated with age-related physiological and psychological changes, as well as multiple morbidities. Thus, fall risk screening tools for older adults should include these specific risk factors. There are no published recommendations addressing which tools are appropriate for older hospitalized adults. Systematic review. The MEDLINE, CINAHL and Cochrane electronic databases were searched from January 1981 to April 2013. Only prospective validation studies reporting sensitivity and specificity values were included. The recommendations of the Cochrane Handbook of Diagnostic Test Accuracy Reviews were followed. Three fall risk assessment tools were evaluated in seven articles. Due to the limited number of studies, meta-analysis was carried out only for the STRATIFY and the Hendrich Fall Risk Model II. In the combined analysis, the Hendrich Fall Risk Model II demonstrated higher sensitivity than the STRATIFY, while the STRATIFY showed higher specificity. For both tools, the Youden index indicated low prognostic accuracy. The identified tools do not demonstrate predictive values as high as needed for identifying older inpatients at risk for falls. For this reason, no tool can be recommended for fall detection. More research is needed to evaluate fall risk screening tools for older inpatients. © 2014 John Wiley & Sons Ltd.
A method which can enhance the optical-centering accuracy
NASA Astrophysics Data System (ADS)
Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua
2014-09-01
Optical alignment machining is an effective method of ensuring the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn depends on the rotating accuracy of the lathe and the accuracy of the optical-centering judgment. When a rotating accuracy of 0.2 μm is achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, i.e. the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system combining the axis-determination tool with the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.
ERIC Educational Resources Information Center
Glazier, Kimberly
2014-01-01
Objective: The study aimed to increase awareness of OCD symptomatology among doctoral students in clinical, counseling and school psychology through the implementation of a comprehensive OCD education-based training tool. Method: The program directors across all APA-accredited clinical, counseling, and school psychology doctoral graduate programs…
Armon-Lotem, Sharon; Meir, Natalia
2016-11-01
Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. 
© 2016 Royal College of Speech and Language Therapists.
SU-F-T-405: Development of a Rapid Cardiac Contouring Tool Using Landmark-Driven Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelletier, C; Jung, J; Mosher, E
2016-06-15
Purpose: This study aims to develop a tool to rapidly delineate cardiac substructures for use in dosimetry for large-scale clinical trials or epidemiological investigations. The goal is to produce a system that can semi-automatically delineate nine cardiac structures to a reasonable accuracy within a couple of minutes. Methods: The cardiac contouring tool employs a most-similar-atlas method, where a selection criterion is used to pre-select the most similar model to the patient from a library of pre-defined atlases. Sixty contrast-enhanced cardiac computed tomography angiography (CTA) scans (30 male and 30 female) were manually contoured to serve as the atlas library. For each CTA, 12 structures were delineated. The Kabsch algorithm was used to compute the optimum rotation and translation matrices between the patient and each atlas. The minimum root mean squared distance between the patient and atlas after transformation was used to select the most similar atlas. An initial study using 10 CTA sets was performed to assess system feasibility. A leave-one-patient-out evaluation was performed, and fit criteria were calculated to evaluate the fit accuracy compared with manual contours. Results: For the pilot study, mean Dice indices of 0.895 were achieved for the whole heart, 0.867 for the ventricles, and 0.802 for the atria. In addition, mean distance was measured via the chord length distribution (CLD) between ground truth and the atlas structures for the four coronary arteries. The mean CLD for all coronary arteries was below 14 mm, with the left circumflex artery showing the best agreement (7.08 mm). Conclusion: The cardiac contouring tool is able to delineate cardiac structures with reasonable accuracy in less than 90 seconds. Pilot data indicate that the system is able to delineate the whole heart and ventricles with reasonable accuracy using even a limited library. We are extending the atlas library to 60 adult males and females in total.
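The Kabsch step named in the abstract computes the rotation (plus centroid translation) that minimizes the root mean squared distance between two landmark sets; the atlas with the minimal residual RMSD is then selected. A minimal sketch, where function names and landmark data are illustrative rather than the study's implementation:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (N, 3) point sets after optimal
    rigid superposition of P onto Q via the Kabsch algorithm."""
    Pc = P - P.mean(axis=0)                   # remove translation
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                             # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    diff = (R @ Pc.T).T - Qc
    return np.sqrt((diff**2).sum() / len(P))

def most_similar_atlas(patient_landmarks, atlases):
    """Index of the atlas whose landmarks best match the patient's."""
    return min(range(len(atlases)),
               key=lambda i: kabsch_rmsd(atlases[i], patient_landmarks))
```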
Screening for sepsis in general hospitalized patients: a systematic review.
Alberto, L; Marshall, A P; Walker, R; Aitken, L M
2017-08-01
Sepsis is a condition widely observed outside critical care areas. To examine the application of sepsis screening tools for early recognition of sepsis in general hospitalized patients in order to: (i) identify the accuracy of these tools; (ii) determine the outcomes associated with their implementation; and (iii) describe the implementation process. A systematic review method was used. The PubMed, CINAHL, Cochrane, Scopus, Web of Science, and Embase databases were systematically searched for primary articles, published from January 1990 to June 2016, that investigated screening tools or alert mechanisms for early identification of sepsis in adult general hospitalized patients. The review protocol was registered with PROSPERO (CRD42016042261). More than 8000 citations were screened for eligibility after duplicates had been removed. Six articles met the inclusion criteria, testing two types of sepsis screening tools. Electronic tools can capture and recognize abnormal variables and activate an alert in real time; however, the accuracy of these tools was inconsistent across studies, with only one demonstrating high specificity and sensitivity. Paper-based, nurse-led screening tools appear to be more sensitive in the identification of septic patients, but have only been studied in small samples and particular populations. Process-of-care measures appear to be enhanced; however, demonstrating improved outcomes is more challenging. Implementation details are rarely reported. Heterogeneity of the studies prevented meta-analysis. Clinicians, researchers and health decision-makers should consider these findings and limitations when implementing screening tools, research or policy on sepsis recognition in general hospitalized patients. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Assessment of neuropsychiatric symptoms in dementia: toward improving accuracy
Stella, Florindo
2013-01-01
This article discusses tools frequently used for assessing neuropsychiatric symptoms in patients with dementia, particularly Alzheimer's disease. The aims were to discuss the main tools for evaluating behavioral disturbances and, in particular, the accuracy of the Neuropsychiatric Inventory – Clinician Rating Scale (NPI-C). The clinical approach to and diagnosis of neuropsychiatric syndromes in dementia require suitable accuracy. Advances in the recognition and early, accurate diagnosis of psychopathological symptoms help guide appropriate pharmacological and non-pharmacological interventions. In addition, recommended standardized and validated measurements contribute to both scientific research and clinical practice. Emotional distress, caregiver burden, and the cognitive impairment often experienced by elderly caregivers may affect the quality of caregiver reports. The clinician rating approach helps attenuate these misinterpretations. In this scenario, the NPI-C is a promising and versatile tool for assessing neuropsychiatric syndromes in dementia, offering good accuracy and high reliability, based mainly on the diagnostic impression of the clinician. This tool supports two strategies: a comprehensive assessment of neuropsychiatric symptoms in dementia, or the investigation of specific psychopathological syndromes such as agitation, depression, anxiety, apathy, sleep disorders, and aberrant motor disorders, among others. PMID:29213846
Toward an Attention-Based Diagnostic Tool for Patients With Locked-in Syndrome.
Lesenfants, Damien; Habbal, Dina; Chatelle, Camille; Soddu, Andrea; Laureys, Steven; Noirhomme, Quentin
2018-03-01
Electroencephalography (EEG) has been proposed as a supplemental tool for reducing clinical misdiagnosis in severely brain-injured populations, helping to distinguish conscious from unconscious patients. We studied the use of spectral entropy as a measure of focal attention in order to develop a motor-independent, portable, and objective diagnostic tool for patients with locked-in syndrome (LIS), addressing the issues of accuracy and training requirements. Data from 20 healthy volunteers, 6 LIS patients, and 10 patients with vegetative state/unresponsive wakefulness syndrome (VS/UWS) were included. Spectral entropy was computed during a gaze-independent two-class (attention vs. rest) paradigm and compared with classification based on EEG rhythms (delta, theta, alpha, and beta). Spectral entropy classification during the attention-rest paradigm showed 93% and 91% accuracy in healthy volunteers and LIS patients, respectively. VS/UWS patients were at chance level. EEG rhythm classification reached a lower accuracy than spectral entropy. Resting-state EEG spectral entropy could not distinguish individual VS/UWS patients from LIS patients. The present study provides evidence that an EEG-based measure of attention could detect command-following in patients with severe motor disabilities. The entropy system detected a response to command in all healthy subjects and LIS patients, while none of the VS/UWS patients showed a response to command using this system.
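Spectral entropy as used here summarizes how concentrated the EEG power spectrum is. One common definition, the normalized Shannon entropy of the power spectral density within a frequency band, can be sketched as follows (the study's exact band limits, windowing and normalization are assumptions, not reported details):

```python
import numpy as np

def spectral_entropy(signal, fs, band=(1.0, 30.0)):
    """Normalized Shannon entropy of the power spectrum within a band.

    Returns a value in [0, 1]; lower values indicate a more peaked
    (e.g., rhythm-dominated) spectrum, higher values a flatter one.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal))**2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    p = psd[mask] / psd[mask].sum()       # normalize to a distribution
    p = p[p > 0]                          # avoid log(0)
    return float(-(p * np.log(p)).sum() / np.log(len(p)))
```

A narrowband oscillation yields low entropy, while broadband desynchronized activity yields entropy near 1, which is the contrast a two-class attention-vs-rest classifier can exploit.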
1985-10-01
83K0385 Final Report, Vol. 4: Thermal Effects on the Accuracy of Numerically Controlled Machine Tools. Prepared by Raghunath Venugopal and M. M. Barash, October 1985.
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracy on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is also demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.
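The downscaling test itself is specific to this work, but the design-order convergence it probes can be illustrated with the standard companion technique: estimating the observed order of accuracy from errors on successively refined grids. The sketch below uses a second-order central difference for u'' on a uniform 1D grid, not the paper's finite-volume schemes:

```python
import math

def observed_order(errors, ratio=2.0):
    """Observed convergence orders from errors on successively refined
    grids (grid spacing divided by `ratio` at each refinement)."""
    return [math.log(e1 / e2) / math.log(ratio)
            for e1, e2 in zip(errors, errors[1:])]

def l2_error(n):
    """Discrete L2 error of the second-order central difference for u''
    with u(x) = sin(x) on [0, pi], against the exact value -sin(x)."""
    h = math.pi / n
    xs = [i * h for i in range(n + 1)]
    u = [math.sin(x) for x in xs]
    err2 = 0.0
    for i in range(1, n):   # interior points only
        approx = (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2
        err2 += (approx + math.sin(xs[i]))**2 * h
    return math.sqrt(err2)

errors = [l2_error(n) for n in (16, 32, 64, 128)]
orders = observed_order(errors)   # expected to approach 2.0
```

A scheme is design-order accurate when these observed orders settle at the intended value under refinement; the paper's contribution is a cheaper, more local way of probing the same property on irregular grids.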
Fuzzy regression modeling for tool performance prediction and degradation detection.
Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L
2010-10-01
In this paper, the viability of using a Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self-Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem into a simplified linear format in order to further increase the prediction accuracy and rate of convergence. The efficacy of the proposed FRM is tested through a case study: predicting the remaining useful life of a ball-nose milling cutter during dry machining of hardened tool steel with a hardness of 52-54 HRC. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior to conventional MRM, Back-Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.
Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors
NASA Astrophysics Data System (ADS)
Holmes, C. S.; Headley, M.; Hart, P. W.
2017-08-01
Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.
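Statistical capability over a production run is commonly summarized with the Cp and Cpk indices; the abstract does not state which metrics this study used, so the following is a generic, illustrative sketch:

```python
def capability_indices(mean, stdev, lsl, usl):
    """Process capability (Cp) and centered capability (Cpk) indices.

    Cp compares the tolerance band to the process spread; Cpk also
    penalizes an off-center mean. Values >= 1.33 are a common
    acceptance threshold in machine capability studies.
    """
    cp = (usl - lsl) / (6.0 * stdev)
    cpk = min(usl - mean, mean - lsl) / (3.0 * stdev)
    return cp, cpk

# A centered process exactly filling +/-3 sigma of the tolerance band.
cp, cpk = capability_indices(mean=7.0, stdev=1.0, lsl=4.0, usl=10.0)
```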
The use of tools for learning science in small groups
NASA Astrophysics Data System (ADS)
Valdes, Rosa Maria
2000-10-01
"Hands-on" learning through the use of tools or manipulatives representative of science concepts has long been an important component of the middle school science curriculum. However, scarce research exists on the impact of tool use on the learning of science concepts, particularly on the processes involved in such learning. This study investigated how the use of tools by students engaged in small-group discussion about the concept of electrical resistance, and the explanations that accompany such use, lead to improved understanding of the concept. Specifically, the main hypothesis of the study was that students who observe explanations by their high-ability peers accompanied by accurate tool use, and who are highly engaged in these explanations, would show learning gains. Videotaped interactions of students working in small groups to solve tasks on electricity were coded using scales that measured the accuracy of the tool use, the accuracy of the explanations presented, and the level of engagement of target students. Data from 48 students whose knowledge of the concept of resistance was initially low, and who were also determined to be low achievers as shown by their scores on a set of pretests, were analyzed. Quantitative and qualitative analyses showed that students who observed their peers give explanations using tools and who were at least moderately engaged made gains in their understanding of resistance. Specifically, the results of regression analyses showed that both the level of accuracy of a high-ability peer's explanation and the target student's level of engagement in the explanation significantly predicted target students' outcome scores. The number of presentations offered by a high-ability peer also significantly predicted outcome scores.
Case study analyses of six students found that students who improved their scores the most from pretest to posttest had high-ability peers who tended to be verbal and who gave numerous explanations, whereas students who improved the least had high-ability peers who gave no explanations at all. Important implications of this study for teaching are that (1) teachers should group students heterogeneously and should monitor students' small groups to ensure that students are producing content-oriented discussion, and (2) students should be allowed to manipulate tools that enable experimentation as they build understanding and communicate abstract ideas.
ERIC Educational Resources Information Center
Borgmeier, Chris; Horner, Robert H.
2006-01-01
Faced with limited resources, schools require tools that increase the accuracy and efficiency of functional behavioral assessment. Yarbrough and Carr (2000) provided evidence that informant confidence ratings of the likelihood of problem behavior in specific situations offered a promising tool for predicting the accuracy of function-based…
Mistry, Binoy; Stewart De Ramirez, Sarah; Kelen, Gabor; Schmitz, Paulo S K; Balhara, Kamna S; Levin, Scott; Martinez, Diego; Psoter, Kevin; Anton, Xavier; Hinson, Jeremiah S
2018-05-01
We assess accuracy and variability of triage score assignment by emergency department (ED) nurses using the Emergency Severity Index (ESI) in 3 countries. In accordance with previous reports and clinical observation, we hypothesize low accuracy and high variability across all sites. This cross-sectional multicenter study enrolled 87 ESI-trained nurses from EDs in Brazil, the United Arab Emirates, and the United States. Standardized triage scenarios published by the Agency for Healthcare Research and Quality (AHRQ) were used. Accuracy was defined by concordance with the AHRQ key and calculated as percentages. Accuracy comparisons were made with one-way ANOVA and paired t test. Interrater reliability was measured with Krippendorff's α. Subanalyses based on nursing experience and triage scenario type were also performed. Mean accuracy pooled across all sites and scenarios was 59.2% (95% confidence interval [CI] 56.4% to 62.0%) and interrater reliability was modest (α=.730; 95% CI .692 to .767). There was no difference in overall accuracy between sites or according to nurse experience. Medium-acuity scenarios were scored with greater accuracy (76.4%; 95% CI 72.6% to 80.3%) than high- or low-acuity cases (44.1%, 95% CI 39.3% to 49.0% and 54%, 95% CI 49.9% to 58.2%), and adult scenarios were scored with greater accuracy than pediatric ones (66.2%, 95% CI 62.9% to 69.7% versus 46.9%, 95% CI 43.4% to 50.3%). In this multinational study, concordance of nurse-assigned ESI score with reference standard was universally poor and variability was high. Although the ESI is the most popular ED triage tool in the United States and is increasingly used worldwide, our findings point to a need for more reliable ED triage tools. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
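Interrater reliability in the triage study above was measured with Krippendorff's α. For nominal ratings with no missing data, α can be sketched as follows; the example units are illustrative, not study data:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data with no missing ratings.

    `units` is a list of rating tuples, one tuple per rated unit
    (e.g. per triage scenario), one value per rater. Disagreement is
    0/1 (nominal), so alpha = 1 - D_observed / D_expected.
    """
    coincidences = Counter()
    for ratings in units:
        m = len(ratings)  # raters per unit (must be >= 2)
        for a, b in permutations(range(m), 2):
            coincidences[(ratings[a], ratings[b])] += 1.0 / (m - 1)
    n = sum(coincidences.values())  # total pairable values
    marginals = Counter()
    for (c, _), w in coincidences.items():
        marginals[c] += w
    d_obs = sum(w for (c, k), w in coincidences.items() if c != k) / n
    d_exp = sum(marginals[c] * marginals[k]
                for c in marginals for k in marginals if c != k) / (n * (n - 1))
    return 1.0 - d_obs / d_exp

# Two raters, four units: they agree on three and disagree on one.
alpha = krippendorff_alpha_nominal([("a", "a"), ("a", "a"), ("b", "b"), ("a", "b")])
```

The study's ESI scores are ordinal, so the published analysis would have used an ordinal (weighted) difference function rather than the nominal 0/1 version shown here.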
NASA Astrophysics Data System (ADS)
Xu, Zhe; Peng, M. G.; Tu, Lin Hsin; Lee, Cedric; Lin, J. K.; Jan, Jian Feng; Yin, Alb; Wang, Pei
2006-10-01
Nowadays, most foundries are paying more and more attention to reducing CD width. Although lithography technologies have developed drastically, mask data accuracy is a bigger challenge than before. In addition, mask (reticle) prices have also risen drastically, so data accuracy requires special treatment. We have developed a system called eFDMS to guarantee mask data accuracy. eFDMS performs automatic back-checking of the mask tooling database and handles the data transmission of mask tooling. We integrate our own eFDMS system with the standard mask tooling system K2 so that the upstream and downstream processes of the K2 mask tooling main body run smoothly and correctly, as anticipated. Competition in the IC marketplace is gradually shifting from high-tech processes to lower prices, so controlling product cost plays an increasingly significant role in foundries; cost must be addressed ahead of the intensifying competition.
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
NASA Astrophysics Data System (ADS)
Li, Kesai; Gao, Jie; Ju, Xiaodong; Zhu, Jun; Xiong, Yanchun; Liu, Shuai
2018-05-01
This paper proposes a new tool design for ultra-deep azimuthal electromagnetic (EM) resistivity logging while drilling (LWD) for deeper geosteering and formation evaluation, which can benefit hydrocarbon exploration and development. First, a forward numerical simulation of azimuthal EM resistivity LWD is created based on the fast Hankel transform (FHT) method, and its accuracy is confirmed under classic formation conditions. Then, a reasonable range of tool parameters is designed by analyzing the logging response. However, current technological limitations pose challenges to selecting appropriate tool parameters for ultra-deep azimuthal detection under detectable signal conditions. Therefore, this paper uses grey relational analysis (GRA) to quantify the influence of tool parameters on voltage and azimuthal investigation depth. After analyzing thousands of simulation data points under different environmental conditions, a random forest is used to fit the data and identify an optimal combination of tool parameters, owing to its high efficiency and accuracy. Finally, the structure of the ultra-deep azimuthal EM resistivity LWD tool is designed, with a theoretical azimuthal investigation depth of 27.42-29.89 m in different classic isotropic and anisotropic formations. This design serves as a reliable theoretical foundation for efficient geosteering and formation evaluation in high-angle and horizontal (HA/HZ) wells in the future.
Diagnostic Tools for Acute Anterior Cruciate Ligament Injury: GNRB, Lachman Test, and Telos.
Ryu, Seung Min; Na, Ho Dong; Shon, Oog Jin
2018-06-01
The purpose of this study is to compare the accuracy of the GNRB arthrometer (Genourob), the Lachman test, and the Telos device (Telos GmbH) in acute anterior cruciate ligament (ACL) injuries and to evaluate the accuracy of each diagnostic tool according to the length of time from injury to examination. From September 2015 to September 2016, 40 cases of complete ACL rupture were reviewed. We divided the time from injury to examination into three periods of 10 days each and analyzed the diagnostic tools according to the time frame. An analysis of the area under the curve (AUC) of a receiver operating characteristic curve showed that all diagnostic tools were fairly informative. The GNRB showed a higher AUC than the other diagnostic tools. In 10 cases assessed within 10 days after injury, the GNRB showed a statistically significant side-to-side difference in laxity (p<0.001), whereas the Telos test and Lachman test did not show significantly different laxity (p=0.541 and p=0.413, respectively). All diagnostic values of the GNRB were better than those of the other diagnostic tools in acute ACL injuries. The GNRB was more effective in acute ACL injuries examined within 10 days of injury. The GNRB arthrometer can be a useful diagnostic tool for acute ACL injuries.
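The AUC values compared above can be computed directly as the probability that a randomly chosen injured case scores higher than an uninjured one (the Mann-Whitney form, with ties counted as half). A sketch with hypothetical laxity values, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) form:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative side-to-side laxity differences (mm) for injured vs
# uninjured knees under some instrumented test.
injured = [3.1, 2.8, 4.0, 1.9]
intact = [0.5, 1.0, 1.9, 0.7]
```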
Cavalli, Eddy; Colé, Pascale; Leloup, Gilles; Poracchia-George, Florence; Sprenger-Charolles, Liliane; El Ahmadi, Abdessadek
Developmental dyslexia is a lifelong impairment affecting 5% to 10% of the population. In French-speaking countries, although a number of standardized tests for dyslexia in children are available, tools suitable to screen for dyslexia in adults are lacking. In this study, we administered the Alouette reading test to a normative sample of 164 French university students without dyslexia and a validation sample of 83 students with dyslexia. The Alouette reading test is designed to screen for dyslexia in children, since it taps skills that are typically deficient in dyslexia (i.e., phonological skills). However, the test's psychometric properties have not previously been available, and it is not standardized for adults. The results showed that, on the Alouette test, dyslexic readers were impaired on measures of accuracy, speed, and efficiency (accuracy/reading time). We also found significant correlations between the Alouette reading efficiency and phonological efficiency scores. Finally, in terms of the Alouette test, speed-accuracy trade-offs were found in both groups, and optimal cutoff scores were determined with receiver operating characteristic (ROC) curve analysis, yielding excellent discriminatory power, with 83.1% sensitivity and 100% specificity for reading efficiency. Thus, this study supports the Alouette test as a sensitive and specific screening tool for adults with dyslexia.
[Intelligent systems tools in the diagnosis of acute coronary syndromes: A systemic review].
Sprockel, John; Tejeda, Miguel; Yate, José; Diaztagle, Juan; González, Enrique
2017-03-27
Acute myocardial infarction is the leading cause of non-communicable deaths worldwide. Its diagnosis is a highly complex task, for which modelling through automated methods has been attempted. A systematic review of the literature on diagnostic studies applying intelligent systems tools to the diagnosis of acute coronary syndromes was performed using the Medline, Embase, Scopus, IEEE/IET Electronic Library, ISI Web of Science, Latindex and LILACS databases. The review process was conducted independently by 2 reviewers, and discrepancies were resolved through the participation of a third person. The operational characteristics of the studied tools were extracted. A total of 35 references met the inclusion criteria. In 22 (62.8%) cases, neural networks were used. In five studies, the performances of several intelligent systems tools were compared. Thirteen studies sought to diagnose all acute coronary syndromes, and 22 studied only infarctions. In 21 cases, clinical and electrocardiographic aspects were used as input data, and in 10, only electrocardiographic data were used. Most intelligent systems used the clinical context as the reference standard. High rates of diagnostic accuracy were found, with better performance from neural networks and support vector machines compared with statistical pattern-recognition tools and decision trees. Extensive evidence shows that intelligent systems tools achieve a greater degree of accuracy than some clinical algorithms or scales and thus should be considered appropriate tools for supporting diagnostic decisions for acute coronary syndromes. Copyright © 2017 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.
Accuracy of digital images in the detection of marginal microleakage: an in vitro study.
Alvarenga, Fábio Augusto; Andrade, Marcelo Ferrarezi; Pinelli, Camila; Rastelli, Alessanda Nara; Victorino, Keli Regina; Loffredo, Leonor de
2012-08-01
To evaluate the accuracy of Image Tool Software 3.0 (ITS 3.0) to detect marginal microleakage using the stereomicroscope as the validation criterion and ITS 3.0 as the tool under study. Class V cavities were prepared at the cementoenamel junction of 61 bovine incisors, and 53 halves of them were used. Using the stereomicroscope, microleakage was classified dichotomously: presence or absence. Next, ITS 3.0 was used to obtain measurements of the microleakage, so that 0.75 was taken as the cut-off point, and values equal to or greater than 0.75 indicated its presence, while values between 0.00 and 0.75 indicated its absence. Sensitivity and specificity were calculated by point and given as 95% confidence interval (95% CI). The accuracy of the ITS 3.0 was verified with a sensitivity of 0.95 (95% CI: 0.89 to 1.00) and a specificity of 0.92 (95% CI: 0.84 to 0.99). Digital diagnosis of marginal microleakage using ITS 3.0 was sensitive and specific.
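Sensitivity and specificity with 95% confidence intervals, as reported in the microleakage study above, follow from a 2x2 table against the reference standard. A sketch using the normal-approximation (Wald) interval with illustrative counts, not the study's raw data:

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and normal-approximation (Wald) 95% CI,
    clipped to [0, 1]."""
    p = successes / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Illustrative 2x2 counts against a stereomicroscope reference.
tp, fn, tn, fp = 57, 3, 55, 5
sens = proportion_ci(tp, tp + fn)   # (estimate, lower, upper)
spec = proportion_ci(tn, tn + fp)
```

For small samples or proportions near 0 or 1, Wilson or exact (Clopper-Pearson) intervals behave better than the Wald interval shown here.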
Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc
2013-06-01
To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.
The development of a probabilistic approach to forecast coastal change
Lentz, Erika E.; Hapke, Cheryl J.; Rosati, Julie D.; Wang, Ping; Roberts, Tiffany M.
2011-01-01
This study demonstrates the applicability of a Bayesian probabilistic model as an effective tool in predicting post-storm beach changes along sandy coastlines. Volume change and net shoreline movement are modeled for two study sites at Fire Island, New York in response to two extratropical storms in 2007 and 2009. Both study areas include modified areas adjacent to unmodified areas in morphologically different segments of coast. Predicted outcomes are evaluated against observed changes to test model accuracy and uncertainty along 163 cross-shore transects. Results show strong agreement in the cross validation of predictions vs. observations, with 70-82% accuracies reported. Although no consistent spatial pattern in inaccurate predictions could be determined, the highest prediction uncertainties appeared in locations that had been recently replenished. Further testing and model refinement are needed; however, these initial results show that Bayesian networks have the potential to serve as important decision-support tools in forecasting coastal change.
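At each node, a Bayesian network of the kind used in the coastal-change study reduces to discrete Bayes-rule updates. A minimal sketch in which the states, evidence, and probabilities are all hypothetical:

```python
def posterior(prior, likelihoods, evidence):
    """Discrete Bayes update: P(state | evidence) for each state.

    `prior` maps state -> P(state); `likelihoods` maps state ->
    {evidence_value: P(evidence_value | state)}."""
    joint = {s: prior[s] * likelihoods[s][evidence] for s in prior}
    z = sum(joint.values())
    return {s: v / z for s, v in joint.items()}

# Illustrative: probability of "high" beach volume change after
# observing a large-storm-surge indicator.
prior = {"high": 0.3, "low": 0.7}
like = {"high": {"surge": 0.8, "no_surge": 0.2},
        "low": {"surge": 0.3, "no_surge": 0.7}}
post = posterior(prior, like, "surge")
```

A full network chains many such updates over conditional probability tables learned from observed transects.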
The Diagnostic Accuracy of the Berg Balance Scale in Predicting Falls.
Park, Seong-Hi; Lee, Young-Shin
2017-11-01
This study aimed to evaluate the predictive validity of the Berg Balance Scale (BBS) as a screening tool for fall risk among those with varied levels of balance. A total of 21 studies reporting predictive validity of the BBS for fall risk were meta-analyzed. With regard to the overall predictive validity of the BBS, the pooled sensitivity and specificity were 0.72 and 0.73, respectively; the area under the curve was 0.84. The findings showed statistical heterogeneity among studies. Among the sub-groups, the age group of those younger than 65 years, those with neuromuscular disease, those with 2+ falls, and those with a cutoff point of 45 to 49 showed better sensitivity with statistically less heterogeneity. The empirical evidence indicates that the BBS is a suitable tool to screen for the risk of falls and shows good predictability when used with the appropriate criteria and applied to those with neuromuscular disease.
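The simplest way to pool sensitivity and specificity across studies is to sum the 2x2 counts; published diagnostic meta-analyses such as this one typically fit bivariate random-effects models instead, so the sketch below shows only the naive version, with hypothetical tables:

```python
def pooled_sens_spec(tables):
    """Naively pool 2x2 tables given as (tp, fn, tn, fp) by summing
    counts across studies. This ignores between-study heterogeneity,
    which real meta-analyses model explicitly."""
    tp = sum(t[0] for t in tables)
    fn = sum(t[1] for t in tables)
    tn = sum(t[2] for t in tables)
    fp = sum(t[3] for t in tables)
    return tp / (tp + fn), tn / (tn + fp)

# Three illustrative studies.
tables = [(30, 10, 40, 15), (25, 12, 33, 12), (17, 6, 19, 8)]
sens, spec = pooled_sens_spec(tables)
```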
Manchikanti, Laxmaiah; Malla, Yogesh; Wargo, Bradley W; Cash, Kimberly A; Pampati, Vidyasagar; Damron, Kim S; McManus, Carla D; Brandon, Doris E
2010-01-01
Therapeutic use, overuse, abuse, and diversion of controlled substances in managing chronic non-cancer pain continues to be an issue for physicians and patients. It has been stated that physicians, along with the public; federal, state, and local government; professional associations; and pharmaceutical companies all share responsibility for preventing abuse of controlled prescription drugs. The challenge is to eliminate or significantly curtail abuse of controlled prescription drugs while still assuring the proper treatment of those patients. A number of techniques, instruments, and tools have been described to monitor controlled substance use and abuse. The multiple techniques and tools available for adherence monitoring include urine drug testing in conjunction with prescription monitoring programs and other screening tests. However, urine drug testing is associated with multiple methodological flaws. Multiple authors have provided conflicting results on diagnostic accuracy, with differing opinions, offered in a non-systematic fashion, about how to monitor adherence. Thus far, there have not been any studies systematically assessing the diagnostic accuracy of immunoassay against laboratory testing. A diagnostic accuracy study of urine drug testing. An interventional pain management practice, a specialty referral center, a private practice setting in the United States. To compare the information obtained by point of care (POC) or in-office urine drug testing (index test) to the information found when all drugs and analytes are tested by liquid chromatography tandem mass spectrometry (LC/MS/MS, reference test) in the same urine sample. The study is designed to include 1,000 patients with chronic pain receiving controlled substances. The primary outcome measure is diagnostic accuracy. Patients will be tested for various controlled substances, including opioids, benzodiazepines, and illicit drugs.
The diagnostic accuracy study is performed utilizing the Standards for Reporting of Diagnostic Accuracy Studies (STARD) initiative which established reporting guidelines for diagnostic accuracy studies to improve the quality of reporting. The prototypical flow diagram of diagnostic accuracy study as described by STARD will be utilized. Results of diagnostic accuracy and correlation of clinical factors in relation to threshold levels, prevalence of abuse, false-positives, false-negatives, influence of other drugs, and demographic characteristics will be calculated. The limitations include lack of availability of POC testing with lower cutoff levels. This article presents a protocol for a diagnostic accuracy study of urine drug testing. The protocol also will permit correlation of various clinical factors in relation to threshold levels, prevalence of abuse, false-positives, false-negatives, influence of other drugs, and demographic characteristics. NCT 01052155.
NASA Technical Reports Server (NTRS)
Gustafson, T. D.; Adams, M. S.
1973-01-01
Research was initiated to use aerial photography as an investigative tool in studies that are part of an intensive aquatic ecosystem research effort at Lake Wingra, Madison, Wisconsin. It is anticipated that photographic techniques would supply information about the growth and distribution of littoral macrophytes with efficiency and accuracy greater than conventional methods.
Detecting Diseases in Medical Prescriptions Using Data Mining Tools and Combining Techniques.
Teimouri, Mehdi; Farzadfar, Farshad; Soudi Alamdari, Mahsa; Hashemi-Meshkini, Amir; Adibi Alamdari, Parisa; Rezaei-Darzi, Ehsan; Varmaghani, Mehdi; Zeynalabedini, Aysan
2016-01-01
Data about the prevalence of communicable and non-communicable diseases, as one of the most important categories of epidemiological data, are used for interpreting the health status of communities. This study aims to calculate the prevalence of outpatient diseases through the characterization of outpatient prescriptions. The data used in this study were collected from 1412 prescriptions for various types of diseases, from which we have focused on the identification of ten diseases. In this study, data mining tools are used to identify the diseases for which prescriptions are written. In order to evaluate the performance of these methods, we compare the results with a naïve method. Then, combining methods are used to improve the results. Results showed that the Support Vector Machine, with an accuracy of 95.32%, performs better than the other methods. The naïve method, with an accuracy of 67.71%, is 20% worse than the nearest-neighbor method, which has the lowest accuracy among the classification algorithms. The results indicate that the implemented data mining algorithms perform well in the characterization of outpatient diseases. These results can help in choosing appropriate methods for the classification of prescriptions at larger scales.
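Comparing a learned classifier against a naïve baseline, as done above, can be sketched with a majority-class baseline versus 1-nearest-neighbor classification; the abstract does not specify its naïve method, and the features and labels below are hypothetical:

```python
from collections import Counter

def majority_baseline(train_labels, test_labels):
    """Accuracy of always predicting the most common training label."""
    guess = Counter(train_labels).most_common(1)[0][0]
    return sum(lbl == guess for lbl in test_labels) / len(test_labels)

def one_nn_accuracy(train, test):
    """1-nearest-neighbor accuracy with Euclidean distance.
    `train`/`test` are lists of (feature_tuple, label)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hits = 0
    for feats, lbl in test:
        nearest = min(train, key=lambda t: dist2(t[0], feats))
        hits += nearest[1] == lbl
    return hits / len(test)

# Illustrative prescription feature vectors (e.g. scaled drug counts).
train = [((0.0, 0.0), "flu"), ((0.2, 0.1), "flu"), ((0.1, 0.2), "flu"),
         ((1.0, 1.0), "diabetes"), ((0.9, 1.1), "diabetes")]
test = [((0.1, 0.0), "flu"), ((1.1, 0.9), "diabetes"), ((0.8, 1.0), "diabetes")]
```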
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered and authentic, to capture context-relevant images, and to meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person; as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and speed of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be applied to the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white-balance tool or an automatic flash. We therefore recommend that the use of a color management tool be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.
Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P
2011-09-01
A new pulsed neutron gamma density (NGD) logging method has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes studies of the near and far density measurement accuracy of NGD logging at two spacings, and of borehole influences, using Monte Carlo simulation. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects using conventional methods because both near and far density measurements are significantly sensitive to standoff and mud properties. Copyright © 2011 Elsevier Ltd. All rights reserved.
Margolis, Amanda R; Martin, Beth A; Mott, David A
2016-01-01
To determine the feasibility and fidelity of student pharmacists collecting patient medication list information using a structured interview tool and the accuracy of documenting the information. The medication lists were used by a community pharmacist to provide a targeted medication therapy management (MTM) intervention. Descriptive analysis of patient medication lists collected with telephone interviews. Ten trained student pharmacists collected the medication lists. Trained student pharmacists conducted audio-recorded telephone interviews with 80 English-speaking, community-dwelling older adults using a structured interview tool to collect and document medication lists. Feasibility was measured using the number of completed interviews, the time student pharmacists took to collect the information, and pharmacist feedback. Fidelity to the interview tool was measured by assessing student pharmacists' adherence to asking all scripted questions and probes. Accuracy was measured by comparing the audio-recorded interviews to the medication list information documented in an electronic medical record. On average, it took student pharmacists 26.7 minutes to collect the medication lists. The community pharmacist said the medication lists were complete and that having the medication lists saved time and allowed him to focus on assessment, recommendations, and education during the targeted MTM session. Fidelity was high, with an overall proportion of asked scripted probes of 83.75% (95% confidence interval [CI], 80.62-86.88%). Accuracy was also high for both prescription (95.1%; 95% CI, 94.3-95.8%) and nonprescription (90.5%; 95% CI, 89.4-91.4%) medications. Trained student pharmacists were able to use an interview tool to collect and document medication lists with a high degree of fidelity and accuracy. 
This study suggests that student pharmacists or trained technicians may be able to collect patient medication lists to facilitate MTM sessions in the community pharmacy setting. Evaluating the sustainability of using student pharmacists or trained technicians to collect medication lists is needed. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Quantifying Uncertainties in Navigation and Orbit Propagation Analyses
NASA Technical Reports Server (NTRS)
Krieger, Andrew W.; Welch, Bryan W.
2017-01-01
A tool used to calculate dilution of precision (DOP) was created in order to assist the Space Communications and Navigation (SCaN) program to analyze current and future user missions. The SCaN Center for Engineering, Networks, Integration, and Communication (SCENIC) is developing a new user interface (UI) to augment and replace the capabilities of currently used commercial software, such as Systems Tool Kit (STK). The DOP tool will be integrated in the SCENIC UI and will be used to analyze the accuracy of navigation solutions. This tool was developed using MATLAB and free and open-source tools to save cost and to use already existing orbital software libraries. GPS DOP data was collected and used for validation purposes. The similarities between the DOP tool results and GPS data show that the DOP tool is performing correctly. Additional improvements can be made in the DOP tool to improve its accuracy and performance in analyzing navigation solutions.
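DOP itself is a standard computation: build a geometry matrix from the unit line-of-sight vectors between receiver and each visible satellite, then take square roots of traces of (GᵀG)⁻¹. The sketch below shows that textbook formulation, not the SCENIC tool's implementation; it assumes ECEF positions in a common length unit and at least four satellites in non-degenerate geometry.

```python
import numpy as np

def dop(sat_positions, receiver):
    """Geometric, position, and time dilution of precision (GDOP, PDOP, TDOP).

    Textbook formulation, not the SCENIC tool's code: each geometry-matrix row
    is the negated unit line-of-sight vector plus a clock-bias column of 1.
    """
    rows = []
    for sat in sat_positions:
        los = np.asarray(sat, float) - np.asarray(receiver, float)
        u = los / np.linalg.norm(los)            # unit line-of-sight vector
        rows.append([-u[0], -u[1], -u[2], 1.0])  # geometry-matrix row
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)                   # covariance shape matrix
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    tdop = np.sqrt(Q[3, 3])
    return gdop, pdop, tdop
```

By construction GDOP² = PDOP² + TDOP², which makes a convenient sanity check when validating against recorded GPS DOP data as the abstract describes.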
Visual Impairment Screening Assessment (VISA) tool: pilot validation.
Rowe, Fiona J; Hepworth, Lauren R; Hanna, Kerry L; Howard, Claire
2018-03-06
To report and evaluate a new Vision Impairment Screening Assessment (VISA) tool intended for use by the stroke team to improve identification of visual impairment in stroke survivors. Prospective case cohort comparative study. Stroke units at two secondary care hospitals and one tertiary centre. 116 stroke survivors were screened, 62 by naïve and 54 by non-naïve screeners. Both the VISA screening tool and the comprehensive specialist vision assessment measured case history, visual acuity, eye alignment, eye movements, visual field and visual inattention. Full completion of VISA tool and specialist vision assessment was achieved for 89 stroke survivors. Missing data for one or more sections typically related to patient's inability to complete the assessment. Sensitivity and specificity of the VISA screening tool were 90.24% and 85.29%, respectively; the positive and negative predictive values were 93.67% and 78.36%, respectively. Overall agreement was significant; k=0.736. Lowest agreement was found for screening of eye movement and visual inattention deficits. This early validation of the VISA screening tool shows promise in improving detection accuracy for clinicians involved in stroke care who are not specialists in vision problems and lack formal eye training, with potential to lead to more prompt referral with fewer false positives and negatives. Pilot validation indicates acceptability of the VISA tool for screening of visual impairment in stroke survivors. Sensitivity and specificity were high indicating the potential accuracy of the VISA tool for screening purposes. Results of this study have guided the revision of the VISA screening tool ahead of full clinical validation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Precision tool holder with flexure-adjustable, three degrees of freedom for a four-axis lathe
Bono, Matthew J [Pleasanton, CA; Hibbard, Robin L [Livermore, CA
2008-03-04
A precision tool holder for precisely positioning a single point cutting tool on a four-axis lathe, such that the center of the radius of the tool nose is aligned with the B-axis of the machine tool, so as to facilitate the machining of precision meso-scale components with complex three-dimensional shapes with sub-µm accuracy. The device is designed to fit on a commercial diamond turning machine and can adjust the cutting tool position in three orthogonal directions with sub-micrometer resolution. In particular, the tool holder adjusts the tool position using three flexure-based mechanisms, with two flexure mechanisms adjusting the lateral position of the tool to align the tool with the B-axis, and a third flexure mechanism adjusting the height of the tool. Preferably, the flexures are driven by manual micrometer adjusters. In this manner, the tool holder simplifies the process of setting a tool with sub-µm accuracy, substantially reducing the time required to set the tool.
Plazzotta, Fernando; Otero, Carlos; Luna, Daniel; de Quiros, Fernan Gonzalez Bernaldo
2013-01-01
Physicians do not always keep the problem list accurate, complete, and updated. To analyze natural language processing (NLP) techniques and inference rules as strategies to maintain completeness and accuracy of the problem list in EHRs. Non-systematic literature review in PubMed, covering the last 10 years. Strategies to maintain the EHR problem list were analyzed in two directions: inputting problems into, and removing problems from, the problem list. NLP and inference rules have acceptable performance for inputting problems into the problem list. No studies using these techniques for removing problems have been published. Conclusion: Both tools, NLP and inference rules, have had acceptable results for maintaining the completeness and accuracy of the problem list.
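An inference rule of the kind the review describes maps evidence already in the EHR (e.g., a medication or a lab result) to a candidate problem-list entry. A minimal sketch follows; the triggers and mappings below are invented illustrations, not a validated clinical rule set.

```python
# Each rule maps evidence found elsewhere in the EHR to a candidate problem.
# Triggers and problems are illustrative examples only, not clinical guidance.
INFERENCE_RULES = [
    {"trigger": ("medication", "metformin"), "problem": "diabetes mellitus"},
    {"trigger": ("medication", "levothyroxine"), "problem": "hypothyroidism"},
    {"trigger": ("lab", "hba1c>=6.5%"), "problem": "diabetes mellitus"},
]

def suggest_problems(ehr_facts, problem_list):
    """Return problems implied by EHR facts but missing from the problem list."""
    current = {p.lower() for p in problem_list}
    suggestions = []
    for rule in INFERENCE_RULES:
        if rule["trigger"] in ehr_facts and rule["problem"] not in current:
            if rule["problem"] not in suggestions:
                suggestions.append(rule["problem"])
    return suggestions
```

In practice such suggestions would be surfaced to the physician for confirmation rather than written to the problem list automatically.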
Hanmer, Lyn; Nicola, Edward; Bradshaw, Debbie
2017-01-01
The quality of morbidity data in multiple routine inpatient records in a sample of South African hospitals is being assessed in terms of data accuracy and completeness. Extensive modification of available data collection tools was required to make it possible to collect the required data for the study.
Depression Case Finding in Individuals with Dementia: A Systematic Review and Meta-Analysis.
Goodarzi, Zahra S; Mele, Bria S; Roberts, Derek J; Holroyd-Leduc, Jayna
2017-05-01
To compare the diagnostic accuracy of depression case finding tools with a criterion standard in the outpatient setting among adults with dementia. Systematic review and meta-analysis. Studies of older outpatients with dementia. Elderly outpatients (clinic and long-term care) with dementia (N = 3,035). Prevalence of major depression and diagnostic accuracy measures including sensitivity, specificity, and likelihood ratios. From the 11,539 citations, 20 studies were included for qualitative synthesis and 15 for a meta-analysis. Tools included were the Montgomery Åsberg Depression Rating Scale, Cornell Scale for Depression in Dementia (CSDD), Geriatric Depression Scale (GDS), Center for Epidemiologic Studies Depression Scale (CES-D), Hamilton Depression Rating Scale (HDRS), Single Question, Nijmegen Observer-Rated Depression Scale, and Even Briefer Assessment Scale-Depression. The pooled prevalence of depression in individuals with dementia was 30.3% (95% CI = 22.1-38.5). The average age was 75.2 (95% CI = 71.7-78.7), and mean Mini-Mental State Examination scores ranged from 11.2 to 24. The diagnostic accuracy of the individual tools was pooled for the best-reported cutoffs and for each cutoff, if available. The CSDD had a sensitivity of 0.84 (95% CI = 0.73-0.91) and a specificity of 0.80 (95% CI = 0.65-0.90), the 30-item GDS (GDS-30) had a sensitivity of 0.62 (95% CI = 0.45-0.76) and a specificity 0.81 (95% CI = 0.75-0.85), and the HDRS had a sensitivity of 0.86 (95% CI = 0.63-0.96) and a specificity of 0.84 (95% CI = 0.76-0.90). Summary statistics for all tools across best-reported cutoffs had significant heterogeneity. There are many validated tools for the detection of depression in individuals with dementia. Tools that incorporate a physician interview with patient and collateral histories, the CSDD and HDRS, have higher sensitivities, which would ensure fewer false-negatives. 
© 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan
2016-11-15
Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
Screening for Speech and Language Delay in Children 5 Years Old and Younger: A Systematic Review.
Wallace, Ina F; Berkman, Nancy D; Watson, Linda R; Coyne-Beasley, Tamera; Wood, Charles T; Cullen, Katherine; Lohr, Kathleen N
2015-08-01
No recommendation exists for or against routine use of brief, formal screening instruments in primary care to detect speech and language delay in children through 5 years of age. This review aimed to update the evidence on screening and treating children for speech and language since the 2006 US Preventive Services Task Force systematic review. Medline, the Cochrane Library, PsycInfo, Cumulative Index to Nursing and Allied Health Literature, ClinicalTrials.gov, and reference lists. We included studies reporting diagnostic accuracy of screening tools and randomized controlled trials reporting benefits and harms of treatment of speech and language. Two independent reviewers extracted data, checked accuracy, and assigned quality ratings using predefined criteria. We found no evidence for the impact of screening on speech and language outcomes. In 23 studies evaluating the accuracy of screening tools, sensitivity ranged between 50% and 94%, and specificity ranged between 45% and 96%. Interventions in 12 treatment studies improved various outcomes in language, articulation, and stuttering; little evidence emerged for interventions improving other outcomes or for adverse effects of treatment. Risk factors associated with speech and language delay were male gender, family history, and low parental education. A limitation of this review is the lack of well-designed, well-conducted studies addressing whether screening for speech and language delay or disorders improves outcomes. Several screening tools can accurately identify children for diagnostic evaluations and interventions, but evidence is inadequate regarding applicability in primary care settings. Some treatments for young children identified with speech and language delays and disorders may be effective. Copyright © 2015 by the American Academy of Pediatrics.
Mayoral, Víctor; Pérez-Hernández, Concepción; Muro, Inmaculada; Leal, Ana; Villoria, Jesús; Esquivias, Ana
2018-04-27
Based on the clear neuroanatomical delineation of many neuropathic pain (NP) symptoms, a simple tool for performing a short structured clinical encounter based on the IASP diagnostic criteria was developed to identify NP. This study evaluated its accuracy and usefulness. A case-control study was performed in 19 pain clinics within Spain. A pain clinician used the experimental screening tool (the index test, IT) to assign the descriptions of non-neuropathic (nNP), non-localized neuropathic (nLNP), and localized neuropathic (LNP) to the patients' pain conditions. The reference standard was a formal clinical diagnosis provided by another pain clinician. The accuracy of the IT was compared with that of the Douleur Neuropathique en 4 questions (DN4) and the Leeds Assessment of Neuropathic Signs and Symptoms (LANSS). Six hundred and sixty-six patients were analyzed. There was a good agreement between the IT and the reference standard (kappa = 0.722). The IT was accurate in distinguishing between LNP and nLNP (83.2% sensitivity, 88.2% specificity), between LNP and the other pain categories (nLNP + nNP) (80.0% sensitivity, 90.7% specificity), and between NP and nNP (95.5% sensitivity, 89.1% specificity). The accuracy in distinguishing between NP and nNP was comparable with that of the DN4 and the LANSS. The IT took a median of 10 min to complete. A novel instrument based on an operationalization of the IASP criteria can not only discern between LNP and nLNP, but also provide a high level of diagnostic certainty about the presence of NP after a short clinical encounter.
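Agreement statistics like the kappa = 0.722 reported above come from cross-tabulating the index test against the reference standard. A sketch of Cohen's kappa for paired categorical ratings follows; the example data in the test are invented, not the study's.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired categorical labels.

    kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement from the raters' marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa of 0 means chance-level agreement, 1 means perfect agreement; values above roughly 0.6 are conventionally read as good, consistent with the abstract's interpretation.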
Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias
2014-11-01
Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.
An update of the appraisal of the accuracy and utility of cervical discography in chronic neck pain.
Onyewu, Obi; Manchikanti, Laxmaiah; Falco, Frank J E; Singh, Vijay; Geffert, Stephanie; Helm, Standiford; Cohen, Steven P; Hirsch, Joshua A
2012-01-01
Chronic neck pain represents a significant public health problem. Despite high prevalence rates, there is a lack of consensus regarding the causes or treatments for this condition. Based on controlled evaluations, the cervical intervertebral discs, facet joints, and atlantoaxial joints have all been implicated as pain generators. Cervical provocation discography, which includes disc stimulation and morphological evaluation, is occasionally used to distinguish a painful disc from other potential sources of pain. Yet in the absence of validation and controlled outcome studies, the procedure remains mired in controversy. A systematic review of the diagnostic accuracy of cervical discography. To systematically evaluate and update the diagnostic accuracy of cervical discography. The available literature on cervical discography was reviewed. Methodological quality assessment of included studies was performed using Quality Appraisal of Reliability Studies (QAREL). Only diagnostic accuracy studies meeting at least 50% of the designated inclusion criteria were utilized for analysis. However, studies scoring less than 50% are presented descriptively and analyzed critically. The level of evidence was classified as good, fair, and limited or poor based on the quality of evidence developed by the U.S. Preventive Services Task Force (USPSTF). Data sources included relevant literature identified through searches of PubMed and EMBASE from 1966 to June 2012, and manual searches of the bibliographies of known primary and review articles. A total of 41 manuscripts were considered for accuracy and utility of cervical discography in chronic neck pain. There were 23 studies evaluating accuracy of discography. There were 3 studies meeting inclusion criteria for assessing the accuracy and prevalence of discography, with a prevalence of 16% to 53%.
Based on modified Agency for Healthcare Research and Quality (AHRQ) accuracy evaluation and United States Preventive Services Task Force (USPSTF) level of evidence criteria, this systematic review indicates the strength of evidence is limited for the diagnostic accuracy of cervical discography. Limitations include a paucity of literature, poor methodological quality, and very few studies performed utilizing International Association for the Study of Pain (IASP) criteria. There is limited evidence for the diagnostic accuracy of cervical discography. Nevertheless, in the absence of any other means to establish a relationship between pathology and symptoms, cervical provocation discography may be an important evaluation tool in certain contexts to identify a subset of patients with chronic neck pain secondary to intervertebral disc disorders. Based on the current systematic review, cervical provocation discography performed according to the IASP criteria with control disc(s), and a minimum provoked pain intensity of 7 of 10, or at least 70% reproduction of worst pain (i.e., for a worst spontaneous pain of 7, a provoked threshold of 7 × 70% = 4.9, approximately 5), may be a useful tool for evaluating chronic pain and cervical disc abnormalities in a small proportion of patients.
State of Jet Noise Prediction-NASA Perspective
NASA Technical Reports Server (NTRS)
Bridges, James E.
2008-01-01
This presentation covers work primarily done under the Airport Noise Technical Challenge portion of the Supersonics Project in the Fundamental Aeronautics Program. To provide motivation and context, the presentation starts with a brief overview of the Airport Noise Technical Challenge. It then covers the state of NASA's jet noise prediction tools in empirical, RANS-based, and time-resolved categories. The empirical tools require seconds to provide a prediction of noise spectral directivity with an accuracy of a few dB, but only for axisymmetric configurations. The RANS-based tools are able to discern the impact of three-dimensional features, but are currently deficient in predicting noise from heated and high-speed jets, and require hours to produce their predictions. The time-resolved codes are capable of predicting resonances and other time-dependent phenomena, but are very immature, requiring months to deliver predictions of unknown accuracy and dependability. In toto, however, considering the progress being made, aeroacoustic prediction tools appear poised to approach the level of sophistication and accuracy of aerodynamic engineering tools.
Machine learning for the meta-analyses of microbial pathogens' volatile signatures.
Palma, Susana I C J; Traguedo, Ana P; Porteira, Ana R; Frias, Maria J; Gamboa, Hugo; Roque, Ana C A
2018-02-20
Non-invasive and fast diagnostic tools based on volatolomics hold great promise in the control of infectious diseases. However, the tools to identify microbial volatile organic compounds (VOCs) discriminating between human pathogens are still missing. Artificial intelligence is increasingly recognised as an essential tool in health sciences. Machine learning algorithms based on support vector machines and feature selection tools were applied here to find sets of microbial VOCs with pathogen-discrimination power. Studies reporting VOCs emitted by human microbial pathogens published between 1977 and 2016 were used as source data. A set of 18 VOCs is sufficient to predict the identity of 11 microbial pathogens with high accuracy (77%) and precision (62-100%). There is one set of VOCs associated with each of the 11 pathogens which can predict the presence of that pathogen in a sample with high accuracy and precision (86-90%). The implemented pathogen classification methodology supports future database updates to include new pathogen-VOC data, which will enrich the classifiers. The sets of VOCs identified potentiate the improvement of the selectivity of non-invasive infection diagnostics using artificial olfaction devices.
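The abstract describes SVM classifiers with feature selection over VOC profiles. As a much-simplified illustration of that pipeline shape (select discriminative features, then classify), the sketch below uses presence/absence VOC vectors, a between-class mean-difference score in place of the paper's feature-selection tools, and a nearest-centroid rule in place of an SVM; the pathogen names and data in the test are invented.

```python
def select_vocs(samples, labels, k):
    """Rank VOC features (columns) by between-class mean difference; keep top k.

    A stand-in for the paper's feature-selection step, not its actual method.
    """
    classes = sorted(set(labels))
    scores = []
    for j in range(len(samples[0])):
        means = [sum(s[j] for s, l in zip(samples, labels) if l == c) /
                 labels.count(c) for c in classes]
        scores.append((max(means) - min(means), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def centroids(samples, labels, feats):
    """Per-class mean vector over the selected features."""
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append([s[j] for j in feats])
    return {l: [sum(col) / len(col) for col in zip(*rows)]
            for l, rows in by_class.items()}

def predict(cents, feats, sample):
    """Nearest-centroid prediction by squared Euclidean distance."""
    v = [sample[j] for j in feats]
    return min(cents, key=lambda l: sum((a - b) ** 2
                                        for a, b in zip(cents[l], v)))
```

A real SVM would replace the nearest-centroid step, but the select-then-classify structure is the same.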
NASA Astrophysics Data System (ADS)
Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on 2mm grade vs. Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, significantly greater among radiologists than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if performed by multiple radiologist raters.
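Absolute quantitative accuracy of the kind advocated above reduces to the translational and angular deviation between planned and achieved screw trajectories. A geometric sketch (not the study's software), assuming each screw is represented by entry and tip coordinates in a common frame:

```python
import math

def screw_error(planned_entry, planned_tip, actual_entry, actual_tip):
    """Absolute translational error at the entry point and angular error
    (degrees) between planned and achieved screw axes."""
    trans = math.dist(planned_entry, actual_entry)
    v1 = [b - a for a, b in zip(planned_entry, planned_tip)]
    v2 = [b - a for a, b in zip(actual_entry, actual_tip)]
    dot = sum(x * y for x, y in zip(v1, v2))
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return trans, ang
```

Reporting both components separately matters because, as the review notes, surgeons may trade translational error against medialization angle.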
Cardiovascular risk prediction tools for populations in Asia.
Barzi, F; Patel, A; Gu, D; Sritara, P; Lam, T H; Rodgers, A; Woodward, M
2007-02-01
Cardiovascular risk equations are traditionally derived from the Framingham Study. The accuracy of this approach in Asian populations, where resources for risk factor measurement may be limited, is unclear. To compare "low-information" equations (derived using only age, systolic blood pressure, total cholesterol and smoking status) derived from the Framingham Study with those derived from the Asian cohorts, on the accuracy of cardiovascular risk prediction. Separate equations to predict the 8-year risk of a cardiovascular event were derived from Asian and Framingham cohorts. The performance of these equations, and a subsequently "recalibrated" Framingham equation, were evaluated among participants from independent Chinese cohorts. Six cohort studies from Japan, Korea and Singapore (Asian cohorts); six cohort studies from China; the Framingham Study from the US. 172,077 participants from the Asian cohorts; 25,682 participants from Chinese cohorts and 6053 participants from the Framingham Study. In the Chinese cohorts, 542 cardiovascular events occurred during 8 years of follow-up. Both the Asian cohorts and the Framingham equations discriminated cardiovascular risk well in the Chinese cohorts; the area under the receiver-operator characteristic curve was at least 0.75 for men and women. However, the Framingham risk equation systematically overestimated risk in the Chinese cohorts by an average of 276% among men and 102% among women. The corresponding average overestimation using the Asian cohorts equation was 11% and 10%, respectively. Recalibrating the Framingham risk equation using cardiovascular disease incidence from the non-Chinese Asian cohorts led to an overestimation of risk by an average of 4% in women and underestimation of risk by an average of 2% in men. 
A low-information Framingham cardiovascular risk prediction tool, when recalibrated with contemporary data, is likely to estimate future cardiovascular risk in Asian populations with accuracy similar to that of tools developed from data on local cohorts.
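Recalibration of the kind described, adjusting Framingham predictions so that average predicted risk matches locally observed incidence, can be sketched with a simple observed/expected ratio adjustment. This is an illustrative simplification: the study's recalibration used local incidence and risk-factor data rather than rescaling individual predictions this way.

```python
def recalibrate(predicted_risks, observed_incidence):
    """Rescale predicted risks so their mean matches the observed event rate.

    A simple observed/expected ratio recalibration, capped at 1.0; shown as
    an illustration of the idea, not the study's exact method.
    """
    expected = sum(predicted_risks) / len(predicted_risks)
    ratio = observed_incidence / expected
    return [min(1.0, r * ratio) for r in predicted_risks]
```

A tool that overestimates risk by ~100-300%, as the unrecalibrated Framingham equation did in the Chinese cohorts, still discriminates well (ranking is preserved) but needs this kind of scaling before its absolute risks can guide treatment thresholds.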
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk
Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-sight techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy producing facilities, (ii) taking into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.
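One ingredient of such line-of-sight assessments is the visual angle an object subtends at the observer, which shrinks nonlinearly with distance. A minimal sketch of that single quantity follows; the r.wind.sun module itself computes far more, including shape distortion and terrain obscuring.

```python
import math

def subtended_angle_deg(object_height_m, distance_m):
    """Vertical visual angle (degrees) subtended by an object at a distance.

    Assumes an unobstructed view at the object's mid-height; a full visual
    impact model would also account for observer elevation and terrain.
    """
    return math.degrees(2 * math.atan(object_height_m / (2 * distance_m)))
```

For example, a 100 m turbine viewed from 1 km subtends roughly 5.7°, and doubling the distance roughly halves the angle, which is why perceived impact falls off quickly with siting distance.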
FNA diagnostic value in patients with neck masses in two teaching hospitals in Iran.
Saatian, Minoo; Badie, Banafsheh Moradmand; Shahriari, Sogol; Fattahi, Fahimeh; Rasoolinejad, Mehrnaz
2011-01-01
The FNA (fine needle aspiration) procedure is a simple, inexpensive, available, and safe method for the diagnosis of a neck mass. FNA has numerous advantages over open surgical biopsies as an initial diagnostic tool; therefore we decided to compare the accuracy of this method with open biopsy. This retrospective, descriptive study compared preoperative FNA results with existing data in the Pathology Department in Bu-Ali and Amir Alam Hospitals. Our study included 100 patients with neck masses, of which 22 were thyroid masses, 31 were salivary gland masses, and 47 were other masses. Age ranged from 3 years to 80 years with a mean age of 42.6 years. There were 59 men and 41 women. Sensitivity was 72%, specificity 87%, PPV 85%, NPV 75%, and diagnostic accuracy 79%. There were also 26% false-negative and 15% false-positive results. FNA is a valuable diagnostic tool in the management of neck masses; it has also been used for staging and planning of treatment for wide and metastatic malignancy. This technique reduces the need for more invasive and costly procedures. Given the high sensitivity and accuracy observed in this study, FNA can be used as a first-step diagnostic test for neck masses.
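The reported figures all derive from the 2×2 cross-tabulation of FNA results against the pathology reference standard. A sketch follows, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table:
    tp/fp/fn/tn = true/false positives and negatives vs. the reference."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common disease is in the studied population, so they transfer less directly to other settings.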
SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Wu, G; Gao, Y
2016-06-15
Purpose: To explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and ¹⁸F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape’s “complexity score”. This complexity score was calculated for standard geometric shapes and MC-generated target volumes in PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference in volumes between the MC volumes and gradient-based volumes was 11% (0.4%–48.4%). The complexity score showed strong correlation with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volumes were 6% and 29%, respectively. The former group is more isolated and the latter is more surrounded by tissues with relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
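The "complexity score" described above, the standard deviation of a curvature vector along the outline, can be sketched for a polygonal outline using the turning angle at each vertex as a discrete curvature. The exact formulation in the study's program is not specified, so this discretization is an assumption:

```python
import math

def complexity_score(outline):
    """Std dev of turning angles along a closed 2-D outline of (x, y) points.

    A discrete stand-in for the study's curvature-vector metric: a regular
    (circle-like) outline scores near 0, an irregular outline scores higher.
    """
    n = len(outline)
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = (outline[i - 1], outline[i],
                                        outline[(i + 1) % n])
        h1 = math.atan2(y1 - y0, x1 - x0)            # incoming heading
        h2 = math.atan2(y2 - y1, x2 - x1)            # outgoing heading
        turn = math.atan2(math.sin(h2 - h1), math.cos(h2 - h1))  # wrap angle
        angles.append(turn)
    mean = sum(angles) / n
    return math.sqrt(sum((a - mean) ** 2 for a in angles) / n)
```

A square (all turns equal) scores zero, while an outline with concavities, where the turning angle changes sign, scores higher.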
Sawchuk, Dena; Currie, Kris; Vich, Manuel Lagravere; Palomo, Juan Martin
2016-01-01
Objective To evaluate the accuracy and reliability of the diagnostic tools available for assessing maxillary transverse deficiencies. Methods An electronic search of three databases was performed from their date of establishment to April 2015, with manual searching of reference lists of relevant articles. Articles were considered for inclusion if they reported the accuracy or reliability of a diagnostic method or evaluation technique for maxillary transverse dimensions in mixed or permanent dentitions. Risk of bias was assessed in the included articles using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Results Nine articles were selected. The studies were heterogeneous, with moderate to low methodological quality, and all had a high risk of bias. Four suggested that the use of arch width prediction indices with dental cast measurements is unreliable for use in diagnosis. Frontal cephalograms derived from cone-beam computed tomography (CBCT) images were reportedly more reliable for assessing intermaxillary transverse discrepancies than posteroanterior cephalograms. Two studies proposed new three-dimensional transverse analyses with CBCT images that were reportedly reliable, but have not been validated for clinical sensitivity or specificity. No studies reported sensitivity, specificity, positive or negative predictive values, likelihood ratios, or receiver operating characteristic (ROC) curves of the methods for the diagnosis of transverse deficiencies. Conclusions Current evidence does not enable solid conclusions to be drawn, owing to a lack of reliable high quality diagnostic studies evaluating maxillary transverse deficiencies. CBCT images are reportedly more reliable for diagnosis, but further validation is required to confirm CBCT's accuracy and diagnostic superiority. PMID:27668196
Precision, accuracy, and efficiency of four tools for measuring soil bulk density or strength.
Richard E. Miller; John Hazard; Steven Howes
2001-01-01
Monitoring soil compaction is time consuming. A desire for speed and lower costs, however, must be balanced with the appropriate precision and accuracy required of the monitoring task. We compared three core samplers and a cone penetrometer for measuring soil compaction after clearcut harvest on a stone-free and a stony soil. Precision (i.e., consistency) of each tool...
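The core-sampler measurement the study compares reduces to a simple quotient: oven-dry soil mass over the cylindrical core volume. A minimal sketch of that standard formula (function and parameter names are ours, not from the study):

```python
import math

def bulk_density(dry_mass_g, core_diameter_cm, core_length_cm):
    """Soil bulk density (g/cm^3) from a cylindrical core sample:
    oven-dry mass divided by the core's cylinder volume."""
    volume_cm3 = math.pi * (core_diameter_cm / 2.0) ** 2 * core_length_cm
    return dry_mass_g / volume_cm3
```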
Leurs, G; O'Connell, C P; Andreotti, S; Rutzen, M; Vonk Noordegraaf, H
2015-06-01
This study employed a non-lethal measurement tool, which combined an existing photo-identification technique with a surface, parallel laser photogrammetry technique, to accurately estimate the size of free-ranging white sharks Carcharodon carcharias. Findings confirmed the hypothesis that surface laser photogrammetry is more accurate than crew-based estimations that utilized a shark cage of known size as a reference tool. Furthermore, field implementation also revealed that the photographer's angle of reference and the shark's body curvature could greatly influence technique accuracy, exposing two limitations. The findings showed minor inconsistencies with previous studies that examined pre-caudal to total length ratios of dead specimens. This study suggests that surface laser photogrammetry can successfully increase length estimation accuracy and illustrates the potential utility of this technique for growth and stock assessments on free-ranging marine organisms, which will lead to an improvement of the adaptive management of the species. © 2015 The Fisheries Society of the British Isles.
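The parallel-laser photogrammetry described above works by projecting two laser dots a known distance apart onto the animal; their pixel separation fixes the image scale. A minimal sketch of that scale conversion (our illustration, with hypothetical names and values, not the study's measurement software):

```python
def photogrammetric_length(feature_px, laser_dot_px, laser_spacing_cm):
    """Estimate a real-world length from an image: two parallel lasers mounted a
    known distance apart project dots whose pixel separation sets the scale."""
    cm_per_pixel = laser_spacing_cm / laser_dot_px
    return feature_px * cm_per_pixel
```

For example, if lasers mounted 30 cm apart appear 60 px apart, a 1200 px body span corresponds to 6 m. The accuracy caveats noted in the abstract (camera angle, body curvature) enter because the scale is only valid where the dots land on a surface parallel to the image plane.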
New tools for evaluating LQAS survey designs
Hund, Lauren
2014-02-15
Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the 'grey region' are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions. PMID:24528928
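The positive/negative predictive value calculation described above can be sketched for a discrete prior on coverage. This is our minimal reconstruction under assumptions (the decision rule "classify an area as adequate if at least d of n sampled subjects are covered", and a tabulated prior), not the paper's published software:

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def lqas_ppv_npv(n, d, p_hi, p_lo, prior):
    """PPV/NPV of the LQAS rule 'classify an area as adequate if at least d of n
    sampled subjects are covered', under a discrete prior [(coverage, weight), ...].
    Coverage strictly inside the grey region (p_lo, p_hi) counts as neither
    truly high nor truly low, so a wider grey region drags both values down."""
    accept = sum(w * binom_sf(d, n, p) for p, w in prior)
    true_high = sum(w * binom_sf(d, n, p) for p, w in prior if p >= p_hi)
    reject = sum(w * (1 - binom_sf(d, n, p)) for p, w in prior)
    true_low = sum(w * (1 - binom_sf(d, n, p)) for p, w in prior if p <= p_lo)
    return true_high / accept, true_low / reject
```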
Simulation of radiation damping in rings, using stepwise ray-tracing methods
Meot, F.
2015-06-26
The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, had been introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at the Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.
Hehmke, Bernd; Berg, Sabine; Salzsieder, Eckhard
2017-05-01
Continuous standardized verification of the accuracy of blood glucose meter systems for self-monitoring after their introduction into the market is an important clinical tool to assure reliable performance of subsequently released lots of strips. Moreover, such published verification studies permit comparison of different blood glucose monitoring systems and, thus, are increasingly involved in the process of evidence-based purchase decision making.
Marker Configuration Model-Based Roentgen Fluoroscopic Analysis.
Garling, Eric H; Kaptein, Bart L; Geleijns, Koos; Nelissen, Rob G H H; Valstar, Edward R
2005-04-01
It remains unknown if and how the polyethylene bearing in mobile bearing knees moves during dynamic activities with respect to the tibial base plate. Marker Configuration Model-Based Roentgen Fluoroscopic Analysis (MCM-based RFA) uses a marker configuration model of inserted tantalum markers in order to accurately estimate the pose of an implant or bone using single plane Roentgen images or fluoroscopic images. The goal of this study is to assess the accuracy of MCM-based RFA in a standard fluoroscopic set-up using phantom experiments and to determine the error propagation with computer simulations. The experimental set-up of the phantom study was calibrated using a calibration box equipped with 600 tantalum markers, which corrected for image distortion and determined the focus position. In the computer simulation study the influence of image distortion, MC-model accuracy, focus position, the relative distance between MC-models and MC-model configuration on the accuracy of MCM-Based RFA were assessed. The phantom study established that the in-plane accuracy of MCM-Based RFA is 0.1 mm and the out-of-plane accuracy is 0.9 mm. The rotational accuracy is 0.1 degrees. A ninth-order polynomial model was used to correct for image distortion. Marker-Based RFA was estimated to have, in a worst case scenario, an in vivo translational accuracy of 0.14 mm (x-axis), 0.17 mm (y-axis), 1.9 mm (z-axis), respectively, and a rotational accuracy of 0.3 degrees. When using fluoroscopy to study kinematics, image distortion and the accuracy of models are important factors, which influence the accuracy of the measurements. MCM-Based RFA has the potential to be an accurate, clinically useful tool for studying kinematics after total joint replacement using standard equipment.
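At its core, estimating a pose from a marker configuration is a rigid point-set registration problem. The 2-D least-squares (Kabsch/SVD) solution below is our illustration of the underlying geometry only, not the MCM-based RFA algorithm itself, which additionally models projection and image distortion:

```python
import numpy as np

def rigid_pose(model, image):
    """Least-squares rigid pose (rotation R, translation t) mapping a 2-D marker
    configuration onto measured image points, via the Kabsch/SVD method."""
    A = np.asarray(model, dtype=float)
    B = np.asarray(image, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```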
Accuracy of ECG interpretation in competitive athletes: the impact of using standardised ECG criteria.
Drezner, Jonathan A; Asif, Irfan M; Owens, David S; Prutkin, Jordan M; Salerno, Jack C; Fean, Robyn; Rao, Ashwin L; Stout, Karen; Harmon, Kimberly G
2012-04-01
Interpretation of ECGs in athletes is complicated by physiological changes related to training. The purpose of this study was to determine the accuracy of ECG interpretation in athletes among different physician specialties, with and without use of a standardised ECG criteria tool. Physicians were asked to interpret 40 ECGs (28 normal ECGs from college athletes randomised with 12 abnormal ECGs from individuals with known cardiovascular pathology) and classify each ECG as (1) 'normal or variant--no further evaluation and testing needed' or (2) 'abnormal--further evaluation and testing needed.' After reading the ECGs, participants received a two-page ECG criteria tool to guide interpretation of the ECGs again. A total of 60 physicians participated: 22 primary care (PC) residents, 16 PC attending physicians, 12 sports medicine (SM) physicians and 10 cardiologists. At baseline, the total number of ECGs correctly interpreted was PC residents 73%, PC attendings 73%, SM physicians 78% and cardiologists 85%. With use of the ECG criteria tool, all physician groups significantly improved their accuracy (p<0.0001): PC residents 92%, PC attendings 90%, SM physicians 91% and cardiologists 96%. With use of the ECG criteria tool, specificity improved from 70% to 91%, sensitivity improved from 89% to 94% and there was no difference comparing cardiologists versus all other physicians (p=0.053). Providing standardised criteria to assist ECG interpretation in athletes significantly improves the ability to accurately distinguish normal from abnormal findings across physician specialties, even in physicians with little or no experience.
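The sensitivity and specificity figures above follow directly from 2x2 classification counts over the 12 abnormal and 28 normal ECGs. A minimal helper (ours, with hypothetical example counts, not the study's analysis code):

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from 2x2 counts
    (tp/fn over the truly abnormal cases, tn/fp over the truly normal ones)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```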
Richardson, Philip; Greenslade, Jaimi; Shanmugathasan, Sulochana; Doucet, Katherine; Widdicombe, Neil; Chu, Kevin; Brown, Anthony
2015-01-01
CARING is a screening tool developed to identify patients who have a high likelihood of death in 1 year. This study sought to validate a modified CARING tool (termed PREDICT) using a population of patients presenting to the Emergency Department. In total, 1000 patients aged over 55 years who were admitted to hospital via the Emergency Department between January and June 2009 were eligible for inclusion in this study. Data on the six prognostic indicators comprising PREDICT were obtained retrospectively from patient records. One-year mortality data were obtained from the State Death Registry. Weights were applied to each PREDICT criterion, and its final score ranged from 0 to 44. Receiver operator characteristic analyses and diagnostic accuracy statistics were used to assess the accuracy of PREDICT in identifying 1-year mortality. The sample comprised 976 patients with a median (interquartile range) age of 71 years (62-81 years) and a 1-year mortality of 23.4%. In total, 50% had ≥1 PREDICT criteria with a 1-year mortality of 40.4%. Receiver operator characteristic analysis gave an area under the curve of 0.86 (95% confidence interval: 0.83-0.89). Using a cut-off of 13 points, PREDICT had a 95.3% (95% confidence interval: 93.6-96.6) specificity and 53.9% (95% confidence interval: 47.5-60.3) sensitivity for predicting 1-year mortality. PREDICT was simpler than the CARING criteria and identified 158 patients per 1000 admitted who could benefit from advance care planning. PREDICT was successfully applied to the Australian healthcare system with findings similar to the original CARING study conducted in the United States. This tool could improve end-of-life care by identifying who should have advance care planning or an advance healthcare directive. © The Author(s) 2014.
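The area under the ROC curve reported for PREDICT has a useful Mann-Whitney interpretation: the probability that a randomly chosen patient who died within 1 year received a higher score than one who survived. A small sketch of that computation (our illustration with toy scores, not study data):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
    score pairs in which the positive case scores higher, counting ties as half."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```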
Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J
2013-09-01
To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMED, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.
Nutritional screening in hospitalized pediatric patients: a systematic review.
Teixeira, Adriana Fonseca; Viana, Kátia Danielle Araújo Lourenço
2016-01-01
This systematic review aimed to verify the available scientific evidence on the clinical performance and diagnostic accuracy of nutritional screening tools in hospitalized pediatric patients. A search was performed in the Medline (National Library of Medicine, United States), LILACS (Latin American and Caribbean Health Sciences), PubMed (US National Library of Medicine, National Institutes of Health), and SciELO (Scientific Electronic Library Online) databases through the CAPES portal (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), as well as the Scopus and Web of Science databases. The descriptors used, in accordance with the Health Sciences Descriptors (DeCS)/Medical Subject Headings (MeSH) list, were "malnutrition", "screening", and "pediatrics", as well as the equivalent words in Portuguese. The authors identified 270 articles published between 2004 and 2014. After applying the selection criteria, 35 were analyzed in full and eight articles were included in the systematic review. The methodological quality of the studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. Five pediatric nutritional screening tools were identified. Among these, the Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP) showed high sensitivity and almost perfect agreement both between raters and between the screening and the reference standard; the Screening Tool for Risk on Nutritional Status and Growth (STRONGkids) showed high sensitivity, lower specificity, substantial intra-rater agreement, and ease of use in clinical practice. The studies included in this systematic review showed good performance of the pediatric nutritional screening tools, especially STRONGkids and STAMP. The authors emphasize the need for more studies in this area. Only one tool has been translated and adapted for the Brazilian pediatric population, and it is essential to carry out studies of tool adaptation and validation for this population.
Copyright © 2016 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Senol Cali, Damla; Kim, Jeremie S; Ghose, Saugata; Alkan, Can; Mutlu, Onur
2018-04-02
Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. 
We conclude that our observations can guide researchers and practitioners in making conscious and effective choices for each step of the genome assembly pipeline using nanopore sequence data. Also, with the help of bottlenecks we have found, developers can improve the current tools or build new ones that are both accurate and fast, to overcome the high error rates of the nanopore sequencing technology.
Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel
NASA Astrophysics Data System (ADS)
Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie
2011-05-01
Mechanical grinding of artificial diamond grinding wheels is the traditional wheel dressing process, with the rotation speed and infeed depth of the tool wheel as the main process parameters. Suitable process parameters for high-accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through extensive experiments on a super-hard material wheel dressing and grinding machine, together with an analysis of grinding forces. At the same time, the effects of machine sharpening and sprinkle-granule sharpening were compared. These analyses and experiments provide practical guidance for high-accuracy crush dressing of artificial diamond grinding wheels.
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. 
Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
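The operator-overloading approach the study benchmarks can be illustrated with forward-mode dual numbers, where each arithmetic operation propagates a derivative alongside its value. This toy sketch is ours; the surveyed AD tools are far more sophisticated and, for adjoint generation, implement reverse mode:

```python
class Dual:
    """Forward-mode automatic differentiation via operator overloading:
    each number carries a value and its derivative with respect to the input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._lift(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """df/dx at x, obtained by seeding the dual part of the input with 1."""
    return f(Dual(x, 1.0)).der
```

A source-transformation tool instead rewrites the program text itself into derivative code, which is why it can be much faster but struggles with modern language features such as structures and pointers.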
Fidalgo, Bruno M R; Crabb, David P; Lawrenson, John G
2015-05-01
To evaluate methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. A systematic review of English language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality assessment tool for diagnostic accuracy studies (QUADAS) and evaluated for quality of reporting by applying the STARD checklist. Fifty-eight articles were appraised. Overall methodological quality of these studies was moderate with a median number of QUADAS items rated as 'yes' equal to nine (out of a maximum of 14) (IQR 7-10). The studies were often poorly reported; median score of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in 10-year periods before and after the publication of the STARD checklist in 2003 found quality of reporting had not substantially improved. Methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Lange, Berit; Cohn, Jennifer; Roberts, Teri; Camp, Johannes; Chauffour, Jeanne; Gummadi, Nina; Ishizaki, Azumi; Nagarathnam, Anupriya; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Easterbrook, Philippa; Denkinger, Claudia M
2017-11-01
Dried blood spots (DBS) are a convenient tool to enable diagnostic testing for viral diseases due to transport, handling and logistical advantages over conventional venous blood sampling. A better understanding of the performance of serological testing for hepatitis C (HCV) and hepatitis B virus (HBV) from DBS is important to enable more widespread use of this sampling approach in resource limited settings, and to inform the 2017 World Health Organization (WHO) guidance on testing for HBV/HCV. We conducted two systematic reviews and meta-analyses on the diagnostic accuracy of HCV antibody (HCV-Ab) and HBV surface antigen (HBsAg) from DBS samples compared to venous blood samples. MEDLINE, EMBASE, Global Health and Cochrane library were searched for studies that assessed diagnostic accuracy with DBS and agreement between DBS and venous sampling. Heterogeneity of results was assessed and where possible a pooled analysis of sensitivity and specificity was performed using a bivariate analysis with maximum likelihood estimate and 95% confidence intervals (95%CI). We conducted a narrative review on the impact of varying storage conditions or limits of detection in subsets of samples. The QUADAS-2 tool was used to assess risk of bias. For the diagnostic accuracy of HBsAg from DBS compared to venous blood, 19 studies were included in a quantitative meta-analysis, and 23 in a narrative review. Pooled sensitivity and specificity were 98% (95%CI:95%-99%) and 100% (95%CI:99-100%), respectively. For the diagnostic accuracy of HCV-Ab from DBS, 19 studies were included in a pooled quantitative meta-analysis, and 23 studies were included in a narrative review. Pooled estimates of sensitivity and specificity were 98% (CI95%:95-99) and 99% (CI95%:98-100), respectively. Overall quality of studies and heterogeneity were rated as moderate in both systematic reviews. HCV-Ab and HBsAg testing using DBS compared to venous blood sampling was associated with excellent diagnostic accuracy. 
However, generalizability is limited as no uniform protocol was applied and most studies did not use fresh samples. Future studies on diagnostic accuracy should include an assessment of impact of environmental conditions common in low resource field settings. Manufacturers also need to formally validate their assays for DBS for use with their commercial assays.
Cheng, Juan-Juan; Zhao, Shi-Di; Gao, Ming-Zhu; Huang, Hong-Yu; Gu, Bing; Ma, Ping; Chen, Yan; Wang, Jun-Hong; Yang, Cheng-Jian; Yan, Zi-He
2015-01-01
Background Previous studies have reported that natriuretic peptides in the blood and pleural fluid (PF) are effective diagnostic markers for heart failure (HF). These natriuretic peptides include N-terminal pro-brain natriuretic peptide (NT-proBNP), brain natriuretic peptide (BNP), and midregion pro-atrial natriuretic peptide (MR-proANP). This systematic review and meta-analysis evaluates the diagnostic accuracy of blood and PF natriuretic peptides for HF in patients with pleural effusion. Methods PubMed and EMBASE databases were searched to identify articles published in English that investigated the diagnostic accuracy of BNP, NT-proBNP, and MR-proANP for HF. The last search was performed on 9 October 2014. The quality of the eligible studies was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies tool. The diagnostic performance characteristics (sensitivity, specificity, and other measures of accuracy) were pooled and examined using a bivariate model. Results In total, 14 studies were included in the meta-analysis, including 12 studies reporting the diagnostic accuracy of PF NT-proBNP and 4 studies evaluating blood NT-proBNP. The summary estimates of PF NT-proBNP for HF had a diagnostic sensitivity of 0.94 (95% confidence interval [CI]: 0.90–0.96), specificity of 0.91 (95% CI: 0.86–0.95), positive likelihood ratio of 10.9 (95% CI: 6.4–18.6), negative likelihood ratio of 0.07 (95% CI: 0.04–0.12), and diagnostic odds ratio of 157 (95% CI: 57–430). The overall sensitivity of blood NT-proBNP for diagnosis of HF was 0.92 (95% CI: 0.86–0.95), with a specificity of 0.88 (95% CI: 0.77–0.94), positive likelihood ratio of 7.8 (95% CI: 3.7–16.3), negative likelihood ratio of 0.10 (95% CI: 0.06–0.16), and diagnostic odds ratio of 81 (95% CI: 27–241). The diagnostic accuracy of PF MR-proANP and blood and PF BNP was not analyzed due to the small number of related studies. 
Conclusions BNP, NT-proBNP, and MR-proANP, either in blood or PF, are effective tools for diagnosis of HF. Additional studies are needed to rigorously evaluate the diagnostic accuracy of PF and blood MR-proANP and BNP for the diagnosis of HF. PMID:26244664
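The summary measures quoted above are linked by standard identities: LR+ = sensitivity/(1 − specificity), LR− = (1 − sensitivity)/specificity, and DOR = LR+/LR−. A quick sketch of these relations (note that plugging in the pooled point estimates does not exactly reproduce the bivariate-model values reported above, which are estimated jointly):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    derived from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor
```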
A visual analytics approach for pattern-recognition in patient-generated data.
Feller, Daniel J; Burgermaster, Marissa; Levine, Matthew E; Smaldone, Arlene; Davidson, Patricia G; Albers, David J; Mamykina, Lena
2018-06-13
To develop and test a visual analytics tool to help clinicians identify systematic and clinically meaningful patterns in patient-generated data (PGD) while decreasing perceived information overload. Participatory design was used to develop Glucolyzer, an interactive tool featuring hierarchical clustering and a heatmap visualization to help registered dietitians (RDs) identify associative patterns between blood glucose levels and per-meal macronutrient composition for individuals with type 2 diabetes (T2DM). Ten RDs participated in a within-subjects experiment to compare Glucolyzer to a static logbook format. For each representation, participants had 25 minutes to examine 1 month of diabetes self-monitoring data captured by an individual with T2DM and identify clinically meaningful patterns. We compared the quality and accuracy of the observations generated using each representation. Participants generated 50% more observations when using Glucolyzer (98) than when using the logbook format (64) without any loss in accuracy (69% accuracy vs 62%, respectively, p = .17). Participants identified more observations that included ingredients other than carbohydrates using Glucolyzer (36% vs 16%, p = .027). Fewer RDs reported feelings of information overload using Glucolyzer compared to the logbook format. Study participants displayed variable acceptance of hierarchical clustering. Visual analytics have the potential to mitigate provider concerns about the volume of self-monitoring data. Glucolyzer helped dietitians identify meaningful patterns in self-monitoring data without incurring perceived information overload. Future studies should assess whether similar tools can support clinicians in personalizing behavioral interventions that improve patient outcomes.
Esmaily, Habibollah; Tayefi, Maryam; Doosti, Hassan; Ghayour-Mobarhan, Majid; Nezami, Hossein; Amirabadizadeh, Alireza
2018-04-24
We aimed to identify the associated risk factors of type 2 diabetes mellitus (T2DM) using a data mining approach, with decision tree and random forest techniques, using the Mashhad Stroke and Heart Atherosclerotic Disorders (MASHAD) Study program. A cross-sectional study. The MASHAD study started in 2010 and will continue until 2020. Two data mining tools, namely decision trees and random forests, are used to predict T2DM from other observed characteristics of 9528 subjects recruited from the MASHAD database. This paper makes a comparison between these two models in terms of accuracy, sensitivity, specificity, and the area under the ROC curve. The prevalence rate of T2DM was 14% among these subjects. The decision tree model has 64.9% accuracy, 64.5% sensitivity, 66.8% specificity, and an area under the ROC curve of 68.6%, while the random forest model has 71.1% accuracy, 71.3% sensitivity, 69.9% specificity, and an area under the ROC curve of 77.3%. The random forest model, when used with demographic, clinical, anthropometric, and biochemical measurements, can provide a simple tool to identify associated risk factors for type 2 diabetes. Such identification can be of substantial use in managing health policy to reduce the number of subjects with T2DM.
The microcomputer scientific software series 4: testing prediction accuracy.
H. Michael Rauscher
1986-01-01
A computer program, ATEST, is described in this combination user's guide / programmer's manual. ATEST provides users with an efficient and convenient tool to test the accuracy of predictors. As input ATEST requires observed-predicted data pairs. The output reports the two components of accuracy, bias and precision.
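The two accuracy components ATEST reports can be computed from observed-predicted pairs as the mean error (bias) and the spread of the errors (precision). A minimal sketch assuming that decomposition; the manual may define the statistics somewhat differently:

```python
import math

def bias_and_precision(observed, predicted):
    """Decompose predictor accuracy: bias = mean error (predicted - observed),
    precision = sample standard deviation of the errors around the bias."""
    errors = [p - o for o, p in zip(observed, predicted)]
    n = len(errors)
    bias = sum(errors) / n
    precision = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
    return bias, precision
```

A predictor that always overestimates by the same amount has nonzero bias but perfect precision, which is exactly the distinction the program is meant to surface.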
An evaluation of the accuracy and speed of metagenome analysis tools
Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.
2016-01-01
Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510
Harris, Adam; Harries, Priscilla
2016-01-01
Background Prognostic accuracy in palliative care is valued by patients, carers, and healthcare professionals. Previous reviews suggest clinicians are inaccurate at survival estimates, but have only reported the accuracy of estimates on patients with a cancer diagnosis. Objectives To examine the accuracy of clinicians’ estimates of survival and to determine if any clinical profession is better at doing so than another. Data Sources MEDLINE, Embase, CINAHL, and the Cochrane Database of Systematic Reviews and Trials. All databases were searched from the start of the database up to June 2015. Reference lists of eligible articles were also checked. Eligibility Criteria Inclusion criteria: patients over 18, palliative population and setting, quantifiable estimate based on real patients, full publication written in English. Exclusion criteria: the estimate followed an intervention, such as surgery, or the patient was artificially ventilated or in intensive care. Study Appraisal and Synthesis Methods A quality assessment was completed with the QUIPS tool. Data on the reported accuracy of estimates and information about the clinicians were extracted. Studies were grouped by type of estimate: categorical (the clinician had a predetermined list of outcomes to choose from), continuous (open-ended estimate), or probabilistic (likelihood of surviving a particular time frame). Results 4,642 records were identified; 42 studies fully met the review criteria. Wide variation was shown for categorical estimates (range 23% to 78%), and continuous estimates ranged from an underestimate of 86 days to an overestimate of 93 days. The four papers which used probabilistic estimates tended to show greater accuracy (c-statistics of 0.74–0.78). Information available about the clinicians providing the estimates was limited. Overall, there was no clear “expert” subgroup of clinicians identified.
Limitations High heterogeneity limited the analyses possible and prevented an overall accuracy from being reported. Data were extracted using a standardised tool by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified. Conclusion Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians’ predictions are frequently inaccurate. No sub-group of clinicians was consistently shown to be more accurate than any other. Implications of Key Findings Further research is needed to understand how clinical predictions are formulated and how their accuracy can be improved. PMID:27560380
Atropos: specific, sensitive, and speedy trimming of sequencing reads.
Didion, John P; Martin, Marcel; Collins, Francis S
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos make it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
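The core 3' adapter trimming task can be illustrated with a toy exact-match trimmer. Real tools such as Atropos additionally handle mismatches, indels, and base qualities; the function below is a simplified sketch of the general idea, not Atropos's algorithm.

```python
# Toy 3' adapter trimmer: removes a full adapter occurrence, or a
# partial adapter (adapter prefix) hanging off the end of the read.
def trim_adapter(read: str, adapter: str, min_overlap: int = 3) -> str:
    # Full adapter occurrence anywhere in the read.
    idx = read.find(adapter)
    if idx != -1:
        return read[:idx]
    # Partial adapter: a prefix of the adapter at the very end of the read.
    for k in range(min(len(adapter), len(read)) - 1, min_overlap - 1, -1):
        if read.endswith(adapter[:k]):
            return read[:-k]
    return read

print(trim_adapter("ACGTACGTAGATCGGAAG", "AGATCGGAAGAGC"))  # ACGTACGT
```

The `min_overlap` guard avoids trimming on very short chance matches, a trade-off every real trimmer also has to make.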
Corralejo, Rebeca; Nicolás-Alonso, Luis F; Alvarez, Daniel; Hornero, Roberto
2014-10-01
The present study aims to develop and assess an assistive tool for operating electronic devices at home by means of a P300-based brain-computer interface (BCI). Fifteen severely impaired subjects participated in the study. The developed tool allows users to interact with their usual environment, fulfilling their main needs. It allows users to navigate through ten menus and to manage up to 113 control commands from eight electronic devices. Ten out of the fifteen subjects were able to operate the proposed tool with accuracy above 77%. Eight of them reached accuracies higher than 95%. Moreover, bitrates up to 20.1 bit/min were achieved. The novelty of this study lies in the use of an environment control application in a real scenario: real devices managed by potential BCI end-users. Although impaired users might not be able to set up this system without the aid of others, this study takes a significant step toward evaluating the degree to which such populations could eventually operate a stand-alone system. Our results suggest that neither the type nor the degree of disability is an obstacle to suitably operating a P300-based BCI. Hence, it could be useful for assisting disabled people at home, improving their personal autonomy.
Scarponi, L.; Pedrali, S.; Pizzorni, N.; Pinotti, C.; Foieni, F.; Zuccotti, G.; Schindler, A.
2017-01-01
SUMMARY The large majority of the available dysphagia screening tools have been developed for the stroke population. Only a few screening tools are suitable for heterogeneous groups of patients admitted to a subacute care unit. The Royal Brisbane and Women's Hospital (RBWH) dysphagia screening tool is a nurse-administered, evidence-based swallow screening tool for generic acute hospital use that demonstrates excellent sensitivity and specificity. No Italian version of this tool is available to date. The aim of this study was to determine the reliability and screening accuracy of the Italian version of the RBWH (I-RBWH) dysphagia screening tool. A total of 105 patients consecutively admitted to a subacute care unit were enrolled. Using the I-RBWH tool, each patient was evaluated twice by trained nurses and once by a speech and language pathologist (SLP) blinded to the nurses' scores. The SLP also performed a standardised clinical assessment of swallowing using the Mann assessment of swallowing ability (MASA). During the first and second administrations of the I-RBWH by nurses, 28 and 27 patients, respectively, were considered at risk of dysphagia, and 27 were considered at risk after SLP assessment. Intra- and inter-rater reliability was satisfactory. Comparison between nurse I-RBWH scores and the MASA examination demonstrated a sensitivity and specificity of the I-RBWH dysphagia screening tool of up to 93% and 96%, respectively; the positive and negative predictive values were 90% and 97%, respectively. Thus, the current findings support the reliability and accuracy of the I-RBWH tool for dysphagia screening of patients in subacute settings. Its application in clinical practice is recommended. PMID:28374867
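The four accuracy figures quoted above all derive from a 2x2 screening table; a minimal sketch with made-up counts (not the study's data) shows how they relate:

```python
# Screening accuracy from a 2x2 table:
# tp = screen-positive with dysphagia, fp = screen-positive without,
# fn = screen-negative with dysphagia, tn = screen-negative without.
def screen_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # cases caught
        "specificity": tn / (tn + fp),  # non-cases cleared
        "ppv": tp / (tp + fp),          # screen-positive truly at risk
        "npv": tn / (tn + fn),          # screen-negative truly safe
    }

m = screen_metrics(tp=25, fp=3, fn=2, tn=75)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common the condition is in the screened population, which is why they must be re-established when a tool is moved to a new setting such as subacute care.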
Yamada, Tomonori; Shimura, Takaya; Ebi, Masahide; Hirata, Yoshikazu; Nishiwaki, Hirotaka; Mizushima, Takashi; Asukai, Koki; Togawa, Shozo; Takahashi, Satoru; Joh, Takashi
2015-01-01
Our recent prospective study found equivalent accuracy of magnifying chromoendoscopy (MC) and endoscopic ultrasonography (EUS) for diagnosing the invasion depth of colorectal cancer (CRC); however, whether these tools show diagnostic differences across categories such as tumor size and morphology remains unclear. Hence, we conducted a detailed subset analysis of the prospective data. In this multicenter, prospective, comparative trial, a total of 70 patients with early, flat CRC were enrolled from February 2011 to December 2012, and the results of 66 lesions were finally analyzed. Patients were randomly allocated to primary MC followed by EUS or to primary EUS followed by MC. Diagnoses of invasion depth by each tool were divided into intramucosal to slight submucosal invasion (invasion depth <1000 μm) and deep submucosal invasion (invasion depth ≥1000 μm), and then compared with the final pathological diagnosis by an independent pathologist blinded to clinical data. To standardize diagnoses among examiners, the trial was started only after a mean κ value of ≥0.6 had been achieved, calculated as the average of the κ values between each pair of participating endoscopists. Both MC and EUS showed similar diagnostic outcomes, with no significant differences in prediction of invasion depth in subset analyses according to tumor size, location, and morphology. Lesions that were consistently diagnosed as Tis/T1-SMS or ≥T1-SMD with both tools showed an accuracy of 76-78%. Accuracy was low in borderline lesions with an irregular pit pattern on MC or distorted findings of the third layer on EUS (MC, 58.5%; EUS, 50.0%). MC and EUS showed the same limited accuracy for predicting invasion depth in all categories of early CRC. Since an irregular pit pattern on MC, distorted findings of the third layer on EUS, and inconsistent diagnoses between the two tools were associated with low accuracy, further refinements or even novel methods are still needed for such lesions.
University hospital Medical Information Network Clinical Trials Registry UMIN 000005085.
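The pairwise κ agreement used above to standardize examiners before the trial can be computed as follows (ratings are hypothetical, for illustration only):

```python
# Cohen's kappa: chance-corrected agreement between two raters making
# dichotomous invasion-depth calls ("shallow" vs "deep").
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n   # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / n**2     # chance agreement
    return (observed - expected) / (1 - expected)

rater_a = ["shallow", "deep", "shallow", "shallow", "deep", "deep"]
rater_b = ["shallow", "deep", "shallow", "deep", "deep", "deep"]
kappa = cohens_kappa(rater_a, rater_b)
```

A κ of 0 means agreement no better than chance and 1 means perfect agreement; the trial's ≥0.6 threshold is a conventional bar for "substantial" agreement.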
NASA Astrophysics Data System (ADS)
André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.
Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize reporting of such findings using well-defined descriptors and to improve the accuracy and reproducibility of interpretation of breast ultrasound by radiologists. This study examined several factors that may impact the accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable over several operating conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan
There exist hundreds of building energy software tools, both web- and disk-based. These tools exhibit considerable range in approach and creativity, with some being highly specialized and others able to consider the building as a whole. However, users are faced with a dizzying array of choices and, often, conflicting results. The fragmentation of development and deployment efforts has hampered tool quality and market penetration. The purpose of this review is to provide information for defining the desired characteristics of residential energy tools, and to encourage future tool development that improves on current practice. This project entails (1) creating a framework for describing possible technical and functional characteristics of such tools, (2) mapping existing tools onto this framework, (3) exploring issues of tool accuracy, and (4) identifying "best practice" and strategic opportunities for tool design. We evaluated 50 web-based residential calculators, 21 of which we regard as "whole-house" tools (i.e., covering a range of end uses). Of the whole-house tools, 13 provide open-ended energy calculations, 5 normalize the results to actual costs (a.k.a. "bill-disaggregation tools"), and 3 provide both options. Across the whole-house tools, we found a range of 5 to 58 house-descriptive features (out of 68 identified in our framework) and 2 to 41 analytical and decision-support features (55 possible). We also evaluated 15 disk-based residential calculators, six of which are whole-house tools. Of these tools, 11 provide open-ended calculations, 1 normalizes the results to actual costs, and 3 provide both options. These tools offered ranges of 18 to 58 technical features (70 possible) and 10 to 40 user- and decision-support features (56 possible). The comparison shows that such tools can employ many approaches and levels of detail.
Some tools require a relatively small number of well-considered inputs while others ask a myriad of questions and still miss key issues. The value of detail has a lot to do with the type of question(s) being asked by the user (e.g., the availability of dozens of miscellaneous appliances is immaterial for a user attempting to evaluate the potential for space-heating savings by installing a new furnace). More detail does not, according to our evaluation, automatically translate into a "better" or "more accurate" tool. Efforts to quantify and compare the "accuracy" of these tools are difficult at best, and prior tool-comparison studies have not undertaken this in a meaningful way. The ability to evaluate accuracy is inherently limited by the availability of measured data. Furthermore, certain tool outputs can only be measured against "actual" values that are themselves calculated (e.g., HVAC sizing), while others are rarely if ever available (e.g., measured energy use or savings for specific measures). It is similarly challenging to understand the sources of inaccuracies. There are many ways in which quantitative errors can occur in tools, ranging from programming errors to problems inherent in a tool's design. Due to hidden assumptions and non-variable "defaults", most tools cannot be fully tested across the desirable range of building configurations, operating conditions, weather locations, etc. Many factors conspire to confound performance comparisons among tools. Differences in inputs can range from weather city, to types of HVAC systems, to appliance characteristics, to occupant-driven effects such as thermostat management. Differences in results would thus no doubt emerge from an extensive comparative exercise, but the sources or implications of these differences for the purposes of accuracy evaluation or tool development would remain largely unidentifiable (especially given the paucity of technical documentation available for most tools).
For the tools that we tested, the predicted energy bills for a single test building ranged widely (by nearly a factor of three), and far more so at the end-use level. Most tools over-predicted energy bills and all over-predicted consumption. Variability was lower among disk-based tools, but they more significantly over-predicted actual use. The deviations (over-predictions) we observed from actual bills corresponded to up to $1400 per year (approx. 250 percent of the actual bills). For bill-disaggregation tools, wherein the results are forced to equal actual bills, the accuracy issue shifts to whether or not the total is properly attributed to the various end uses and to whether savings calculations are done accurately (a challenge that demands relatively rare end-use data). Here, too, we observed a number of dubious results. Energy savings estimates automatically generated by the web-based tools varied from $46/year (5 percent of predicted use) to $625/year (52 percent of predicted use).
Radiological interpretation of images displayed on tablet computers: a systematic review.
Caffery, L J; Armfield, N R; Smith, A C
2015-06-01
To review the published evidence and to determine whether radiological diagnostic accuracy is compromised when images are displayed on a tablet computer, and thereby inform practice on the use of tablet computers for radiological interpretation by on-call radiologists. We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using the Quality Appraisal of Diagnostic Reliability Studies tool or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Eleven studies met the inclusion criteria; 10 of these tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84-98%), specificity (74-100%) and accuracy rates (98-100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was a near-complete consensus among authors on the non-inferiority of the diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. The iPad may be appropriate for an on-call radiologist to use for radiological interpretation.
Falls risk assessment outcomes and factors associated with falls for older Indigenous Australians.
Hill, Keith D; Flicker, Leon; LoGiudice, Dina; Smith, Kate; Atkinson, David; Hyde, Zoë; Fenner, Stephen; Skeaf, Linda; Malay, Roslyn; Boyle, Eileen
2016-12-01
To describe the prevalence of falls and associated risk factors in older Indigenous Australians, and compare the accuracy of validated falls risk screening and assessment tools in this population in classifying fall status. Cross-sectional study of 289 Indigenous Australians aged ≥45 years from the Kimberley region of Western Australia who had a detailed assessment including self-reported falls in the past year (n=289), the adapted Elderly Falls Screening Tool (EFST; n=255), and the Falls Risk for Older People-Community (FROP-Com) screening tool (3 items, n=74) and FROP-Com falls assessment tool (n=74). 32% of participants had ≥1 fall in the preceding year, and 37.3% were classified high falls risk using the EFST (cut-off ≥2). In contrast, for the 74 participants assessed with the FROP-Com, only 14.9% were rated high risk, 35.8% moderate risk, and 49.3% low risk. The FROP-Com screen and assessment tools had the highest classification accuracy for identifying fallers in the preceding year (area under curve >0.85), with sensitivity/specificity highest for the FROP-Com assessment (cut-off ≥12), sensitivity=0.84 and specificity=0.73. Falls are common in older Indigenous Australians. The FROP-Com falls risk assessment tool appears useful in this population, and this research suggests changes that may improve its utility further. © 2016 Public Health Association of Australia.
Research on effect of rough surface on FMCW laser radar range accuracy
NASA Astrophysics Data System (ADS)
Tao, Huirong
2018-03-01
Large-scale measurement systems for non-cooperative targets based on frequency-modulated continuous-wave (FMCW) laser detection and ranging technology have broad application prospects, because measurement can be automated without cooperative targets. However, the complexity and diversity of the characteristics of the measured surface directly affect the measurement accuracy. First, a theoretical analysis of range accuracy for an FMCW laser radar was carried out, and the relationship between surface reflectivity and accuracy was obtained. Then, to verify the effect of surface reflectance on ranging accuracy, a standard tool ball and three standard roughness samples were measured at distances from 7 m to 24 m, and the measurement uncertainty of each target was obtained. The results show that measurement accuracy increases with surface reflectivity, with good agreement between the theoretical analysis and the measurements from rough surfaces. Moreover, when the laser spot diameter is smaller than the surface correlation length, a multi-point averaged measurement can reduce the measurement uncertainty. The experimental results show that this method is feasible.
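For orientation, the link between the measured beat frequency and range in a linear-chirp FMCW system can be written as follows (standard textbook notation, not taken from the paper):

```latex
R = \frac{c\, f_b}{2\gamma}, \qquad \gamma = \frac{B}{T}, \qquad \Delta R = \frac{c}{2B}
```

where $f_b$ is the beat frequency, $\gamma$ the chirp slope set by sweep bandwidth $B$ over sweep period $T$, $c$ the speed of light, and $\Delta R$ the ideal range resolution. Low reflectivity and surface roughness degrade the signal-to-noise ratio of the $f_b$ estimate, which is the mechanism by which they degrade range accuracy.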
NASA Astrophysics Data System (ADS)
Adi Aizudin Bin Radin Nasirudin, Radin; Meier, Reinhard; Ahari, Carmen; Sievert, Matti; Fiebich, Martin; Rummeny, Ernst J.; Noël, Peter B.
2011-03-01
Optical imaging (OI) is a relatively new method for detecting active inflammation of the hand joints of patients suffering from rheumatoid arthritis (RA). With the high number of people affected by this disease, especially in western countries, the availability of OI as an early diagnostic imaging method is clinically highly relevant. In this paper, we present a newly developed in-house OI analysis tool and a clinical evaluation study. Our analysis tool extends the capability of existing OI tools. We include many features in the tool, such as region-based image analysis, hyperperfusion curve analysis, and multi-modality image fusion, to aid clinicians in localizing and determining the intensity of inflammation in joints. Additionally, image data management options, such as full PACS/RIS integration, are included. In our clinical study we demonstrate how OI facilitates the detection of active inflammation in rheumatoid arthritis. The preliminary clinical results indicate a sensitivity of 43.5%, a specificity of 80.3%, an accuracy of 65.7%, a positive predictive value of 76.6%, and a negative predictive value of 64.9% relative to clinical results from MRI. The accuracy of inflammation detection serves as evidence of the potential of OI as a useful imaging modality for early detection of active inflammation in patients with rheumatoid arthritis. With our in-house developed tool we extend the usefulness of OI imaging in the clinical arena. Overall, we show that OI is a fast, inexpensive, non-invasive and nonionizing yet highly sensitive and accurate imaging modality.
Outcome Measures in Spinal Cord Injury
Alexander, Marcalee S.; Anderson, Kim; Biering-Sorensen, Fin; Blight, Andrew R.; Brannon, Ruth; Bryce, Thomas; Creasey, Graham; Catz, Amiram; Curt, Armin; Donovan, William; Ditunno, John; Ellaway, Peter; Finnerup, Nanna B.; Graves, Daniel E.; Haynes, Beth Ann; Heinemann, Allen W.; Jackson, Amie B.; Johnston, Mark; Kalpakjian, Claire Z.; Kleitman, Naomi; Krassioukov, Andrei; Krogh, Klaus; Lammertse, Daniel; Magasi, Susan; Mulcahey, MJ; Schurch, Brigitte; Sherwood, Arthur; Steeves, John D.; Stiens, Steven; Tulsky, David S.; van Hedel, Hubertus J.A.; Whiteneck, Gale
2009-01-01
Study Design: Review by the Spinal Cord Outcomes Partnership Endeavor (SCOPE), which is a broad-based international consortium of scientists and clinical researchers representing academic institutions, industry, government agencies, not-for-profit organizations and foundations. Objectives: Assessment of current and evolving tools for evaluating human spinal cord injury (SCI) outcomes for both clinical diagnosis and clinical research studies. Methods: A framework for the appraisal of evidence of metric properties was used to examine outcome tools or tests for accuracy, sensitivity, reliability and validity for human SCI. Results: Imaging, neurological, functional, autonomic, sexual health, bladder/bowel, pain, and psycho-social tools were evaluated. Several specific tools for human SCI studies have been or are being developed to allow more accurate determination of whether a clinically meaningful benefit (improvement in functional outcome or quality of life) has been achieved as a result of a therapeutic intervention. Conclusion: Significant progress has been made, but further validation studies are required to identify the most appropriate tools for specific targets in a human SCI study or clinical trial. PMID:19381157
Thermal imaging as a lie detection tool at airports.
Warmelink, Lara; Vrij, Aldert; Mann, Samantha; Leal, Sharon; Forrester, Dave; Fisher, Ronald P
2011-02-01
We tested the accuracy of thermal imaging as a lie detection tool in airport screening. Fifty-one passengers in an international airport departure hall told the truth or lied about their forthcoming trip in an interview. Their skin temperature was recorded via a thermal imaging camera. Liars' skin temperature rose significantly during the interview, whereas truth tellers' skin temperature remained constant. On the basis of these different patterns, 64% of truth tellers and 69% of liars were classified correctly. The interviewers made veracity judgements independently from the thermal recordings. The interviewers outperformed the thermal recordings and classified 72% of truth tellers and 77% of liars correctly. Accuracy rates based on the combination of thermal imaging scores and interviewers' judgements were the same as accuracy rates based on interviewers' judgements alone. Implications of the findings for the suitability of thermal imaging as a lie detection tool in airports are discussed.
Assessment of dysglycemia risk in the Kitikmeot region of Nunavut: using the CANRISK tool
Ying, Jiang; Susan, Rogers Van Katwyk; Yang, Mao; Heather, Orpana; Gina, Agarwal; Margaret, de Groh; Monique, Skinner; Robyn, Clarke
2017-01-01
Abstract Introduction: The Public Health Agency of Canada adapted a Finnish diabetes screening tool (FINDRISC) to create a tool (CANRISK) tailored to Canada’s multi-ethnic population. CANRISK was developed using data collected in seven Canadian provinces. In an effort to extend the applicability of CANRISK to northern territorial populations, we completed a study with the mainly Inuit population in the Kitikmeot region of Nunavut. Methods: We obtained CANRISK questionnaires, physical measures and blood samples from participants in five Nunavut communities in Kitikmeot. We used logistic regression to test model fit using the original CANRISK risk factors for dysglycemia (prediabetes and diabetes). Dysglycemia was assessed using fasting plasma glucose (FPG) alone and/or an oral glucose tolerance test. We generated participants’ CANRISK scores to test the functioning of this tool in the Inuit population. Results: A total of 303 individuals participated in the study. Half were aged less than 45 years, two-thirds were female and 84% were Inuit. A total of 18% had prediabetes, and an additional 4% had undiagnosed diabetes. The odds of having dysglycemia rose exponentially with age, while the relationship with BMI was U-shaped. Compared with lab test results, using a cut-off point of 32, the CANRISK tool achieved a sensitivity of 61%, a specificity of 66%, a positive predictive value of 34% and an accuracy rate of 65%. Conclusion: The CANRISK tool achieved a similar accuracy in detecting dysglycemia in this mainly Inuit population as it did in a multi-ethnic sample of Canadians. We found the CANRISK tool to be adaptable to the Kitikmeot region, and more generally to Nunavut. PMID:28402800
Field evaluation of descent advisor trajectory prediction accuracy
DOT National Transportation Integrated Search
1996-07-01
The Descent Advisor (DA) automation tool has undergone a series of field tests : at the Denver Air Route Traffic Control Center to study the feasibility of : DA-based clearances and procedures. The latest evaluation, conducted in the : fall of 1995, ...
Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David
2017-03-01
The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools for rib fracture assessment in forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole-body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different levels of clinical and forensic experience, blinded to the autopsy results, evaluated all cases in a random order of modality and case. Each reader's rib fracture assessment was evaluated against both the autopsy and a CT consensus read serving as the radiologic reference. A detailed evaluation of the relevant test parameters revealed better agreement with the CT consensus read than with the autopsy. Modality C was significantly the quickest rib fracture detection modality, despite slightly reduced statistical test parameters compared with modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared with standard autopsy alone. The eagle tool, as an easy-to-use tool, is suited to initial rib fracture screening prior to autopsy and can therefore be beneficial for forensic pathologists.
Esser, Peter; Hartung, Tim J; Friedrich, Michael; Johansen, Christoffer; Wittchen, Hans-Ulrich; Faller, Hermann; Koch, Uwe; Härter, Martin; Keller, Monika; Schulz, Holger; Wegscheider, Karl; Weis, Joachim; Mehnert, Anja
2018-06-01
Anxiety in cancer patients may represent a normal psychological reaction. To detect patients with pathological levels, appropriate screeners with established cut-offs are needed. Given that previous research is sparse, we investigated the diagnostic accuracy of 2 frequently used screening tools in detecting generalized anxiety disorder (GAD). We used data from a multicenter study including 2141 cancer patients. Diagnostic accuracy was investigated for the Generalized Anxiety Disorder Screener (GAD-7) and the anxiety module of the Hospital Anxiety and Depression Scale (HADS-A). GAD, assessed with the Composite International Diagnostic Interview for Oncology, served as the reference standard. Overall accuracy was measured with the area under the receiver operating characteristic curve (AUC). The AUCs of the 2 screeners were statistically compared. We also calculated accuracy measures for selected cut-offs. Diagnostic accuracy could be interpreted as adequate for both screeners, with an identical AUC of .81 (95% CI: .79-.82). Consequently, the 2 screeners did not differ in their performance (P = .86). The best balance between sensitivity and specificity was found for cut-offs ≥7 (GAD-7) and ≥8 (HADS-A). The officially recommended thresholds for the GAD-7 (≥10) and the HADS-A (≥11) showed low sensitivities of 55% and 48%, respectively. The GAD-7 and HADS-A showed AUCs of adequate diagnostic accuracy and hence are applicable for GAD screening in cancer patients. Nevertheless, the choice of optimal cut-offs should be carefully evaluated. Copyright © 2018 John Wiley & Sons, Ltd.
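The "best balance between sensitivity and specificity" can be located on the ROC curve via Youden's J statistic. The sketch below uses synthetic screener scores rather than GAD-7 data, purely to show the mechanics:

```python
# Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1,
# i.e. the point on the ROC curve farthest above the diagonal.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(5, 2, 500),    # non-cases
                         rng.normal(10, 2, 100)])  # cases
labels = np.concatenate([np.zeros(500), np.ones(100)])

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                     # equals sensitivity + specificity - 1
best_cutoff = thresholds[np.argmax(j)]
```

Because the optimal cut-off depends on the score distributions in the screened population, a threshold validated in one group (e.g. primary care) can perform poorly in another (e.g. oncology), which is the paper's central caution.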
Geometric validation of a mobile laser scanning system for urban applications
NASA Astrophysics Data System (ADS)
Guan, Haiyan; Li, Jonathan; Yu, Yongtao; Liu, Yan
2016-03-01
Mobile laser scanning (MLS) technologies have been actively studied and implemented over the past decade, as their application fields are rapidly expanding and extending beyond conventional topographic mapping. Trimble's MX-8, one of the MLS systems on the current market, generates rich survey-grade laser and image data for urban surveying. The objective of this study is to evaluate whether Trimble MX-8 MLS data satisfy the accuracy requirements of urban surveying. According to the geo-referencing formula, the accuracies of the navigation solution and the laser scanner determine the accuracy of the collected LiDAR point clouds. Two test sites were selected to test the performance of the Trimble MX-8. These extensive tests confirm that the Trimble MX-8 offers a very promising tool for surveying complex urban areas.
Pontrelli, Giuseppe; De Crescenzo, Franco; Buzzetti, Roberto; Calò Carducci, Francesca; Jenkner, Alessandro; Amodio, Donato; De Luca, Maia; Chiurchiù, Sara; Davies, Elin Haf; Simonetti, Alessandra; Ferretti, Elena; Della Corte, Martina; Gramatica, Luca; Livadiotti, Susanna; Rossi, Paolo
2016-04-27
Differential diagnosis between sepsis and non-infectious inflammatory disorders demands improved biomarkers. Soluble Triggering Receptor Expressed on Myeloid cells-1 (sTREM-1) is an activating receptor whose role has been studied throughout the last decade. We performed a systematic review to evaluate the accuracy of plasma sTREM-1 levels in the diagnosis of sepsis in children with Systemic Inflammatory Response Syndrome (SIRS). A literature search of the PubMed, Cochrane Central Register of Controlled Trials, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and ISI Web of Knowledge databases was performed using specific search terms. Studies were included if they assessed the diagnostic accuracy of plasma sTREM-1 for sepsis in paediatric patients with SIRS. Data on sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve were extracted. The methodological quality of each study was assessed using a checklist based on the Quality Assessment Tool for Diagnostic Accuracy Studies. Nine studies comprising 961 patients were included: four in newborns, three in children, and two in children with febrile neutropenia. Some data from single studies support a role for sTREM-1 as a diagnostic tool in paediatric sepsis, but they cannot be considered conclusive, because heterogeneity in study designs made a quantitative synthesis impossible. This systematic review suggests that the available data are insufficient to support a role for sTREM-1 in the diagnosis and follow-up of paediatric sepsis.
Mycofier: a new machine learning-based classifier for fungal ITS sequences.
Delgado-Serrano, Luisa; Restrepo, Silvia; Bustos, Jose Ricardo; Zambrano, Maria Mercedes; Anzola, Juan Manuel
2016-08-11
The taxonomic and phylogenetic classification based on sequence analysis of the ITS1 genomic region has become a crucial component of fungal ecology and diversity studies. At present, there is no accurate alignment-free classification tool for fungal ITS1 sequences suited to large environmental surveys. This study describes the development of a machine learning-based classifier for the taxonomic assignment of fungal ITS1 sequences at the genus level. A fungal ITS1 sequence database was built using curated data, and training and test sets were generated from it. A Naïve Bayesian classifier was built using features from the primary sequence, achieving an accuracy of 87% in classification at the genus level. The final model was based on a Naïve Bayes algorithm using ITS1 sequences from 510 fungal genera. This classifier, denoted Mycofier, provides classification accuracy similar to that of BLASTN, but the database used for classification contains curated data, and the tool, being alignment-independent, is more efficient and contributes to the field, given the lack of an accurate classification tool for large fungal ITS1 datasets. The software and source code for Mycofier are freely available at https://github.com/ldelgado-serrano/mycofier.git .
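The abstract does not detail Mycofier's features beyond "features from the primary sequence"; a common alignment-free choice is k-mer counts, so the following is a hedged sketch of a multinomial Naïve Bayes over k-mers. The value k = 4, the Laplace smoothing, and the genus labels are all assumptions for illustration, not Mycofier's actual implementation:

```python
import math
from collections import Counter, defaultdict

def kmers(seq, k=4):
    """All overlapping substrings of length k in the sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

class KmerNaiveBayes:
    """Minimal multinomial Naive Bayes over k-mer counts."""
    def __init__(self, k=4):
        self.k = k
        self.counts = defaultdict(Counter)   # genus -> k-mer counts
        self.vocab = set()

    def fit(self, sequences, genera):
        for seq, genus in zip(sequences, genera):
            km = kmers(seq, self.k)
            self.counts[genus].update(km)
            self.vocab.update(km)

    def predict(self, seq):
        # Log-likelihood under each genus, with Laplace (add-one) smoothing
        def loglik(genus):
            total = sum(self.counts[genus].values()) + len(self.vocab)
            return sum(math.log((self.counts[genus][m] + 1) / total)
                       for m in kmers(seq, self.k))
        return max(self.counts, key=loglik)
```

The alignment-free property is visible here: classification cost is linear in sequence length, independent of database size per query, which is what makes such models attractive for large environmental surveys.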
Areeckal, A S; Jayasheelan, N; Kamath, J; Zawadynski, S; Kocher, M; David S, S
2018-03-01
We propose an automated low-cost tool for early diagnosis of the onset of osteoporosis using cortical radiogrammetry and cancellous texture analysis from hand and wrist radiographs. The trained classifier model achieves good accuracy in distinguishing healthy from low-bone-mass subjects. We propose a low-cost automated diagnostic tool for early diagnosis of reduction in bone mass using cortical radiogrammetry and cancellous texture analysis of hand and wrist radiographs. Reduction in bone mass can lead to osteoporosis, a disease increasingly observed at younger ages. Dual X-ray absorptiometry (DXA), currently used in clinical practice, is expensive and available only in urban areas in India. Therefore, there is a need for a low-cost diagnostic tool to facilitate large-scale screening for early diagnosis of osteoporosis at primary health centers. Cortical radiogrammetry of the third metacarpal bone shaft and cancellous texture analysis of the distal radius are used to detect low bone mass. Cortical bone indices and cancellous features using gray-level run length matrices and Laws' masks are extracted. A neural network classifier is trained on these features to classify healthy subjects and subjects with low bone mass. In our pilot study, the proposed segmentation method shows 89.9% and 93.5% accuracy in detecting the third metacarpal bone shaft and the distal radius ROI, respectively. The trained classifier shows a training accuracy of 94.3% and a test accuracy of 88.5%. An automated diagnostic technique for early diagnosis of the onset of osteoporosis has thus been developed using cortical radiogrammetric measurements and cancellous texture analysis of hand and wrist radiographs. The work shows that combining cortical and cancellous features improves diagnostic ability and offers a promising low-cost tool for early diagnosis of increased osteoporosis risk.
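Laws' masks, one of the texture feature families named in this abstract, are built as outer products of small 1-D kernels, and a "texture energy" is read off each filtered image. The sketch below shows the standard construction; the authors' exact masks, ROI handling, and normalization may differ:

```python
import numpy as np

# Laws' classic 1-D kernels: Level, Edge, Spot, Ripple
KERNELS = {
    "L5": np.array([1, 4, 6, 4, 1], float),
    "E5": np.array([-1, -2, 0, 2, 1], float),
    "S5": np.array([-1, 0, 2, 0, -1], float),
    "R5": np.array([1, -4, 6, -4, 1], float),
}

def _filter_valid(roi, mask):
    """Plain sliding-window correlation, 'valid' region only."""
    h, w = mask.shape
    out = np.empty((roi.shape[0] - h + 1, roi.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (roi[i:i + h, j:j + w] * mask).sum()
    return out

def laws_energy_features(roi):
    """Mean absolute response to each 5x5 Laws mask (outer product of
    two 1-D kernels) over a grayscale ROI; returns 16 texture energies."""
    feats = {}
    for n1, k1 in KERNELS.items():
        for n2, k2 in KERNELS.items():
            mask = np.outer(k1, k2)
            feats[n1 + n2] = np.abs(_filter_valid(roi, mask)).mean()
    return feats
```

On a perfectly uniform ROI every mask containing a zero-sum kernel (E5, S5, R5) responds with zero energy, which is what makes these features sensitive to trabecular texture rather than brightness.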
Kaneta, Tomohiro; Nakatsuka, Masahiro; Nakamura, Kei; Seki, Takashi; Yamaguchi, Satoshi; Tsuboi, Masahiro; Meguro, Kenichi
2016-01-01
SPECT is an important diagnostic tool for dementia, and statistical analysis of SPECT has recently become common in dementia research. In this study, we evaluated the accuracy of visual SPECT evaluation and/or statistical analysis for the diagnosis (Dx) of Alzheimer disease (AD) and other forms of dementia in our community-based study, "The Osaki-Tajiri Project." Eighty-nine consecutive outpatients with dementia were enrolled and underwent brain perfusion SPECT with 99mTc-ECD. Diagnostic accuracy of SPECT was tested using three methods: visual inspection (SPECT Dx), an automated diagnostic tool using statistical analysis with the easy Z-score imaging system (eZIS Dx), and visual inspection plus eZIS (integrated Dx). Integrated Dx showed the highest sensitivity, specificity, and accuracy, with eZIS the second most accurate method. We also observed a higher-than-expected rate of SPECT images indicating false-negative cases of AD. Among these, 50% showed hypofrontality and were diagnosed as frontotemporal lobar degeneration. These cases typically showed regional "hot spots" in the primary sensorimotor cortex (i.e., a sensorimotor hot spot sign), which we determined were associated with AD rather than frontotemporal lobar degeneration. We conclude that diagnostic ability was improved by the integrated use of visual assessment and statistical analysis. In addition, detection of a sensorimotor hot spot sign was useful for identifying AD when hypofrontality is present and improved the ability to properly diagnose AD.
Waller, Anna W; Lotton, Jennifer L; Gaur, Shashank; Andrade, Jeanette M; Andrade, Juan E
2018-06-21
In resource-limited settings, mass food fortification is a common strategy to ensure the population consumes appropriate quantities of essential micronutrients. Food and government organizations in these settings, however, lack tools to monitor the quality and compliance of fortified products and their efficacy in enhancing nutrient status. The World Health Organization has developed general guidelines known as ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and Robust, Equipment-free, and Deliverable to end-users) to aid the development of useful diagnostic tools for these settings. These guidelines assume performance aspects such as sufficient accuracy, reliability, and validity. The purpose of this systematic narrative review is to examine the micronutrient sensor literature for its adherence to the ASSURED criteria, along with accuracy, reliability, and validation, when developing micronutrient sensors for resource-limited settings. Keyword searches were conducted in three databases (Web of Science, PubMed, and Scopus) and were based on six inclusion criteria. A 16-question quality assessment tool was developed to determine adherence to quality and performance criteria. Of the 2,365 retrieved studies, 42 sensors were included based on the inclusion/exclusion criteria. Results showed that improvements to current sensor designs are necessary, especially in affordability, user-friendliness, robustness, equipment-free operation, and deliverability within the ASSURED criteria, and in the accuracy and validity of the additional criteria, for the sensors to be useful in resource-limited settings. Although it requires further validation, the 16-question quality assessment tool can be used as a guide in the development of sensors for resource-limited settings. © 2018 Institute of Food Technologists®.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required, and the user has to transfer data from one format to another. Most researchers perform these calculations manually using Microsoft Excel or other programs, a process that is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, BSM (bivariate statistical modeler), is proposed. Three popular BSA techniques, namely frequency ratio, weights-of-evidence, and evidential belief function models, are implemented in the newly proposed ArcMAP tool. The tool is programmed in Python and provides a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia was selected and all three models were tested using the proposed program. Area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required, and the user has to transfer data from one format to another. Most researchers perform these calculations manually using Microsoft Excel or other programs, a process that is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, the bivariate statistical modeler (BSM), is proposed. Three popular BSA techniques, namely frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are implemented in the newly proposed ArcMAP tool. The tool is programmed in Python and provides a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia was selected and all three models were tested using the proposed program. Area under the curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
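Of the three BSA techniques implemented by BSM, the frequency ratio is the simplest: for each class of a conditioning factor, the share of hazard pixels in that class divided by the share of all pixels in it. The sketch below shows the standard calculation on per-pixel data; it is a generic illustration, not BSM's source code:

```python
def frequency_ratio(class_ids, event):
    """Frequency ratio per factor class.

    class_ids: factor class of each pixel (e.g. a slope or land-use bin).
    event:     1/0 flag per pixel marking hazard occurrence.
    FR > 1 indicates a class positively associated with the hazard.
    """
    total = len(class_ids)
    n_event = sum(event)
    fr = {}
    for c in set(class_ids):
        in_class = [e for cid, e in zip(class_ids, event) if cid == c]
        pct_event = sum(in_class) / n_event   # share of hazard pixels here
        pct_area = len(in_class) / total      # share of study area here
        fr[c] = pct_event / pct_area
    return fr
```

Summing the FR values of a pixel's classes across all conditioning factors yields the susceptibility index that tools like BSM rasterize, which is the step the abstract describes automating.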
Validation of an explanatory tool for data-fused displays for high-technology future aircraft
NASA Astrophysics Data System (ADS)
Fletcher, Georgina C. L.; Shanks, Craig R.; Selcon, Stephen J.
1996-05-01
As the number of sensor and data sources in the military cockpit increases, pilots will suffer high levels of workload which could result in reduced performance and the loss of situational awareness. A DRA research program has been investigating the use of data-fused displays in decision support and has developed and laboratory-tested an explanatory tool for displaying information in air combat scenarios. The tool has been designed to provide pictorial explanations of data that maintain situational awareness by involving the pilot in the hostile aircraft threat assessment task. This paper reports a study carried out to validate the success of the explanatory tool in a realistic flight simulation facility. Aircrew were asked to perform a threat assessment task, either with or without the explanatory tool providing information in the form of missile launch success zone envelopes, while concurrently flying a waypoint course within set flight parameters. The results showed that there was a significant improvement (p < 0.01) in threat assessment accuracy of 30% when using the explanatory tool. This threat assessment performance advantage was achieved without a trade-off with flying task performance. Situational awareness measures showed no general differences between the explanatory and control conditions, but significant learning effects suggested that the explanatory tool makes the task initially more intuitive and hence less demanding on the pilots' attentional resources. The paper concludes that DRA's data-fused explanatory tool is successful at improving threat assessment accuracy in a realistic simulated flying environment, and briefly discusses the requirements for further research in the area.
Analysis of nutrition judgments using the Nutrition Facts Panel.
González-Vallejo, Claudia; Lavins, Bethany D; Carter, Kristina A
2016-10-01
Consumers' judgments and choices of the nutritional value of food products (cereals and snacks) were studied as a function of using information in the Nutrition Facts Panel (NFP, National Labeling and Education Act, 1990). Brunswik's lens model (Brunswik, 1955; Cooksey, 1996; Hammond, 1955; Stewart, 1988) served as the theoretical and analytical tool for examining the judgment process. Lens model analysis was further enriched with the criticality of predictors' technique developed by Azen, Budescu, & Reiser (2001). Judgment accuracy was defined as correspondence between consumers' judgments and the nutritional quality index, NuVal(®), obtained from an expert system. The study also examined several individual level variables (e.g., age, gender, BMI, educational level, health status, health beliefs, etc.) as predictors of lens model indices that measure judgment consistency, judgment accuracy, and knowledge of the environment. Results showed varying levels of consistency and accuracy depending on the food product, but generally the median values of the lens model statistics were moderate. Judgment consistency was higher for more educated individuals; judgment accuracy was predicted from a combination of person level characteristics, and individuals who reported having regular meals had models that were in greater agreement with the expert's model. Lens model methodology is a useful tool for understanding how individuals perceive the nutrition in foods based on the NFP label. Lens model judgment indices were generally low, highlighting that the benefits of the complex NFP label may be more modest than what has been previously assumed. Copyright © 2016 Elsevier Ltd. All rights reserved.
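The lens model indices referred to in this abstract come from regressing both the judgments and the criterion on the same cues and correlating the pieces. The following is a generic sketch of that decomposition, not the authors' analysis scripts; the index names follow common lens-model usage:

```python
import numpy as np

def lens_model_indices(cues, judgments, criterion):
    """Brunswik lens-model indices from two linear regressions.

    cues:      (n, k) array of cue values (e.g. NFP nutrient amounts).
    judgments: (n,) consumer nutrition ratings.
    criterion: (n,) expert index (e.g. NuVal scores).
    """
    X = np.column_stack([np.ones(len(judgments)), cues])
    bj, *_ = np.linalg.lstsq(X, judgments, rcond=None)  # judge's model
    bc, *_ = np.linalg.lstsq(X, criterion, rcond=None)  # environment model
    yhat_j, yhat_c = X @ bj, X @ bc
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return {
        "achievement_ra": corr(judgments, criterion),  # judgment accuracy
        "consistency_Rs": corr(judgments, yhat_j),     # judgment consistency
        "predictability_Re": corr(criterion, yhat_c),  # environment linearity
        "matching_G": corr(yhat_j, yhat_c),            # knowledge of environment
    }
```

These are exactly the quantities the study relates to person-level predictors: achievement for accuracy, R_s for consistency, and G for knowledge of the environment.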
Advanced Neutronics Tools for BWR Design Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Hfaiedh, N.; Letellier, R.
2006-07-01
This paper summarizes the developments implemented in the new APOLLO2.8 neutronics tool to meet the required target accuracy in LWR applications, particularly void effects and pin-by-pin power maps in BWRs. The Method of Characteristics was developed to allow efficient LWR assembly calculations in 2D exact heterogeneous geometry; resonant reaction calculation was improved by the optimized SHEM-281 group mesh, which avoids the resonance self-shielding approximation below 23 eV, and by a new space-dependent method for resonant mixtures that accounts for resonance overlapping. Furthermore, a new library, CEA2005, processed from JEFF3.1 evaluations with feedback from critical experiments and LWR P.I.E., is used. The specific '2005-2007 BWR Plan' established to demonstrate the validation/qualification of this neutronics tool is described. Some results from the validation process are presented: the comparison of APOLLO2.8 results to reference Monte Carlo TRIPOLI4 results on specific BWR benchmarks emphasizes the ability of the deterministic tool to calculate the BWR assembly multiplication factor within 200 pcm accuracy for void fractions varying from 0 to 100%. The qualification process against the BASALA mock-up experiment underscores APOLLO2.8/CEA2005 performance: pin-by-pin power is always predicted within 2% accuracy, and the reactivity worth of B4C or Hf cruciform control blades, as well as Gd pins, is predicted within 1.2% accuracy. (authors)
OM300 Direction Drilling Module
MacGugan, Doug
2013-08-22
OM300 Geothermal Directional Drilling Navigation Tool: design and produce a prototype directional drilling navigation tool capable of high-temperature operation in geothermal drilling. Target accuracies are 0.1° in inclination and tool face and 0.5° in azimuth, with environmental ruggedness typical of existing oil/gas drilling. The tool provides multiple selectable sensor ranges (high accuracy at low bandwidth for navigation; high g-range and bandwidth for stick-slip and chirp detection) and selectable serial data communications, with the goal of reducing the cost of drilling in high-temperature geothermal reservoirs. Innovative aspects of the project include Honeywell MEMS vibrating beam accelerometers (VBA), APS flux-gate magnetometers, Honeywell silicon-on-insulator (SOI) high-temperature electronics, and a rugged, high-temperature-capable package and assembly process.
Dias, Filipi Leles da Costa; Teixeira, Antônio Lúcio; Guimarães, Henrique Cerqueira; Barbosa, Maira Tonidandel; Resende, Elisa de Paula França; Beato, Rogério Gomes; Carmona, Karoline Carvalho; Caramelli, Paulo
2017-01-01
Late-life depression (LLD) is common but remains underdiagnosed. Validated screening tools for use with the oldest-old in clinical practice are still lacking, particularly in developing countries. Our aim was to evaluate the accuracy of a screening tool for LLD in a community-dwelling oldest-old sample. We evaluated 457 community-dwelling elderly subjects, aged ≥75 years and without dementia, with the Geriatric Depression Scale (GDS-15). Depression diagnosis was established according to DSM-IV criteria following a structured psychiatric interview with the Mini International Neuropsychiatric Interview (MINI). Fifty-two individuals (11.4%) were diagnosed with major depression. The area under the receiver operating characteristic (ROC) curve was 0.908 (p<0.001). Using a cut-off score of 5/6 (not depressed/depressed), 84 (18.4%) subjects were considered depressed by the GDS-15 (kappa coefficient = 53.8%, p<0.001). The 4/5 cut-off point achieved the best combination of sensitivity (86.5%) and specificity (82.7%) (Youden's index = 0.692), with a robust negative predictive value (0.980) and a reasonable positive predictive value (0.382). The GDS-15 showed good accuracy as a screening tool for major depression in this community-based sample of low-educated oldest-old individuals. Our findings support use of the 4/5 cut-off score, which showed the best diagnostic capacity.
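Youden's index, used above to select the 4/5 cut-off, is simply sensitivity + specificity - 1, maximized over candidate cut-offs. A minimal sketch on invented data, not the study's analysis:

```python
def best_cutoff(scores, depressed):
    """Return the cutoff maximizing Youden's J = sens + spec - 1,
    where score >= cutoff counts as screen-positive (GDS-15 style)."""
    p = sum(depressed)            # number of true cases
    n = len(depressed) - p        # number of non-cases
    best = None
    for cut in sorted(set(scores)):
        tp = sum(1 for s, d in zip(scores, depressed) if d and s >= cut)
        fp = sum(1 for s, d in zip(scores, depressed) if not d and s >= cut)
        j = tp / p + (n - fp) / n - 1
        if best is None or j > best[1]:
            best = (cut, j)
    return best  # (cutoff, Youden's J)
```

In the study's terms, a returned cutoff of 5 would correspond to the reported "4/5" notation (scores of 4 and below negative, 5 and above positive).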
Accurate perceptions do not need complete information to reflect reality.
Mousavi, Shabnam; Funder, David C
2017-01-01
Social reality of a group emerges from interpersonal perceptions and beliefs put to action under a host of environmental conditions. By extending the study of fast-and-frugal heuristics, we view social perceptions as judgment tools and assert that perceptions are ecologically rational to the degree that they adapt to the social reality. We maintain that the veracity of both stereotypes and base rates, as judgment tools, can be determined solely by accuracy research.
Skin Testing for Allergic Rhinitis: A Health Technology Assessment
Kabali, Conrad; Chan, Brian; Higgins, Caroline; Holubowich, Corinne
2016-01-01
Background Allergic rhinitis is the most common type of allergy worldwide. The accuracy of skin testing for allergic rhinitis is still debated. This health technology assessment had two objectives: to determine the diagnostic accuracy of skin-prick and intradermal testing in patients with suspected allergic rhinitis and to estimate the costs to the Ontario health system of skin testing for allergic rhinitis. Methods We searched All Ovid MEDLINE, Embase, and the Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, CRD Health Technology Assessment Database, Cochrane Central Register of Controlled Trials, and NHS Economic Evaluation Database for studies that evaluated the diagnostic accuracy of skin-prick and intradermal testing for allergic rhinitis using nasal provocation as the reference standard. For the clinical evidence review, data extraction and quality assessment were performed using the QUADAS-2 tool. We used the bivariate random-effects model for meta-analysis. For the economic evidence review, we assessed studies using a modified checklist developed by the (United Kingdom) National Institute for Health and Care Excellence. We estimated the annual cost of skin testing for allergic rhinitis in Ontario for 2015 to 2017 using provincial data on testing volumes and costs. Results We meta-analyzed seven studies with a total of 430 patients that assessed the accuracy of skin-prick testing. The pooled sensitivity and specificity for skin-prick testing were 85% and 77%, respectively. We did not perform a meta-analysis for the diagnostic accuracy of intradermal testing due to the small number of studies (n = 4). Of these, two evaluated the accuracy of intradermal testing in confirming negative skin-prick testing results, with sensitivity ranging from 27% to 50% and specificity ranging from 60% to 100%.
The other two studies evaluated the accuracy of intradermal testing as a stand-alone tool for diagnosing allergic rhinitis, with sensitivity ranging from 60% to 79% and specificity ranging from 68% to 69%. We estimated the budget impact of continuing to publicly fund skin testing for allergic rhinitis in Ontario to be between $2.5 million and $3.0 million per year. Conclusions Skin-prick testing is moderately accurate in identifying subjects with or without allergic rhinitis. The diagnostic accuracy of intradermal testing could not be well established from this review. Our best estimate is that publicly funding skin testing for allergic rhinitis costs the Ontario government approximately $2.5 million to $3.0 million per year. PMID:27279928
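The review above pooled accuracy with a bivariate random-effects model. As a much simpler illustration of the underlying idea (transform per-study proportions to logits, weight by inverse variance, back-transform), here is a univariate fixed-effect sketch; it is explicitly not the model the authors used:

```python
import math

def pooled_proportion(k_pos, n):
    """Inverse-variance pooling of per-study proportions (e.g.
    sensitivities) on the logit scale, fixed-effect, with a 0.5
    continuity correction. A teaching sketch only: the full bivariate
    random-effects model also models specificity and their correlation."""
    num = den = 0.0
    for x, m in zip(k_pos, n):
        x, m = x + 0.5, m + 1.0              # continuity correction
        p = x / m
        logit = math.log(p / (1 - p))
        var = 1 / x + 1 / (m - x)            # variance of the logit
        num += logit / var
        den += 1 / var
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))       # back to a proportion
```

Pooling on the logit scale keeps estimates inside (0, 1) and gives large, informative studies proportionally more weight, which is the motivation shared with the bivariate model.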
A novel technique for micro-hole forming on skull with the assistance of ultrasonic vibration.
Li, Zhe; Yang, Daoguo; Hao, Weidong; Wu, Tiecheng; Wu, Song; Li, Xiaoping
2016-04-01
Micro-hole opening on the skull is technically challenging and hard to realize by micro-drilling, as the low stiffness of the drill bit is a serious drawback. To deal with this problem, a novel ultrasonic-vibration-assisted micro-hole forming technique has been developed. Tip geometry and vibration amplitude are two key factors affecting the performance of this hole-forming technique. To investigate their effects, experiments were carried out with 300 μm diameter tools of three different tip geometries at three different vibration amplitudes. Hole-forming performance was evaluated by the required thrust force, dimensional accuracy, exit burr, and micro-structure of the bone tissue around the generated hole. Based on the findings of the current study, the 60° conically tipped tool helps generate a micro-hole of better quality at a smaller thrust force, and is more suitable for hole forming than the 120° conically tipped tool and the blunt-tipped tool. As for vibration amplitude, a larger amplitude forms a micro-hole of better quality and higher dimensional accuracy at a smaller thrust force. These findings lay a technical foundation for accurately generating a high-quality micro-hole in the skull, enabling minimally invasive insertion of a microelectrode into the brain for measuring neural activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Song, Jae W.; Kim, Hyungjin Myra; Bellfi, Lillian T.; Chung, Kevin C.
2010-01-01
Background All silicone breast implant recipients are recommended by the US Food and Drug Administration to undergo serial screening to detect implant rupture with magnetic resonance imaging (MRI). We performed a systematic review of the literature to assess the quality of diagnostic accuracy studies utilizing MRI or ultrasound to detect silicone breast implant rupture and conducted a meta-analysis to examine the effect of study design biases on the estimation of MRI diagnostic accuracy measures. Methods Studies investigating the diagnostic accuracy of MRI and ultrasound in evaluating ruptured silicone breast implants were identified using the MEDLINE, EMBASE, ISI Web of Science, and Cochrane Library databases. Two reviewers independently screened potential studies for inclusion and extracted data. Study design biases were assessed using the QUADAS tool and the STARD checklist. Meta-analyses estimated the influence of biases on diagnostic odds ratios. Results Among 1175 identified articles, 21 met the inclusion criteria. Most studies using MRI (n = 10 of 16) and ultrasound (n = 10 of 13) examined symptomatic subjects. Meta-analyses revealed that MRI studies evaluating symptomatic subjects had 14-fold higher diagnostic accuracy estimates compared with studies using an asymptomatic sample (RDOR 13.8; 95% CI 1.83–104.6) and 2-fold higher diagnostic accuracy estimates compared with studies using a screening sample (RDOR 1.89; 95% CI 0.05–75.7). Conclusion Many of the published studies utilizing MRI or ultrasound to detect silicone breast implant rupture are flawed by methodological biases. These shortcomings may result in overestimated MRI diagnostic accuracy measures and should be interpreted with caution when applying the data to a screening population. PMID:21364405
Research of a smart cutting tool based on MEMS strain gauge
NASA Astrophysics Data System (ADS)
Zhao, Y.; Zhao, Y. L.; Shao, YW; Hu, T. J.; Zhang, Q.; Ge, X. H.
2018-03-01
Cutting force is an important factor that affects machining accuracy, cutting vibration, and tool wear, and machining condition monitoring by cutting force measurement is a key technology for intelligent manufacturing. Current cutting force sensors suffer from large volume, complex structure, and poor compatibility in practical applications; to address these problems, a smart cutting tool for cutting force measurement is proposed in this paper. Commercial MEMS (Micro-Electro-Mechanical System) strain gauges with high sensitivity and small size are adopted as the transducing elements of the smart tool, and a structurally optimized cutting tool is fabricated for MEMS strain gauge bonding. Static calibration results show that the developed smart cutting tool is able to measure cutting forces in both the X and Y directions, with cross-interference error within 3%. Its overall accuracy is 3.35% and 3.27% in the X and Y directions, respectively, and its sensitivity is 0.1 mV/N, which is very suitable for measuring small cutting forces in high-speed precision machining. The smart cutting tool is portable and reliable for practical application in CNC machine tools.
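Given the reported 0.1 mV/N sensitivity and sub-3% cross-interference, recovering the two force components from the two bridge outputs can be illustrated with a 2x2 static calibration matrix. The off-diagonal values below are assumed for illustration only; the real matrix would be determined by the static calibration the abstract describes:

```python
import numpy as np

# Illustrative calibration: diagonal = the reported 0.1 mV/N sensitivity;
# off-diagonal terms stand in for cross-interference (assumed 3% here).
CAL_MV_PER_N = np.array([[0.100, 0.003],
                         [0.003, 0.100]])

def forces_from_voltages(v_mv):
    """Recover (Fx, Fy) in newtons from bridge outputs in millivolts
    by inverting the static calibration relation V = C @ F."""
    return np.linalg.solve(CAL_MV_PER_N, np.asarray(v_mv, dtype=float))
```

Solving the full 2x2 system, rather than dividing each channel by 0.1 mV/N, is what removes the cross-interference term from the recovered forces.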
Comparison of physical and semi-empirical hydraulic models for flood inundation mapping
NASA Astrophysics Data System (ADS)
Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.
2016-12-01
Various hydraulic/GIS-based tools can be used for illustrating the spatial extent of flooding for first responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off among accuracy, workability and computational demand between detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) and semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of spatial resolution in topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that, in a flat, urbanized area with a controlled/managed river channel, semi-empirical models tend to underestimate the inundation extent by around 40% compared to the physical models, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat non-urbanized terrain.
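The overlapping statistics used to compare inundation extents can be sketched with a fit index F = |A∩B| / |A∪B| over binary wet/dry grids; the grids below are toy data, not from the study:

```python
import numpy as np

# Toy binary inundation grids (True = wet cell) from a physics-based model
# and a semi-empirical model.
physical = np.array([[0, 1, 1, 1],
                     [0, 1, 1, 0],
                     [0, 0, 1, 0]], dtype=bool)
semi_empirical = np.array([[0, 0, 1, 1],
                           [0, 1, 1, 0],
                           [0, 0, 0, 0]], dtype=bool)

# Overlapping statistic: intersection over union of the two wet extents.
intersection = np.logical_and(physical, semi_empirical).sum()
union = np.logical_or(physical, semi_empirical).sum()
fit_index = intersection / union

# Fraction of the physical model's extent missed by the semi-empirical model.
underestimation = 1 - semi_empirical.sum() / physical.sum()
```

With real data the grids would come from rasterized model output at a common resolution; the underestimation fraction here corresponds to the ~40% figure discussed above.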
das Dores Graciano Silva, Maria; Martins, Maria Auxiliadora Parreiras; de Gouvêa Viana, Luciana; Passaglia, Luiz Guilherme; de Menezes, Renata Rezende; de Queiroz Oliveira, João Antonio; da Silva, Jose Luiz Padilha; Ribeiro, Antonio Luiz Pinho
2018-06-06
Adverse drug events (ADEs) can seriously compromise the safety and quality of care provided to hospitalized patients, requiring the adoption of accurate methods to monitor them. We sought to prospectively evaluate the accuracy of the triggers proposed by the Institute for Healthcare Improvement (IHI) for identifying ADEs. A prospective study was conducted in a public university hospital, in 2015, with patients ≥18 years. Triggers proposed by the IHI and clinical alterations suspected to be ADEs were searched daily. The number of days the patient was hospitalized was taken as the unit of measure to evaluate the accuracy of each trigger. Three hundred patients were included in this study. Mean age was 56.3 years (standard deviation (SD) 16.0), and 154 (51.3%) were female. The frequency of patients with ADEs was 24.7% and with at least one trigger was 53.3%. Among patients who had at least one trigger, the most frequent triggers were antiemetics (57.5%) and "abrupt medication stop" (31.8%). Trigger sensitivity ranged from 0.3 to 11.8% and the positive predictive value ranged from 1.2 to 27.3%. Specificity and negative predictive value were greater than 86%. Most patients identified by the presence of triggers did not have ADEs (64.4%). No triggers were identified in 40 (38.5%) ADEs. The IHI Trigger Tool did not show good accuracy in detecting ADEs in this prospective study. The adoption of combined strategies could enhance effectiveness in identifying patient safety flaws. Further discussion might contribute to improving trigger usefulness in clinical practice. This article is protected by copyright. All rights reserved.
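The per-trigger accuracy measures reported (sensitivity, specificity, PPV, NPV) follow from a 2x2 table over patient-days; the counts below are invented for illustration:

```python
def trigger_accuracy(tp, fp, fn, tn):
    """Accuracy measures for one trigger; tp/fp/fn/tn count patient-days."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a single trigger (e.g. antiemetic administration).
m = trigger_accuracy(tp=12, fp=88, fn=90, tn=2810)
```

These toy counts illustrate the pattern the study found: low sensitivity and PPV alongside high specificity and NPV.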
Evaluating online diagnostic decision support tools for the clinical setting.
Pryor, Marie; White, David; Potter, Bronwyn; Traill, Roger
2012-01-01
Clinical decision support tools available at the point of care are an effective adjunct to support clinicians to make clinical decisions and improve patient outcomes. We developed a methodology and applied it to evaluate commercially available online clinical diagnostic decision support (DDS) tools for use at the point of care. We identified 11 commercially available DDS tools and assessed these against an evaluation instrument that included 6 categories: general information, content, quality control, search, clinical results and other features. We developed diagnostically challenging clinical case scenarios based on real patient experience that were commonly missed by junior medical staff. The evaluation was divided into 2 phases: an initial evaluation of all identified and accessible DDS tools conducted by the Clinical Information Access Portal (CIAP) team, and a second phase that further assessed the top 3 tools identified in the initial evaluation phase. An evaluation panel consisting of senior and junior medical clinicians from NSW Health conducted the second phase. Of the eleven tools assessed against the evaluation instrument, only 4 completely met the DDS definition adopted for this evaluation and were able to produce a differential diagnosis. In the initial phase of the evaluation, 4 DDS tools scored 70% or more (maximum score 96%) for the content category, 8 tools scored 65% or more (maximum 100%) for the quality control category, 5 tools scored 65% or more (maximum 94%) for the search category, and 4 tools scored 70% or more (maximum 81%) for the clinical results category. The second phase of the evaluation focused on assessing diagnostic accuracy for the top 3 tools identified in the initial phase. Best Practice ranked highest overall against the 6 clinical case scenarios used.
Overall, the top 3 DDS tools were differentiated by diagnostic accuracy ranking, ease of use, and the confidence and credibility of their clinical information. The evaluation methodology used here to assess the quality and comprehensiveness of clinical DDS tools was effective in identifying the most appropriate tool for the clinical setting. The use of clinical case scenarios is fundamental in determining the diagnostic accuracy and usability of the tools.
Lee, Joshua K; Nordahl, Christine W; Amaral, David G; Lee, Aaron; Solomon, Marjorie; Ghetti, Simona
2015-11-01
Volumetric assessments of the hippocampus and other brain structures during childhood provide useful indices of brain development and correlates of cognitive functioning in typically and atypically developing children. Automated methods such as FreeSurfer promise efficient and replicable segmentation, but may include errors that are avoided by trained manual tracers. A recently devised automated correction tool that uses a machine learning algorithm to remove systematic errors, the Automatic Segmentation Adapter Tool (ASAT), was capable of substantially improving the accuracy of FreeSurfer segmentations in an adult sample [Wang et al., 2011], but the utility of ASAT has not been examined in pediatric samples. In Study 1, the validity of FreeSurfer and ASAT-corrected hippocampal segmentations was examined in 20 typically developing children and 20 children with autism spectrum disorder aged 2 and 3 years. We showed that while neither FreeSurfer nor ASAT accuracy differed by disorder or age, the accuracy of ASAT-corrected segmentations was substantially better than that of FreeSurfer segmentations in every case, using as few as 10 training examples. In Study 2, we applied ASAT to 89 typically developing children aged 2 to 4 years to examine relations between hippocampal volume, age, sex, and expressive language. Girls had smaller hippocampi overall, and in the left hippocampus this difference was larger in older than in younger girls. Expressive language ability was greater in older children, and this difference was larger in those with larger hippocampi, bilaterally. Overall, this research shows that ASAT is highly reliable and useful for examinations relating behavior to hippocampal structure. © 2015 Wiley Periodicals, Inc.
Radiological interpretation of images displayed on tablet computers: a systematic review
Armfield, N R; Smith, A C
2015-01-01
Objective: To review the published evidence and determine whether radiological diagnostic accuracy is compromised when images are displayed on a tablet computer, and thereby inform practice on the use of tablet computers for radiological interpretation by on-call radiologists. Methods: We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using the Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Results: 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84–98%), specificity (74–100%) and accuracy rates (98–100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was a near complete consensus from authors on the non-inferiority of the diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Conclusion: Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. Advances in knowledge: The iPad may be appropriate for an on-call radiologist to use for radiological interpretation. PMID:25882691
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, in which primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%), with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy.
This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
Accuracy of a Digital Weight Scale Relative to the Nintendo Wii in Measuring Limb Load Asymmetry
Kumar, NS Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah
2014-01-01
[Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenient sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing with three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and Kolmogorov-Smirnov test (K-S test). The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of digital weight scale relative to the Nintendo Wii. The D statistics value from the K-S test was found to be 0.16, which confirmed that there was no significant difference in measurement between the equipment. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. The low price, easy availability, and maneuverability make it a good potential tool in clinical settings for measuring limb load asymmetry. PMID:25202181
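The symmetry index used above can be computed in a few lines; the abstract does not give its exact formula, so the common formulation below, and the load values, are assumptions for illustration:

```python
def symmetry_index(load_left, load_right):
    """Limb load asymmetry (%); an assumed common formulation:
    |L - R| divided by the mean of the two limb loads, times 100."""
    return abs(load_left - load_right) / (0.5 * (load_left + load_right)) * 100

# Hypothetical limb loads (kg) read from the scale during static standing.
si = symmetry_index(42.0, 38.0)  # 0 would mean perfect symmetry
```

Either device (digital scale or Wii balance board) yields a left/right load pair, so the same index can be compared across instruments, as the study does.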
Increasing Army Supply Chain Performance: Using an Integrated End to End Metrics System
2017-01-01
[Extraction residue from a metrics figure: delivery schedule, delinquent contracts, current metrics, PQDR/SDRs, forecasting accuracy, reliability, demand management, asset management strategies, pipeline.] Factors are identified and characterized by statistical analysis. The study proposed a framework and tool for inventory management based on factors such as
Lung ultrasound in diagnosing pneumonia in childhood: a systematic review and meta-analysis.
Orso, Daniele; Ban, Alessio; Guglielmo, Nicola
2018-06-21
Pneumonia is the third leading cause of death in children under 5 years of age worldwide. In pediatrics, both the accuracy and safety of diagnostic tools are important; lung ultrasound (LUS) could be a safe diagnostic tool for this reason. We conducted a systematic review and meta-analysis of diagnostic studies of LUS for predicting pneumonia in pediatric patients. The Medline, CINAHL, Cochrane Library, Embase, SPORTDiscus, ScienceDirect, and Web of Science databases were searched from inception to September 2017. All studies that evaluated the diagnostic accuracy of LUS in determining the presence of pneumonia in patients under 18 years of age were included. 1042 articles were found by the systematic search, and 76 were assessed for eligibility. Seventeen studies were included in the systematic review, with 2612 pooled cases. The age of the pooled sample population ranged from 0 to about 21 years. Summary sensitivity, specificity, and AUC were 0.94 (IQR: 0.89-0.97), 0.93 (IQR: 0.86-0.98), and 0.98 (IQR: 0.94-0.99), respectively. No agreement on a reference standard was detected: nine studies used chest X-rays, four considered the clinical diagnosis, and only one used computed tomography. LUS seems to be a promising tool for diagnosing pneumonia in children. However, the high heterogeneity across the individual studies and the absence of a reliable reference standard make this finding questionable. More methodologically rigorous studies are needed.
NASA Astrophysics Data System (ADS)
Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter
2016-10-01
Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of the company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, quality of input data, time period and other factors. The input data are usually not deterministic but are often of a random nature: they are affected by uncertainties of the market environment and many other factors. By taking the input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecast value is simulated in a case study model. Statistical analysis and risk analysis of the forecast results are carried out, including sensitivity analysis and variable impact analysis.
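A Monte Carlo sketch of the described approach: uncertain inputs are sampled from assumed distributions and propagated to the forecast. All names, values and distributions here are illustrative assumptions, not taken from the case study:

```python
import random

random.seed(1)  # reproducible illustration

def simulate_forecast(n_trials=10_000):
    """Propagate uncertain inputs to the forecasted sales volume."""
    outcomes = []
    for _ in range(n_trials):
        base_demand = random.gauss(1000, 50)  # units; uncertain market demand
        growth = random.uniform(0.00, 0.10)   # uncertain growth rate
        outcomes.append(base_demand * (1 + growth))
    return sorted(outcomes)

results = simulate_forecast()
mean_forecast = sum(results) / len(results)
# An approximate 90% interval from the empirical distribution.
p5 = results[len(results) // 20]
p95 = results[-(len(results) // 20)]
```

Sensitivity analysis then amounts to re-running the simulation while varying one input distribution at a time and observing the change in the forecast distribution.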
A new software for prediction of femoral neck fractures.
Testi, Debora; Cappello, Angelo; Sgallari, Fiorella; Rumpf, Martin; Viceconti, Marco
2004-08-01
Femoral neck fractures are an important clinical, social and economic problem. Even if many different attempts have been carried out to improve the accuracy predicting the fracture risk, it was demonstrated in retrospective studies that the standard clinical protocol achieves an accuracy of about 65%. A new procedure was developed including for the prediction not only bone mineral density but also geometric and femoral strength information and achieving an accuracy of about 80% in a previous retrospective study. Aim of the present work was to re-engineer research-based procedures and develop a real-time software for the prediction of the risk for femoral fracture. The result was efficient, repeatable and easy to use software for the evaluation of the femoral neck fracture risk to be inserted in the daily clinical practice providing a useful tool for the improvement of fracture prediction.
Using the red/yellow/green discharge tool to improve the timeliness of hospital discharges.
Mathews, Kusum S; Corso, Philip; Bacon, Sandra; Jenq, Grace Y
2014-06-01
As part of Yale-New Haven Hospital (Connecticut)'s Safe Patient Flow Initiative, the physician leadership developed the Red/Yellow/Green (RYG) Discharge Tool, an electronic medical record-based prompt to identify the likelihood of patients' next-day discharge: green (very likely), yellow (possibly), and red (unlikely). The tool's purpose was to enhance communication with nursing/care coordination and trigger earlier discharge steps for patients identified as "green" or "yellow." Data on discharge assignments, discharge dates/times, and team designation were collected for all adult medicine patients discharged in October-December 2009 (Study Period 1) and October-December 2011 (Study Period 2), between which the tool's placement changed from the sign-out note to the daily progress note. In Study Period 1, 75.9% of the patients had discharge assignments, compared with 90.8% in Period 2 (p < .001). The overall 11 A.M. discharge rate improved from 10.4% to 21.2% from 2007 to 2011. "Green" patients were more likely to be discharged before 11 A.M. than "yellow" or "red" patients (p < .001). Patients with RYG assignments discharged by 11 A.M. had a shorter length of stay than those without assignments and did not have an associated increased risk of readmission. Discharge prediction accuracy worsened after the change in placement, decreasing from 75.1% to 59.1% for "green" patients (p < .001), and from 34.5% to 29.2% (p < .001) for "yellow" patients. In both periods, hospitalists were more accurate than house staff in discharge predictions, suggesting that education and/or experience may contribute to discharge assignment accuracy. The RYG Discharge Tool helped facilitate earlier discharges, but accuracy depends on placement in the daily work flow and on experience.
Technology of machine tools. Volume 5. Machine tool accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hocken, R.J.
1980-10-01
The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, the accuracy of the prediction of genome-wide breeding values (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by cross-validation. The accuracies of the GW-EBV prediction were found to vary widely, between 0.12 and 0.62. G-BLUP gave the highest accuracy overall. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and lower bias than for health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.
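Cross-validated genomic prediction of this kind can be sketched with ridge regression, a shrinkage estimator of marker effects equivalent in spirit to SNP-BLUP/G-BLUP; the genotypes and phenotypes below are simulated toy data, not the Norwegian Red data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated toy data standing in for SNP genotypes (0/1/2) and phenotypes:
# 200 animals, 500 SNPs, of which 20 carry true effects.
n_animals, n_snps = 200, 500
X = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)
beta = np.zeros(n_snps)
beta[:20] = rng.normal(0.0, 1.0, 20)
y = X @ beta + rng.normal(0.0, 5.0, n_animals)  # phenotype = genetics + noise

def ridge_predict(X_tr, y_tr, X_te, lam=100.0):
    """Ridge regression of phenotype on markers (SNP-BLUP-like shrinkage)."""
    A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
    b_hat = np.linalg.solve(A, X_tr.T @ y_tr)
    return X_te @ b_hat

# 5-fold cross-validation; "accuracy" here is the correlation between
# predicted and observed phenotype in the left-out fold.
folds = np.array_split(np.arange(n_animals), 5)
accs = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n_animals), test_idx)
    pred = ridge_predict(X[train_idx], y[train_idx], X[test_idx])
    accs.append(np.corrcoef(pred, y[test_idx])[0, 1])
accuracy = float(np.mean(accs))
```

Raising the noise variance (i.e. lowering heritability) in this simulation lowers the cross-validated accuracy, matching the heritability-accuracy relationship the study reports.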
NASA Astrophysics Data System (ADS)
Kavungal, Vishnu; Farrell, Gerald; Wu, Qiang; Kumar Mallik, Arun; Semenova, Yuliya
2018-03-01
This paper experimentally demonstrates a method for geometrical profiling of asymmetries in fabricated thin microfiber tapers with waist diameters ranging from ∼10 to ∼50 μm with submicron accuracy. The method is based on the analysis of whispering gallery mode resonances excited in cylindrical fiber resonators as a result of evanescent coupling of light propagating through the fiber taper. The submicron accuracy of the proposed method has been verified by SEM studies. The method can be applied as a quality control tool in fabrication of microfiber based devices and sensors or for fine-tuning of microfiber fabrication set-ups.
Palma, JP; Sharek, PJ; Longhurst, CA
2016-01-01
Objective To evaluate the impact of integrating a handoff tool into the electronic medical record (EMR) on sign-out accuracy, satisfaction and workflow in a neonatal intensive care unit (NICU). Study Design Prospective surveys of neonatal care providers in an academic children’s hospital 1 month before and 6 months following EMR integration of a standalone Microsoft Access neonatal handoff tool. Result Providers perceived sign-out information to be somewhat or very accurate at a rate of 78% with the standalone handoff tool and 91% with the EMR-integrated tool (P < 0.01). Before integration of neonatal sign-out into the EMR, 35% of providers were satisfied with the process of updating sign-out information and 71% were satisfied with the printed sign-out document; following EMR integration, 92% of providers were satisfied with the process of updating sign-out information (P < 0.01) and 98% were satisfied with the printed sign-out document (P < 0.01). Neonatal care providers reported spending a median of 11 to 15 min/day updating the standalone sign-out and 16 to 20 min/day updating the EMR-integrated sign-out (P = 0.026). The median percentage of total sign-out preparation time dedicated to transcribing information from the EMR was 25 to 49% before and <25% after EMR integration of the handoff tool (P < 0.01). Conclusion Integration of a NICU-specific handoff tool into an EMR resulted in improvements in perceived sign-out accuracy, provider satisfaction and at least one aspect of workflow. PMID:21273990
Detecting anxiety in individuals with Parkinson disease: A systematic review.
Mele, Bria; Holroyd-Leduc, Jayna; Smith, Eric E; Pringsheim, Tamara; Ismail, Zahinoor; Goodarzi, Zahra
2018-01-02
To examine the diagnostic accuracy of anxiety detection tools compared with a gold standard in outpatient settings among adults with Parkinson disease (PD), a systematic review was conducted. MEDLINE, EMBASE, PsycINFO, and the Cochrane Database of Systematic Reviews were searched to April 7, 2017. Prevalence of anxiety and diagnostic accuracy measures, including sensitivity, specificity, and likelihood ratios, were gathered. Pooled prevalence of anxiety was calculated using Mantel-Haenszel-weighted DerSimonian and Laird models. A total of 6,300 citations were reviewed, with 6 full-text articles included for synthesis. Tools included within this study were the Beck Anxiety Inventory, Geriatric Anxiety Inventory (GAI), Hamilton Anxiety Rating Scale, Hospital Anxiety and Depression Scale-Anxiety, Parkinson's Anxiety Scale (PAS), and Mini-Social Phobia Inventory. Anxiety diagnoses made included generalized anxiety disorder, social phobia, and any anxiety type. Pooled prevalence of anxiety was 30.1% (95% confidence interval 26.1%-34.0%). The GAI had the best reported sensitivity of 0.86 and specificity of 0.88. The observer-rated PAS had a sensitivity of 0.71 and the highest specificity of 0.91. While there are 6 tools validated for anxiety screening in PD populations, most are validated only in single studies. The GAI is brief and easy to use, with a good balance of sensitivity and specificity. The PAS was specifically developed for PD, is brief, and has self-/observer-rated scales, but with lower sensitivity. Health care practitioners involved in PD care need to be aware of the available validated tools and choose one that fits their practice. Copyright © 2017 American Academy of Neurology.
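The DerSimonian-Laird random-effects pooling behind a pooled prevalence can be sketched as follows; for brevity this works on the raw proportion scale (the review may pool on a transformed scale), and the per-study proportions and sample sizes are invented:

```python
def dersimonian_laird_pool(props, ns):
    """Random-effects (DerSimonian-Laird) pooled proportion.
    props: per-study prevalences; ns: per-study sample sizes."""
    # Within-study variances and fixed-effect (inverse-variance) weights
    vs = [p * (1 - p) / n for p, n in zip(props, ns)]
    ws = [1.0 / v for v in vs]
    fixed = sum(w * p for w, p in zip(ws, props)) / sum(ws)
    # Cochran's Q and the between-study variance tau^2
    q = sum(w * (p - fixed) ** 2 for w, p in zip(ws, props))
    df = len(props) - 1
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2
    ws_re = [1.0 / (v + tau2) for v in vs]
    return sum(w * p for w, p in zip(ws_re, props)) / sum(ws_re)

# Invented per-study prevalences and sample sizes (six studies, as in the review).
pooled = dersimonian_laird_pool([0.25, 0.32, 0.28, 0.35, 0.30, 0.27],
                                [120, 90, 150, 60, 110, 80])
```

When tau² is zero the estimate reduces to the fixed-effect inverse-variance pooled proportion; larger heterogeneity pushes the weights toward equality across studies.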
Blower, Sally; Go, Myong-Hyun
2011-07-19
Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.
Behairy, Noha H.; Dorgham, Mohsen A.
2008-01-01
The aim of this study was to detect the accuracy of routine magnetic resonance imaging (MRI) done in different centres and its agreement with arthroscopy in meniscal and ligamentous injuries of the knee. We prospectively examined 70 patients ranging in age between 22 and 59 years. History taking, plain X-ray, clinical examination, routine MRI and arthroscopy were done for all patients. Sensitivity, specificity, accuracy, positive and negative predictive values, P value and kappa agreement measures were calculated. We found a sensitivity of 47 and 100%, specificity of 95 and 75% and accuracy of 73 and 78.5%, respectively, for the medial and lateral meniscus. A sensitivity of 77.8%, specificity of 100% and accuracy of 94% was noted for the anterior cruciate ligament (ACL). We found good kappa agreements (0.43 and 0.45) for both menisci and excellent agreement (0.84) for the ACL. MRI shows high accuracy and should be used as the primary diagnostic tool for selection of candidates for arthroscopy. Level of evidence: 4. PMID:18506445
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples, far smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets is unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity; these optimization strategies can bring serious limitations in applications. Moreover, to meet continuously rising expectations of these computational tools, improving prediction accuracy is extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset improved by 7%, to 93%. The meta-predictor also proved to be less dependent on the dataset and to have a refined balance between sensitivity and specificity. This study is important in two ways: First, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors.
Second, it provides the community with a new miRNA predictor with significantly improved prediction accuracy for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
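The stacking scheme described above — transform the base predictors' scores non-linearly, then feed them to a small neural network — can be sketched as follows. This is a minimal illustration on synthetic scores, not mirMeta's actual transformations or architecture (those are in the linked repository); the logit transform is one plausible choice of non-linearity.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical scores in [0, 1] from five base predictors for 200 transcripts;
# labels mark whether each transcript contains a miRNA.
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels[:, None] * 0.4 + rng.uniform(0.0, 0.6, size=(200, 5)),
                 0.0, 1.0)

# Non-linear transformation of raw scores before the meta-stage (a logit
# transform is one plausible choice; mirMeta's exact transforms differ).
eps = 1e-6
features = np.log((scores + eps) / (1.0 - scores + eps))

# A small feed-forward network serves as the meta-predictor.
meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
meta.fit(features, labels)
print(round(meta.score(features, labels), 2))
```

In practice the meta-stage would be evaluated with cross-validation and an independent test set, as the abstract describes, rather than on its training data.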
seXY: a tool for sex inference from genotype arrays.
Qian, David C; Busam, Jonathan A; Xiao, Xiangjun; O'Mara, Tracy A; Eeles, Rosalind A; Schumacher, Frederick R; Phelan, Catherine M; Amos, Christopher I
2017-02-15
Checking concordance between reported sex and genotype-inferred sex is a crucial quality control measure in genome-wide association studies (GWAS). However, limited insights exist regarding the true accuracy of software that infers sex from genotype array data. We present seXY, a logistic regression model trained on both X chromosome heterozygosity and Y chromosome missingness that consistently demonstrated >99.5% sex inference accuracy in cross-validation for 889 males and 5,361 females enrolled in prostate cancer and ovarian cancer GWAS. Compared to PLINK, one of the most popular tools for sex inference in GWAS, which assesses only X chromosome heterozygosity, seXY achieved marginally better male classification and 3% more accurate female classification. https://github.com/Christopher-Amos-Lab/seXY. Christopher.I.Amos@dartmouth.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
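The core idea — logistic regression on the two features — can be sketched on synthetic data. The feature ranges below are assumptions for illustration (males typically show low X-chromosome heterozygosity and low Y-genotype missingness, females the reverse); the real model is trained on summaries computed from genotype arrays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins for the two seXY features; the means and spreads
# are illustrative assumptions, not values from the paper.
n = 500
sex = rng.integers(0, 2, size=n)                     # 0 = male, 1 = female
x_het = np.where(sex == 1, rng.normal(0.30, 0.03, n), rng.normal(0.02, 0.01, n))
y_miss = np.where(sex == 1, rng.normal(0.95, 0.02, n), rng.normal(0.05, 0.02, n))
X = np.column_stack([x_het, y_miss])

# Two-feature logistic regression, as in the seXY design.
model = LogisticRegression(max_iter=1000).fit(X, sex)
print(round(model.score(X, sex), 3))
```

Because the two features separate the sexes so strongly, even this toy model classifies nearly perfectly, which mirrors the >99.5% cross-validated accuracy reported above.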
Lewis, Thomas L; Fothergill, Rachael T; Karthikesalingam, Alan
2016-10-24
Rupture of an abdominal aortic aneurysm (rAAA) carries a considerable mortality rate and is often fatal. rAAA can be treated through open or endovascular surgical intervention, and more rapid access to definitive intervention may be a key aspect of improving rAAA mortality. Diagnosis is not always straightforward, with up to 42% of rAAAs initially misdiagnosed, introducing potentially harmful delay. There is a need for an effective clinical decision support tool for accurate prehospital diagnosis and triage to enable transfer to an appropriate centre. This is a prospective multicentre observational study assessing the diagnostic accuracy of a prehospital smartphone triage tool for detection of rAAA. The study will be conducted across London in conjunction with the London Ambulance Service (LAS). A logistic score predicting the risk of rAAA from ten key parameters was developed and retrospectively validated through logistic regression analysis of ambulance records and Hospital Episode Statistics data for 2200 patients from 2005 to 2010. The triage tool is integrated into a secure mobile app for major smartphone platforms. Key parameters collected from the app will be retrospectively matched with the final hospital discharge diagnosis for each patient encounter. The primary outcome is the sensitivity, specificity and positive predictive value of the rAAA triage tool logistic score in prospective use as a mobile app by prehospital ambulance clinicians. Data collection started in November 2014 and the study will recruit a minimum of 1150 non-consecutive patients over a period of 2 years. Full ethical approval has been gained for this study. The results of this study will be disseminated in peer-reviewed publications and international/national presentations. CPMS 16459; pre-results. Published by the BMJ Publishing Group Limited.
Wiebking, Ulrich; Pacha, Tarek Omar; Jagodzinski, Michael
2015-03-01
Ankle sprain injuries, often due to lateral ligamentous injury, are the most common conditions in sports traumatology. Correct diagnosis requires an understanding of which assessment tools offer a high degree of diagnostic accuracy, yet there is still no clear consensus or standard method for differentiating between a ligament tear and an ankle sprain. In addition to clinical assessment, stress sonography, arthrometry and other methods are often performed simultaneously. These methods are often costly, however, and their accuracy is controversial. The aim of this study was to determine the diagnostic accuracy of three different measurement tools that can be used after a lateral ligament lesion of the ankle with injury of the anterior talofibular ligament. Thirty patients were recruited for this study. The mean patient age was 35±14 years. There were 15 patients with a ligamentous rupture and 15 patients with an ankle sprain. We evaluated two devices and one clinical assessment, for each of which we calculated sensitivity and specificity: stress sonography according to Hoffmann, an arthrometer used to investigate the 100N talar drawer and maximum manual testing, and the clinical anterior drawer test. High-resolution sonography served as the gold standard. The ultrasound-assisted device according to Hoffmann, with a 3mm cut-off value, displayed a sensitivity of 0.27 and a specificity of 0.87. Using a 3.95mm cut-off value, the arthrometer displayed a sensitivity of 0.8 and a specificity of 0.4. The clinical investigation's sensitivity and specificity were 0.93 and 0.67, respectively. Different assessment methods for diagnosing ankle ligament ruptures are suggested in the literature; however, these methods lack the reliable data needed to set investigation standards. Clinical examination under adequate analgesia seems to remain the most reliable tool for investigating ligamentous ankle lesions.
Further clinical studies with higher case numbers are necessary, however, to evaluate these findings and to measure the reliability. Copyright © 2014 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra
2016-06-01
Health effects associated with occupational exposure to particulate matter have been studied by several authors. For this study, six industrial settings from five different sectors were selected: cork company 1, cork company 2, a poultry plant, a cattle slaughterhouse, a riding arena and an animal feed production plant. The measurement tool was a portable direct-reading device. This tool provides the particle number concentration at six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. The focus is on these features because they may be more closely related to adverse health effects. The aim is to identify the particle diameters that best discriminate the industries, with the ultimate goal of classifying industries with regard to potential negative effects on workers' health. Several methods of discriminant analysis were applied to the occupational exposure data and compared with respect to classification accuracy. The selected methods were linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); robust linear discriminant analysis with selected estimators (MLE (maximum likelihood estimators), MVE (minimum volume ellipsoid), "t", MCD (minimum covariance determinant), MCD-A, MCD-B); multinomial logistic regression; and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest classification accuracy on the data set under study. Results indicate that the particle number concentration at the 0.5 µm diameter is the parameter that best discriminates the industries.
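A hedged sketch of this kind of method comparison on surrogate data, using scikit-learn stand-ins for four of the methods (the study's robust discriminant variants such as MCD-based LDA are not reproduced here, and the data are generated, not the exposure measurements):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Surrogate data standing in for the particle counts: six features (one per
# particle diameter) and six classes (one per industrial setting).
X, y = make_classification(n_samples=600, n_features=6, n_informative=6,
                           n_redundant=0, n_classes=6, n_clusters_per_class=1,
                           random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "multinomial LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
# Mean 5-fold cross-validated accuracy per method.
results = {name: cross_val_score(m, X, y, cv=5).mean()
           for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```

The comparison logic (same folds, same accuracy metric for every method) is the part that carries over to the real data.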
Are general surgeons able to accurately self-assess their level of technical skills?
Rizan, C; Ansell, J; Tilston, T W; Warren, N; Torkington, J
2015-11-01
Self-assessment is a way of improving technical capabilities without the need for trainer feedback. It can identify areas for improvement and promote professional medical development. The aim of this review was to identify whether self-assessment is an accurate form of technical skills appraisal in general surgery. The PubMed, MEDLINE(®), Embase(™) and Cochrane databases were searched for studies assessing the reliability of self-assessment of technical skills in general surgery. For each study, we recorded the skills assessed and the evaluation methods used. Common endpoints between studies were compared to provide recommendations based on the levels of evidence. Twelve studies met the inclusion criteria from 22,292 initial papers. There was no level 1 evidence published. All papers compared the correlation between self-appraisal versus an expert score but differed in the technical skills assessment and the evaluation tools used. The accuracy of self-assessment improved with increasing experience (level 2 recommendation), age (level 3 recommendation) and the use of video playback (level 3 recommendation). Accuracy was reduced by stressful learning environments (level 2 recommendation), lack of familiarity with assessment tools (level 3 recommendation) and in advanced surgical procedures (level 3 recommendation). Evidence exists to support the reliability of self-assessment of technical skills in general surgery. Several variables have been shown to affect the accuracy of self-assessment of technical skills. Future work should focus on evaluating the reliability of self-assessment during live operating procedures.
Guettier, Jean-Marc; Kam, Anthony; Chang, Richard; Skarulis, Monica C; Cochran, Craig; Alexander, H Richard; Libutti, Steven K; Pingpank, James F; Gorden, Phillip
2009-04-01
Selective intraarterial calcium injection of the major pancreatic arteries with hepatic venous sampling [calcium arterial stimulation (CaStim)] has been used as a localizing tool for insulinomas at the National Institutes of Health (NIH) since 1989. The accuracy of this technique for localizing insulinomas was reported for all cases until 1996. The aim of the study was to assess the accuracy and track record of the CaStim over time and in the context of evolving technology and to review issues related to result interpretation and procedure complications. CaStim was the only invasive preoperative localization modality used at our center. Endoscopic ultrasound (US) was not studied. We conducted a retrospective case review at a referral center. Twenty-nine women and 16 men (mean age, 47 yr; range, 13-78) were diagnosed with an insulinoma from 1996-2008. A supervised fast was conducted to confirm the diagnosis of insulinoma. US, computed tomography (CT), magnetic resonance imaging (MRI), and CaStim were used as preoperative localization studies. Localization predicted by each preoperative test was compared to surgical localization for accuracy. We measured the accuracy of US, CT, MRI, and CaStim for localization of insulinomas preoperatively. All 45 patients had surgically proven insulinomas. Thirty-eight of 45 (84%) localized to the correct anatomical region by CaStim. In five of 45 (11%) patients, the CaStim was falsely negative. Two of 45 (4%) had false-positive localizations. The CaStim has remained vastly superior to abdominal US, CT, or MRI over time as a preoperative localizing tool for insulinomas. The utility of the CaStim for this purpose and in this setting is thus validated.
Owens, Kailey M; Marvin, Monica L; Gelehrter, Thomas D; Ruffin, Mack T; Uhlmann, Wendy R
2011-10-01
This study examined medical students' and house officers' opinions about the Surgeon General's "My Family Health Portrait" (MFHP) tool. Participants used the tool and were surveyed about tool mechanics, potential clinical uses, and barriers. None of the 97 participants had previously used this tool. The average time to enter a family history was 15 min (range 3 to 45 min). Participants agreed or strongly agreed that the MFHP tool is understandable (98%), easy to use (93%), and suitable for general public use (84%). Sixty-seven percent would encourage their patients to use the tool; 39% would ensure staff assistance. Participants would use the tool to identify patients at increased risk for disease (86%), record family history in the medical chart (84%), recommend preventive health behaviors (80%), and refer to genetics services (72%). Concerns about use of the tool included patient access, information accuracy, technical challenges, and the need for physician education on interpreting family history information.
Image edge detection based tool condition monitoring with morphological component analysis.
Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng
2017-07-01
The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection is a fundamental technique for extracting image features. The proposed approach extracts the tool edge with morphological component analysis: by decomposing the original tool wear image, it reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract a continuous, complete tool wear edge contour, which makes it convenient for characterizing tool conditions. Compared with established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Quinn, Kieran L; Crystal, Eugene; Lashevsky, Ilan; Arouny, Banafsheh; Baranchuk, Adrian
2016-07-01
We previously developed a novel digital tool capable of automatically recognizing correct electrocardiography (ECG) diagnoses in an online exam and demonstrated a significant improvement in diagnostic accuracy when an inductive-deductive reasoning strategy was used rather than a pattern recognition strategy. In this study, we sought to validate these findings with participants at the International Winter Arrhythmia School meeting, one of the foremost electrophysiology events in Canada. Preregistration for the event was sent by e-mail. The exam was administered on day 1 of the conference, and results and analysis were presented to participants the following morning. Twenty-five attendees completed the exam, providing a total of 500 responses to be marked. The online tool automatically identified 195 of the 395 correct responses (49%). In total, 305 responses required secondary manual review, of which 200 were added to the correct responses pool. The overall accuracy of correct ECG diagnosis for all participants was 69% when using a pattern recognition strategy and 84% when using an inductive-deductive strategy. A novel digital tool for evaluating ECG competency can be set up as a workshop at international meetings or educational events, and results can be presented during the sessions to ensure immediate feedback. © 2015 Wiley Periodicals, Inc.
Yu-Fei, Wang; Wei-Ping, Jia; Ming-Hsun, Wu; Miao-O, Chien; Ming-Chang, Hsieh; Chi-Pin, Wang; Ming-Shih, Lee
2017-09-01
The system accuracy of current blood glucose monitors (BGMs) on the market has already been evaluated extensively, but evaluations have mostly focused on European and North American manufacturers; data on BGMs manufactured in the Asia-Pacific region remain to be established. In this study, we sought to assess the accuracy performance of 19 BGMs manufactured in the Asia-Pacific region. The 19 BGMs were obtained from local pharmacies in China, and the study was conducted at three hospitals located in the Asia-Pacific region. Measurement results from each system were compared with results from the reference instrument (YSI 2300 PLUS Glucose Analyzer), and accuracy evaluation was performed in accordance with the ISO 15197:2003 and updated 2015 guidelines. Radar plots, a new method, are described herein to visualize the analytical performance of the 19 BGMs evaluated, and the consensus error grid was used to evaluate the clinical significance of the results. The 19 BGMs had between 83.5% and 100.0% of results within the ISO 15197:2003 error limits, and between 71.3% and 100.0% within the EN ISO 15197:2015 (ISO 15197:2013) error limits. Of the 19 BGMs evaluated, 12 met the minimal accuracy requirement of the ISO 15197:2003 standard, whereas only 4 met the tighter EN ISO 15197:2015 (ISO 15197:2013) requirements. Accuracy evaluation of BGMs should be performed regularly to maximize patient safety.
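For reference, the ISO 15197:2013 (EN ISO 15197:2015) accuracy criterion requires 95% of meter results to fall within ±15 mg/dL of the reference value below 100 mg/dL and within ±15% at or above it. A small per-reading checker might look like this (the paired readings below are hypothetical, not the study's data):

```python
def within_iso_2013(meter, reference):
    """True if a reading meets the EN ISO 15197:2015 (ISO 15197:2013) limit:
    within +/-15 mg/dL of the reference below 100 mg/dL, else within +/-15%."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

# Hypothetical paired readings (meter, YSI reference), in mg/dL.
pairs = [(92, 88), (110, 130), (250, 240), (61, 75), (180, 178)]
ok = [within_iso_2013(m, r) for m, r in pairs]
print(f"{100 * sum(ok) / len(ok):.1f}% of results within limits")
```

A full evaluation would apply this check to hundreds of paired measurements per meter and compare the resulting percentage against the standard's 95% requirement.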
Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert
2003-11-01
To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
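The screening statistics reported above follow directly from a 2×2 table via the standard formulae (LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity). The counts in this sketch are illustrative only, chosen to echo the reported sensitivity and specificity, since the abstract does not give the raw table:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and likelihood ratios from a
    2x2 screening table (true/false positives and negatives)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1 - spec)          # LR+ = sensitivity / (1 - specificity)
    lr_neg = (1 - sens) / spec          # LR- = (1 - sensitivity) / specificity
    return sens, spec, acc, lr_pos, lr_neg

# Illustrative counts only, not the study's raw data.
sens, spec, acc, lr_pos, lr_neg = screening_metrics(tp=186, fp=23, fn=19, tn=555)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} "
      f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```

Note how a modest gain in specificity translates into a large LR+: this is why physician judgement's LR+ of 21.1 so strongly outperformed the WHO criteria's 3.1.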
Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo
2017-05-01
Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g. radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming, manual review of radiology reports or the use of complex and/or proprietary natural language processing software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source natural language processing tool to classify radiology reports with high accuracy and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http://iturrate.com/simpleNLP. 
Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard. Copyright © 2017 Elsevier B.V. All rights reserved.
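The tool itself is configured through five user-set parameters; as a generic illustration of the keyword-plus-negation style of rule-based report classification (these patterns are illustrative assumptions, not the actual simpleNLP rules), consider:

```python
import re

# Illustrative patterns: a "positive" vocabulary and a negation window that
# suppresses positives preceded by a negating phrase in the same sentence.
POSITIVE = re.compile(r"\b(thrombus|thrombosis|pulmonary embol)", re.I)
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]{0,40}"
                      r"(thrombus|thrombosis|embol)", re.I)

def classify(report):
    """Return True if the report text is read as VTE-positive."""
    if NEGATION.search(report):
        return False
    return bool(POSITIVE.search(report))

print(classify("Acute deep venous thrombosis of the left femoral vein."))  # True
print(classify("No evidence of pulmonary embolism."))                      # False
```

Negation handling is the crux of this task: without it, nearly every negative report ("no evidence of PE") would be misclassified as positive, which is why a reviewed training set is used to tune the rules.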
How Decisions Evolve: The Temporal Dynamics of Action Selection
ERIC Educational Resources Information Center
Scherbaum, Stefan; Dshemuchadse, Maja; Fischer, Rico; Goschke, Thomas
2010-01-01
To study the process of decision-making under conflict, researchers typically analyze response latency and accuracy. However, these tools provide little evidence regarding how the resolution of conflict unfolds over time. Here, we analyzed the trajectories of mouse movements while participants performed a continuous version of a spatial conflict…
Voice Recognition: A New Assessment Tool?
ERIC Educational Resources Information Center
Jones, Darla
2005-01-01
This article presents the results of a study conducted in Anchorage, Alaska, that evaluated the accuracy and efficiency of using voice recognition (VR) technology to collect oral reading fluency data for classroom-based assessments. The primary research question was as follows: Is voice recognition technology a valid and reliable alternative to…
Teacher Logs: A Tool for Gaining a Comprehensive Understanding of Classroom Practices
ERIC Educational Resources Information Center
Glennie, Elizabeth J.; Charles, Karen J.; Rice, Olivia N.
2017-01-01
Examining repeated classroom encounters over time provides a comprehensive picture of activities. Studies of instructional practices in classrooms have traditionally relied on two methods: classroom observations, which are expensive, and surveys, which are limited in scope and accuracy. Teacher logs provide a "real-time" method for…
Can Visualizing Document Space Improve Users' Information Foraging?
ERIC Educational Resources Information Center
Song, Min
1998-01-01
This study shows how users access relevant information in a visualized document space and determine whether BiblioMapper, a visualization tool, strengthens an information retrieval (IR) system and makes it more usable. BiblioMapper, developed for a CISI collection, was evaluated by accuracy, time, and user satisfaction. Users' navigation…
Boerma, Tessel; Chiat, Shula; Leseman, Paul; Timmermeister, Mona; Wijnen, Frank; Blom, Elma
2015-12-01
This study evaluated a newly developed quasi-universal nonword repetition task (Q-U NWRT) as a diagnostic tool for bilingual children with language impairment (LI) who have Dutch as a 2nd language. The Q-U NWRT was designed to be minimally influenced by knowledge of 1 specific language in contrast to a language-specific NWRT with which it was compared. One hundred twenty monolingual and bilingual children with and without LI participated (30 per group). A mixed-design analysis of variance was used to investigate the effects of LI and bilingualism on the NWRTs. Receiver operating characteristic analyses were conducted to evaluate the instruments' diagnostic value. Large negative effects of LI were found on both NWRTs, whereas negative effects of bilingualism only occurred on the language-specific NWRT. Both instruments had high clinical accuracy in the monolingual group, but only the Q-U NWRT had high clinical accuracy in the bilingual group. This study indicates that the Q-U NWRT is a promising diagnostic tool to help identify LI in bilingual children learning Dutch as a 2nd language. The instrument was clinically accurate in both a monolingual and bilingual group of children and seems better able to disentangle LI from language disadvantage than more language-specific measures.
Ernst, Corinna; Hahnen, Eric; Engel, Christoph; Nothnagel, Michael; Weber, Jonas; Schmutzler, Rita K; Hauke, Jan
2018-03-27
The use of next-generation sequencing approaches in clinical diagnostics has led to a tremendous increase in data and a vast number of variants of uncertain significance that require interpretation. Therefore, prediction of the effects of missense mutations using in silico tools has become a frequently used approach. The aim of this study was to assess the reliability of in silico prediction as a basis for clinical decision making in the context of hereditary breast and/or ovarian cancer. We tested the performance of four prediction tools (Align-GVGD, SIFT, PolyPhen-2, MutationTaster2) using a set of 236 BRCA1/2 missense variants that had previously been classified by expert committees. A major pitfall in creating a reliable evaluation set for our purpose, however, is that the generally accepted classification of BRCA1/2 missense variants uses the multifactorial likelihood model, which is partially based on Align-GVGD results. To overcome this drawback we identified 161 variants whose classification is independent of any previous in silico prediction. In addition to performance as stand-alone tools, we examined the sensitivity, specificity, accuracy and Matthews correlation coefficient (MCC) of combined approaches. PolyPhen-2 achieved the lowest sensitivity (0.67), specificity (0.67), accuracy (0.67) and MCC (0.39). Align-GVGD achieved the highest specificity (0.92), accuracy (0.92) and MCC (0.73), but was outperformed in sensitivity (0.90) by SIFT (1.00) and MutationTaster2 (1.00). All tools suffered from poor specificity, resulting in an unacceptable proportion of false-positive results in a clinical setting. This shortcoming could not be bypassed by combining these tools. In the best-case scenario, 138 families within the cohort of the German Consortium for Hereditary Breast and Ovarian Cancer would be affected by the misclassification of neutral variants.
We show that, due to low specificity, state-of-the-art in silico prediction tools are not suitable for predicting the pathogenicity of variants of uncertain significance in BRCA1/2. Thus, clinical consequences should never be based solely on in silico forecasts. However, our data suggest that SIFT and MutationTaster2 could be suitable for predicting benignity, as neither tool produced false-negative predictions in our analysis.
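The sensitivity, specificity, accuracy, and MCC used above can all be computed from binary prediction counts. The counts in this sketch are hypothetical, chosen to mimic a tool with perfect sensitivity but poor specificity, the failure pattern the study describes:

```python
from math import sqrt

def prediction_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and Matthews correlation coefficient
    from binary prediction counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, mcc

# Hypothetical counts, not the study's data: every pathogenic variant is
# called, but a third of neutral variants are mislabeled pathogenic.
sens, spec, acc, mcc = prediction_metrics(tp=30, fp=40, fn=0, tn=80)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} MCC={mcc:.2f}")
```

The MCC is useful here precisely because it penalizes this lopsided behavior: the toy tool scores a respectable 0.73 accuracy but only a mediocre MCC.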
Yingyongyudha, Anyamanee; Saengsirisuwan, Vitoon; Panichaporn, Wanvisa; Boonsinsukh, Rumpa
2016-01-01
Balance deficits are a significant predictor of falls in older adults. The Balance Evaluation Systems Test (BESTest) and the Mini-Balance Evaluation Systems Test (Mini-BESTest) are tools that may predict the likelihood of a fall, but their capabilities and accuracy have not been adequately addressed. Therefore, this study aimed to examine the ability of the BESTest and Mini-BESTest to identify older adults with a history of falls, and to compare their identification accuracy with that of the Berg Balance Scale (BBS) and the Timed Up and Go Test (TUG). Two hundred healthy older adults (mean age, 70 years) were classified into groups with and without a history of falls on the basis of their 12-month fall history. Their balance abilities were assessed using the BESTest, Mini-BESTest, BBS, and TUG. An analysis of the resulting receiver operating characteristic curves was performed to calculate the area under the curve (AUC), sensitivity, specificity, cutoff score, and posttest accuracy of each tool. The Mini-BESTest showed the highest AUC (0.84), compared with the BESTest (0.74), BBS (0.69), and TUG (0.35), suggesting that the Mini-BESTest had the highest accuracy in identifying older adults with a history of falls. At a cutoff score of 16 (out of 28), the Mini-BESTest demonstrated a posttest accuracy of 85%, with a sensitivity of 85% and a specificity of 75%. The Mini-BESTest had the highest posttest accuracy, compared with 76% (BESTest), 60% (BBS), and 65% (TUG). The Mini-BESTest is thus the most accurate of these tools for identifying older adults with a history of falls.
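The ROC analysis described above — an AUC plus a cutoff balancing sensitivity and specificity — can be sketched on synthetic balance scores. The score distributions below are assumptions, not the study's data, and the Youden index is one common cutoff rule, not necessarily the one the authors used:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)

# Synthetic scores on a 0-28 scale (as on the Mini-BESTest); the assumed
# distributions make fallers score lower than non-fallers on average.
fallers = rng.normal(13, 4, 60).clip(0, 28)
non_fallers = rng.normal(21, 4, 140).clip(0, 28)
scores = np.concatenate([fallers, non_fallers])
is_faller = np.concatenate([np.ones(60), np.zeros(140)])

# Lower scores indicate fallers, so negate scores for the ROC analysis.
auc = roc_auc_score(is_faller, -scores)
fpr, tpr, thresholds = roc_curve(is_faller, -scores)
best = np.argmax(tpr - fpr)               # Youden index picks a cutoff
print(f"AUC={auc:.2f}, cutoff={-thresholds[best]:.0f}")
```

The same machinery, applied per instrument, yields the AUCs and cutoff scores that the study compares across the four balance tests.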
Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect
NASA Astrophysics Data System (ADS)
Chao, Chia-Chun George
2009-03-01
The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy can be as good as, in maximum error, 1 km along in-track, 0.3 km along radial and 0.1 km along cross-track up to 30 days. Similar accuracies can be expected when the object is tumbling as long as the rate of attitude change is different from the orbit rate. Results of this study reveal an important phenomenon that the solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.
Data and Tools | Concentrating Solar Power | NREL
Solar Power tower Integrated Layout and Optimization Tool (SolarPILOT™), available for download: the SolarPILOT code combines the rapid layout and optimization capability of the analytical DELSOL3 program with the accuracy and…
Day, Robert Dean; Foreman, Larry R.; Hatch, Douglas J.; Meadows, Mark S.
1998-01-01
There is provided an apparatus for machining surfaces to accuracies within the nanometer range by use of electrical current flow through the contact of the cutting tool with the workpiece as a feedback signal to control depth of cut.
Pepin, K.M.; Spackman, E.; Brown, J.D.; Pabilonia, K.L.; Garber, L.P.; Weaver, J.T.; Kennedy, D.A.; Patyk, K.A.; Huyvaert, K.P.; Miller, R.S.; Franklin, A.B.; Pedersen, K.; Bogich, T.L.; Rohani, P.; Shriner, S.A.; Webb, C.T.; Riley, S.
2014-01-01
Wild birds are the primary source of genetic diversity for influenza A viruses that eventually emerge in poultry and humans. Much progress has been made in the descriptive ecology of avian influenza viruses (AIVs), but contributions are less evident from quantitative studies (e.g., those including disease dynamic models). Transmission between host species, individuals and flocks has not been measured with sufficient accuracy to allow robust quantitative evaluation of alternate control protocols. We focused on the United States of America (USA) as a case study for determining the state of our quantitative knowledge of potential AIV emergence processes from wild hosts to poultry. We identified priorities for quantitative research that would build on existing tools for responding to AIV in poultry and concluded that the following knowledge gaps can be addressed with current empirical data: (1) quantification of the spatio-temporal relationships between AIV prevalence in wild hosts and poultry populations, (2) understanding how the structure of different poultry sectors impacts within-flock transmission, (3) determining mechanisms and rates of between-farm spread, and (4) validating current policy-decision tools with data. The modeling studies we recommend will improve our mechanistic understanding of potential AIV transmission patterns in USA poultry, leading to improved measures of accuracy and reduced uncertainty when evaluating alternative control strategies. PMID:24462191
A CT-based software tool for evaluating compensator quality in passively scattered proton therapy
NASA Astrophysics Data System (ADS)
Li, Heng; Zhang, Lifei; Dong, Lei; Sahoo, Narayan; Gillin, Michael T.; Zhu, X. Ronald
2010-11-01
We have developed a quantitative computed tomography (CT)-based quality assurance (QA) tool for evaluating the accuracy of manufactured compensators used in passively scattered proton therapy. The thickness of a manufactured compensator was measured from its CT images and compared with the planned thickness defined by the treatment planning system. The difference between the measured and planned thicknesses was calculated with use of the Euclidean distance transformation and the kd-tree search method. Compensator accuracy was evaluated by examining several parameters including mean distance, maximum distance, global thickness error and central axis shifts. Two rectangular phantoms were used to validate the performance of the QA tool. Nine patients and 20 compensators were included in this study. We found that mean distances, global thickness errors and central axis shifts were all within 1 mm for all compensators studied, with maximum distances ranging from 1.1 to 3.8 mm. Although all compensators passed manual verification at selected points, about 5% of the pixels still had maximum distances of >2 mm, most of which correlated with large depth gradients. The correlation between the mean depth gradient of the compensator and the percentage of pixels with mean distance <1 mm is -0.93 with p < 0.001, which suggests that the mean depth gradient is a good indicator of compensator complexity. These results demonstrate that the CT-based compensator QA tool can be used to quantitatively evaluate manufactured compensators.
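The measured-versus-planned surface comparison described above can be sketched as a nearest-neighbour distance computation. This is a hypothetical illustration, not the authors' tool: the point samples are invented, and a brute-force search stands in for the kd-tree search (scipy.spatial.cKDTree would be the usual choice at full CT resolution):

```python
import math

def nearest_distances(measured, planned):
    """For each measured surface point, the Euclidean distance to the
    closest planned surface point. Brute force stands in here for the
    kd-tree search used at scale."""
    return [min(math.dist(m, p) for p in planned) for m in measured]

# Invented (x, y, thickness) surface samples in mm, for illustration only.
planned  = [(0, 0, 10.0), (1, 0, 10.5), (0, 1, 11.0), (1, 1, 11.5)]
measured = [(0, 0, 10.2), (1, 0, 10.4), (0, 1, 11.6), (1, 1, 11.5)]

d = nearest_distances(measured, planned)
mean_d = sum(d) / len(d)
max_d = max(d)
print(f"mean distance {mean_d:.3f} mm, max distance {max_d:.3f} mm")
```

Summary statistics like these mean and maximum distances correspond to the kind of pass/fail parameters (e.g. mean distance within 1 mm) reported in the abstract.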
Gaber, Ramy M; Shaheen, Eman; Falter, Bart; Araya, Sebastian; Politis, Constantinus; Swennen, Gwen R J; Jacobs, Reinhilde
2017-11-01
The aim of this study was to systematically review methods used for assessing the accuracy of 3-dimensional virtually planned orthognathic surgery in an attempt to reach an objective assessment protocol that could be universally used. A systematic review of the currently available literature, published until September 12, 2016, was conducted using PubMed as the primary search engine. We performed secondary searches using the Cochrane Database, clinical trial registries, Google Scholar, and Embase, as well as a bibliography search. Included articles were required to have stated clearly that 3-dimensional virtual planning was used and accuracy assessment performed, along with validation of the planning and/or assessment method. Descriptive statistics and quality assessment of included articles were performed. The initial search yielded 1,461 studies. Only 7 studies were included in our review. Considerable variability was found in the methods used for 1) accuracy assessment of virtually planned orthognathic surgery or 2) validation of the tools used. Included studies were of moderate quality; reviewers' agreement regarding quality was calculated to be 0.5 using the Cohen κ test. On the basis of the findings of this review, it is evident that the literature lacks consensus regarding accuracy assessment. Hence, a protocol is suggested for assessing the accuracy of virtually planned orthognathic surgery with the lowest possible margin of error. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M
2009-05-02
Accurate mass determination enhances peptide identification in mass spectrometry-based proteomics. Here we describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second program, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
Effects of personal identifier resynthesis on clinical text de-identification.
Yeniterzi, Reyyan; Aberdeen, John; Bayer, Samuel; Wellner, Ben; Hirschman, Lynette; Malin, Bradley
2010-01-01
De-identified medical records are critical to biomedical research. Text de-identification software exists, including "resynthesis" components that replace real identifiers with synthetic identifiers. The goal of this research is to evaluate the effectiveness of, and examine possible bias introduced by, resynthesis in de-identification software. We evaluated the open-source MITRE Identification Scrubber Toolkit, which includes a resynthesis capability, with clinical text from Vanderbilt University Medical Center patient records. We investigated four record classes from over 500 patients' files, including laboratory reports, medication orders, discharge summaries and clinical notes. We trained and tested the de-identification tool on real and resynthesized records. We measured performance in terms of precision, recall, F-measure and accuracy for the detection of protected health identifiers as designated by the HIPAA Safe Harbor Rule. The de-identification tool was trained and tested on a collection of real and resynthesized Vanderbilt records. Results for training and testing on the real records were 0.990 accuracy and 0.960 F-measure. The results improved when trained and tested on resynthesized records, with 0.998 accuracy and 0.980 F-measure, but deteriorated moderately when trained on real records and tested on resynthesized records, with 0.989 accuracy and 0.862 F-measure. Moreover, the results declined significantly when trained on resynthesized records and tested on real records, with 0.942 accuracy and 0.728 F-measure. The de-identification tool achieves high accuracy when training and test sets are homogeneous (i.e., both real or both resynthesized records). The resynthesis component regularizes the data to make them less "realistic," resulting in loss of performance, particularly when training on resynthesized data and testing on real data.
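The evaluation metrics named above follow their standard token-level definitions; a minimal sketch with invented gold and system labels (not the MITRE toolkit's actual output) is:

```python
def phi_detection_metrics(gold, predicted):
    """Token-level precision, recall, F-measure and accuracy for PHI tagging.
    gold/predicted: sequences of 1 (PHI token) / 0 (non-PHI token)."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)
    tn = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / len(gold)
    return precision, recall, f_measure, accuracy

# Invented gold and system labels for ten tokens, illustration only.
gold      = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(phi_detection_metrics(gold, predicted))
```

Because most tokens in clinical text are not PHI, accuracy tends to look high even when F-measure drops, which is why the abstract reports both.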
A Software Upgrade of the NASA Aeroheating Code "MINIVER"
NASA Technical Reports Server (NTRS)
Louderback, Pierce Mathew
2013-01-01
Computational Fluid Dynamics (CFD) is a powerful and versatile tool for simulating the fluid and thermal environments of launch and re-entry vehicles alike. Where it excels in power and accuracy, however, it lacks in speed. An alternative tool for this purpose is MINIVER, an aeroheating code widely used by NASA and within the aerospace industry. Capable of providing swift, reasonably accurate approximations of the fluid and thermal environment of launch vehicles, MINIVER is used where time is of the essence and accuracy need not be exact. However, MINIVER is an aging tool: running on a user-unfriendly, legacy command-line interface, it struggles to keep pace with more modern software tools. Florida Institute of Technology was tasked with the construction of a new Graphical User Interface (GUI) that implemented the legacy version's capabilities and enhanced them with new tools and utilities. This thesis provides background on the legacy version of the program, the progression and final version of a modern user interface, and benchmarks to demonstrate its usefulness.
NASA Astrophysics Data System (ADS)
Sousa, Andre R.; Schneider, Carlos A.
2001-09-01
A touch probe mounted on a 3-axis vertical machining center is used to check a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table in the CNC, enhancing the machine's accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system with regard to reliability, cost and time efficiency.
Jin, Ting; Fei, Baoying; Zhang, Yu; He, Xujun
2017-01-01
Intestinal tuberculosis (ITB) and Crohn's disease (CD) are important differential diagnoses that can be difficult to distinguish. Polymerase chain reaction (PCR) for Mycobacterium tuberculosis (MTB) is an efficient and promising tool. This meta-analysis was performed to systematically and objectively assess the potential diagnostic accuracy and clinical value of PCR for MTB in distinguishing ITB from CD. We searched PubMed, Embase, Web of Science, Science Direct, and the Cochrane Library for eligible studies, and nine articles with 12 groups of data were identified. The included studies were subjected to quality assessment using the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The summary estimates were as follows: sensitivity 0.47 (95% CI: 0.42-0.51); specificity 0.95 (95% CI: 0.93-0.97); the positive likelihood ratio (PLR) 10.68 (95% CI: 6.98-16.35); the negative likelihood ratio (NLR) 0.49 (95% CI: 0.33-0.71); and diagnostic odds ratio (DOR) 21.92 (95% CI: 13.17-36.48). The area under the curve (AUC) was 0.9311, with a Q* value of 0.8664. Heterogeneity was found in the NLR. The heterogeneity of the studies was evaluated by meta-regression analysis and subgroup analysis. The current evidence suggests that PCR for MTB is a promising and highly specific diagnostic method to distinguish ITB from CD. However, physicians should also keep in mind that negative results cannot exclude ITB for its low sensitivity. Additional prospective studies are needed to further evaluate the diagnostic accuracy of PCR.
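For reference, the likelihood ratios and diagnostic odds ratio relate to sensitivity and specificity in a standard way; the sketch below plugs in the pooled estimates from the abstract. Note that values pooled across studies by a meta-analytic model need not equal values computed directly from pooled sensitivity and specificity, which is why the printed numbers differ from the summary estimates quoted above:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and the diagnostic odds ratio
    implied by a given sensitivity and specificity."""
    plr = sensitivity / (1 - specificity)        # P(test+|disease) / P(test+|no disease)
    nlr = (1 - sensitivity) / specificity        # P(test-|disease) / P(test-|no disease)
    dor = plr / nlr                              # equivalently (TP*TN)/(FP*FN)
    return plr, nlr, dor

# Pooled estimates from the abstract: sensitivity 0.47, specificity 0.95.
plr, nlr, dor = likelihood_ratios(0.47, 0.95)
print(round(plr, 2), round(nlr, 2), round(dor, 1))
```

The high PLR implied by 95% specificity is what supports the abstract's rule-in conclusion, while the NLR near 0.5 shows why a negative PCR cannot exclude ITB.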
Evaluation of the nutrition screening tool for childhood cancer (SCAN).
Murphy, Alexia J; White, Melinda; Viani, Karina; Mosby, Terezie T
2016-02-01
Malnutrition is a serious concern for children with cancer, and nutrition screening may offer a simple alternative to nutrition assessment for identifying children with cancer who are at risk of malnutrition. The present paper aimed to evaluate the nutrition screening tool for childhood cancer (SCAN). SCAN was developed after an extensive review of currently available tools and published screening recommendations, consideration of pediatric oncology nutrition guidelines, piloting of questions, and consultation with members of the International Pediatric Oncology Nutrition Group. In Study 1, the accuracy and validity of SCAN against the pediatric subjective global nutrition assessment (pediatric SGNA) was determined. In Study 2, subjects were classified as 'at risk of malnutrition' and 'not at risk of malnutrition' according to SCAN, and measures of height, weight, body mass index (BMI) and body composition were compared between the groups. The validation of SCAN against pediatric SGNA showed SCAN had 'excellent' accuracy (0.90, 95% CI 0.78-1.00; p < 0.001), 100% sensitivity, 39% specificity, 56% positive predictive value and 100% negative predictive value. When subjects in Study 2 were classified into 'at risk of malnutrition' and 'not at risk of malnutrition' groups according to SCAN, the 'at risk of malnutrition' group had significantly lower values for weight Z score (p = 0.001), BMI Z score (p = 0.001) and fat mass index (FMI) (p = 0.04) than the 'not at risk of malnutrition' group. This study shows that SCAN is a simple, quick and valid tool which can be used to identify children with cancer who are at risk of malnutrition. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Evaluating the Utility of Web-Based Consumer Support Tools Using Rough Sets
NASA Astrophysics Data System (ADS)
Maciag, Timothy; Hepting, Daryl H.; Slezak, Dominik; Hilderman, Robert J.
On the Web, many popular e-commerce sites provide consumers with decision support tools to assist them in their commerce-related decision-making. Many consumers rank the utility of these tools quite highly. Data obtained from web usage mining analyses, which may provide knowledge about a user's online experiences, could help indicate the utility of these tools. This type of analysis could provide insight into whether the provided tools are adequately assisting consumers in conducting their online shopping activities, or whether new or additional enhancements need consideration. Although some research in this regard has been described in previous literature, there is still much that can be done. The authors of this paper hypothesize that a measurement of consumer decision accuracy, i.e., a measurement of how well consumers' selections match their preferences, could help indicate the utility of these tools. This paper describes a procedure developed towards this goal using elements of rough set theory. The authors evaluated the procedure using two support tools, one based on a tool developed by the US-EPA and the other, called cogito, developed by one of the authors. Results from the evaluation provided interesting insights into the utility of both support tools. Although the cogito tool obtained slightly higher decision accuracy, both tools could be improved through additional enhancements. Details of the procedure developed and results obtained from the evaluation are provided. Opportunities for future work are also discussed.
Prehospital lung ultrasound for the diagnosis of cardiogenic pulmonary oedema: a pilot study.
Laursen, Christian B; Hänselmann, Anja; Posth, Stefan; Mikkelsen, Søren; Videbæk, Lars; Berg, Henrik
2016-08-02
An improved prehospital diagnostic accuracy for cardiogenic pulmonary oedema could potentially improve initial treatment, triage, and outcome. A pilot study was conducted to assess the feasibility, time use, and diagnostic accuracy of prehospital lung ultrasound (PLUS) for the diagnosis of cardiogenic pulmonary oedema. A prospective observational study was conducted in a prehospital setting. Patients were included if the physician-based prehospital mobile emergency care unit was activated and one or both of the following were present: respiratory rate >30/min, oxygen saturation <90%. Exclusion criteria were: age <18 years, permanent mental disability, or PLUS causing a delay in life-saving treatment or transportation. Following clinical assessment, PLUS was performed and the presence or absence of interstitial syndrome was registered. Audit by three physicians using predefined diagnostic criteria for cardiogenic pulmonary oedema was used as the gold standard. A total of 40 patients were included in the study. Feasibility of PLUS was 100% and the median time used was 3 min. The gold standard diagnosed 18 (45.0%) patients with cardiogenic pulmonary oedema. The diagnostic accuracy of PLUS for the diagnosis of cardiogenic pulmonary oedema was: sensitivity 94.4% (95% confidence interval (CI) 72.7-99.9%), specificity 77.3% (95% CI 54.6-92.2%), positive predictive value 77.3% (95% CI 54.6-92.2%), negative predictive value 94.4% (95% CI 72.7-99.9%). The sensitivity of PLUS is high, making it a potential tool for ruling out cardiogenic pulmonary oedema. The observed specificity was lower than what has been described in previous studies. Performed as part of a physician-based prehospital emergency service, PLUS seems fast and highly feasible in patients with respiratory failure. Given its diagnostic accuracy, PLUS may have potential as a prehospital tool, especially to rule out cardiogenic pulmonary oedema.
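Confidence intervals like those quoted above are routinely attached to sensitivity and specificity estimates. The sketch below uses the Wilson score interval; the authors' exact method is not stated (small-sample exact intervals are also common), and the counts are invented to be consistent with the reported 94.4% sensitivity:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion such as sensitivity
    (z=1.96 gives the usual 95% level)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# E.g. 17 of 18 pulmonary-oedema patients detected (invented counts
# consistent with the reported 94.4% sensitivity on 18 positives).
lo, hi = wilson_interval(17, 18)
print(f"95% CI: {lo:.3f}-{hi:.3f}")
```

With only 18 positive patients, the interval is wide, which matches the broad 72.7-99.9% range reported in this pilot study.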
Radanovic, Marcia; Facco, Giuliana; Forlenza, Orestes V
2018-05-01
To create a reduced and briefer version of the widely used Cambridge Cognitive Examination (CAMCog) battery as a concise cognitive test to be used at primary and secondary levels of health care to detect cognitive decline. Our aim was to reduce the administration time of the original test while maintaining its diagnostic accuracy. On the basis of the analysis of 835 CAMCog tests performed by 429 subjects (107 controls, 192 mild cognitive impairment [MCI], and 130 dementia patients), we extracted the items that most contributed to intergroup differentiation, according to 2 educational levels (≤8 and >8 y of formal schooling). The final 33-item "low education" and 24-item "high education" versions of the CAMCog-Short correspond to 48.5% and 35% of the original version, respectively, and yielded similar rates of accuracy: area under ROC curves (AUC) > 0.9 in the differentiation between controls × dementia and MCI × dementia (sensitivities > 75%; specificities > 90%); AUC > 0.7 for the differentiation between controls and MCI (sensitivities > 65%; specificities > 75%). The CAMCog-Short emerges as a promising, brief, yet sufficiently accurate screening tool for use in clinical settings. Further prospective studies designed to validate its diagnostic accuracy are needed. Copyright © 2018 John Wiley & Sons, Ltd.
Labanca, Ludimila; Guimarães, Fernando Sales; Costa-Guarisco, Letícia Pimenta; Couto, Erica de Araújo Brandão; Gonçalves, Denise Utsch
2017-11-01
Given the high prevalence of presbycusis and its detrimental effect on quality of life, screening tests can be useful tools for detecting hearing loss in primary care settings. This study therefore aimed to determine the accuracy and reproducibility of the whispered voice test as a screening method for detecting hearing impairment in older people. This cross-sectional study was carried out with 210 older adults aged between 60 and 97 years who underwent the whispered voice test employing ten different phrases and using audiometry as a reference test. Sensitivity, specificity and positive and negative predictive values were calculated and accuracy was measured by calculating the area under the ROC curve. The test was repeated on 20% of the ears by a second examiner to assess inter-examiner reproducibility (IER). The words and phrases that showed the highest area under the curve (AUC) and IER values were: "shoe" (AUC = 0.918; IER = 0.877), "window" (AUC = 0.917; IER = 0.869), "it looks like it's going to rain" (AUC = 0.911; IER = 0.810), and "the bus is late" (AUC = 0.900; IER = 0.810), demonstrating that the whispered voice test is a useful screening tool for detecting hearing loss among older people. It is proposed that these words and phrases should be incorporated into the whispered voice test protocol.
Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
Rubin, Katrine Hass; Friis-Holmberg, Teresa; Hermann, Anne Pernille; Abrahamsen, Bo; Brixen, Kim
2013-08-01
A huge number of risk assessment tools have been developed. Far from all have been validated in external studies, many lack methodological and transparent evidence, and few are integrated into national guidelines. Therefore, we performed a systematic review to provide an overview of existing valid and reliable risk assessment tools for prediction of osteoporotic fractures. Additionally, we aimed to determine whether the performance of each tool was sufficient for practical use and, last, to examine whether the complexity of the tools influenced their discriminative power. We searched PubMed, Embase, and Cochrane databases for papers and evaluated these with respect to methodological quality using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) checklist. A total of 48 tools were identified; 20 had been externally validated, but only six tools had been tested more than once in a population-based setting with acceptable methodological quality. None of the tools performed consistently better than the others, and simple tools (i.e., the Osteoporosis Self-assessment Tool [OST], Osteoporosis Risk Assessment Instrument [ORAI], and Garvan Fracture Risk Calculator [Garvan]) often did as well as or better than more complex tools (i.e., Simple Calculated Risk Estimation Score [SCORE], WHO Fracture Risk Assessment Tool [FRAX], and Qfracture). No studies determined the effectiveness of tools in selecting patients for therapy and thus improving fracture outcomes. High-quality studies in randomized design with population-based cohorts with different case mixes are needed. Copyright © 2013 American Society for Bone and Mineral Research.
Conser, Christiana; Seebacher, Lizbeth; Fujino, David W; Reichard, Sarah; DiTomaso, Joseph M
2015-01-01
Weed Risk Assessment (WRA) methods for evaluating invasiveness in plants have evolved rapidly in the last two decades. Many WRA tools exist, but none were specifically designed to screen ornamental plants prior to being released into the environment. To be accepted as a tool to evaluate ornamental plants for the nursery industry, it is critical that a WRA tool accurately predicts non-invasiveness without falsely categorizing them as invasive. We developed a new Plant Risk Evaluation (PRE) tool for ornamental plants. The 19 questions in the final PRE tool were narrowed down from 56 original questions from existing WRA tools. We evaluated the 56 WRA questions by screening 21 known invasive and 14 known non-invasive ornamental plants. After statistically comparing the predictability of each question and the frequency the question could be answered for both invasive and non-invasive species, we eliminated questions that provided no predictive power, were irrelevant in our current model, or could not be answered reliably at a high enough percentage. We also combined many similar questions. The final 19 remaining PRE questions were further tested for accuracy using 56 additional known invasive plants and 36 known non-invasive ornamental species. The resulting evaluation demonstrated that when "needs further evaluation" classifications were not included, the accuracy of the model was 100% for both predicting invasiveness and non-invasiveness. When "needs further evaluation" classifications were included as either false positive or false negative, the model was still 93% accurate in predicting invasiveness and 97% accurate in predicting non-invasiveness, with an overall accuracy of 95%. We conclude that the PRE tool should not only provide growers with a method to accurately screen their current stock and potential new introductions, but also increase the probability of the tool being accepted for use by the industry as the basis for a nursery certification program.
PMID:25803830
Burnos, Piotr; Rys, Dawid
2017-01-01
Weigh-in-Motion systems are tools to prevent road pavements from the adverse phenomena of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving weighing accuracy, which is now insufficient for direct enforcement of overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in the flexible (asphalt) pavements depends on pavement temperature and vehicle speeds. Although this is a known phenomenon, it has not been explained yet. The aim of our study is to fill this gap in the knowledge. The explanation of this phenomena which is presented in the paper is based on pavement/sensors mechanics and the application of the multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important in terms of Weigh-in-Motion systems for direct enforcement and will help to improve the weighing results accuracy. PMID:28880215
An experimental method for the assessment of color simulation tools.
Lillo, Julio; Alvaro, Leticia; Moreira, Humberto
2014-07-22
The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method. © 2014 ARVO.
The NASA Constellation University Institutes Project: Thrust Chamber Assembly Virtual Institute
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Rybak, Jeffry A.; Hulka, James R.; Jones, Gregg W.; Nesman, Tomas; West, Jeffrey S.
2006-01-01
This paper documents key aspects of the Constellation University Institutes Project (CUIP) Thrust Chamber Assembly (TCA) Virtual Institute (VI). Specifically, the paper details the TCA VI organizational and functional aspects relative to providing support for Constellation Systems. The TCA VI vision is put forth and discussed in detail. The vision provides the objective and approach for improving thrust chamber assembly design methodologies by replacing the current empirical tools with verified and validated CFD codes. The vision also sets out ignition, performance, thermal environments and combustion stability as focus areas where application of these improved tools is required. Flow physics and a study of the Space Shuttle Main Engine development program are used to conclude that the injector is the key to robust TCA design. Requirements are set out in terms of fidelity, robustness and demonstrated accuracy of the design tool. Lack of demonstrated accuracy is noted as the most significant obstacle to realizing the potential of CFD to be widely used as an injector design tool. A hierarchical decomposition process is outlined to facilitate the validation process. A simulation readiness level tool used to gauge progress toward the goal is described. Finally, there is a description of the current efforts in each focus area. The background of each focus area is discussed. The state of the art in each focus area is noted along with the TCA VI research focus in the area. Brief highlights of work in the area are also included.
Quantifying cannabis: A field study of marijuana quantity estimation.
Prince, Mark A; Conner, Bradley T; Pearson, Matthew R
2018-06-01
The assessment of marijuana use quantity poses unique challenges. These challenges have limited research efforts on quantity assessments. However, quantity estimates are critical to detecting associations between marijuana use and outcomes. We examined accuracy of marijuana users' estimations of quantities of marijuana they prepared to ingest and predictors of both how much was prepared for a single dose and the degree of (in)accuracy of participants' estimates. We recruited a sample of 128 regular-to-heavy marijuana users for a field study wherein they prepared and estimated quantities of marijuana flower in a joint or a bowl as well as marijuana concentrate using a dab tool. The vast majority of participants overestimated the quantity of marijuana that they used in their preparations. We failed to find robust predictors of estimation accuracy. Self-reported quantity estimates are inaccurate, which has implications for studying the link between quantity and marijuana use outcomes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A scrutiny of tools used for assessment of hospital disaster preparedness in Iran.
Heidaranlu, Esmail; Ebadi, Abbas; Ardalan, Ali; Khankeh, Hamidreza
2015-01-01
In emergencies and disasters, hospitals are among the first and most vital organizations involved. To determine a hospital's preparedness to deal with a crisis, the health system requires tools compatible with the type of crisis. The present study aimed to evaluate the accuracy of tools used to assess hospital preparedness for major emergencies and disasters in Iran. In this review, all studies conducted on hospital disaster preparedness in Iran between 2000 and 2015 were examined. World Health Organization (WHO) criteria were used to assess the focus of studies for inclusion. Of the 36 articles obtained, 28 that met the inclusion criteria were analyzed. In accordance with WHO standards, the focus of the tools was examined in three areas: structural, nonstructural, and functional. In the nonstructural area, the preparedness tools focused most on medical gases and least on office and storeroom furnishings and equipment. In the functional area, the focus was greatest on the operational plan and least on business continuity. Half of the tools in domestic studies treated structural safety as an indicator of hospital preparedness. The study showed that the tools in use contain only a few of the indicators approved by the WHO, especially in the functional area, and that a standard indigenous tool is lacking. Thus, to assess hospital disaster preparedness, the national health system requires new tools designed according to scientific tool-design principles, to enable a more accurate prediction of hospital preparedness before disasters occur.
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problem of low machining accuracy caused by uncontrollable thermal errors in NC machine tools, this paper researches spindle thermal error measurement, modeling, and compensation for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and grey relational analysis (GRA) is introduced to select the temperature variables used for thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an artificial neural network (ANN) model is presented, and the artificial bee colony (ABC) algorithm is introduced to train the link weights, yielding a new ABC-NN (artificial bee colony-based neural network) modeling method that is used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system was developed, and the predictions of least squares regression (LSR), ANN, and ABC-NN were compared with the measured spindle thermal errors. The results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, demonstrating that the new modeling method is feasible. The research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
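The GRA-based selection of temperature variables amounts to ranking each sensor's temperature series by how closely it tracks the measured thermal error. A minimal sketch, with all readings and sensor names invented for illustration (not the paper's data):

```python
def normalize(series):
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def grey_relational_grade(reference, candidate, rho=0.5):
    """Grade in (0, 1]; higher means the candidate tracks the reference better."""
    ref, cand = normalize(reference), normalize(candidate)
    deltas = [abs(r - c) for r, c in zip(ref, cand)]
    d_min, d_max = min(deltas), max(deltas)
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

thermal_error = [2, 5, 9, 14, 18]               # spindle drift in micrometres (invented)
sensors = {
    "T1_spindle_front": [21, 24, 29, 32, 37],   # rises with the error (invented)
    "T2_machine_bed":   [20, 20, 21, 21, 22],   # nearly flat (invented)
}
ranking = sorted(sensors,
                 key=lambda s: grey_relational_grade(thermal_error, sensors[s]),
                 reverse=True)
print(ranking)  # → ['T1_spindle_front', 'T2_machine_bed']
```

Sensors at the top of the ranking would be kept as model inputs; the rest are discarded to keep the ANN small.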
A Complete and Accurate Ab Initio Repeat Finding Algorithm.
Lian, Shuaibin; Chen, Xinwu; Wang, Peng; Zhang, Xiaoli; Dai, Xianhua
2016-03-01
It has become clear that repetitive sequences have played multiple roles in eukaryotic genome evolution, including increasing genetic diversity through mutation, changing gene expression, and facilitating the generation of novel genes. However, identifying repetitive elements ab initio is difficult. Several classical ab initio repeat-finding tools have been presented and compared, but their completeness and accuracy in detecting repeats are poor. To this end, we propose a new ab initio repeat-finding tool, named HashRepeatFinder, based on hash indexing and word counting. We assessed the performance of HashRepeatFinder against two well-known tools, RepeatScout and Repeatfinder, on human genome data (hg19). The results support three conclusions: (1) HashRepeatFinder has the best completeness of the three tools in almost all chromosomes, especially chr9 (8 times that of RepeatScout, 10 times that of Repeatfinder); (2) in detecting large repeats, HashRepeatFinder also performed best in all chromosomes, especially chr3 (24 times RepeatScout and 250 times Repeatfinder) and chr19 (12 times RepeatScout and 60 times Repeatfinder); (3) HashRepeatFinder can merge abundant repeats with high accuracy.
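The hash-index-and-word-counting idea behind such tools reduces to building a hash table of k-mer positions and keeping the k-mers that occur more than once. A toy sketch of the general technique (parameter names are illustrative, not HashRepeatFinder's actual implementation):

```python
from collections import defaultdict

def find_repeats(genome, k=4, min_copies=2):
    """Return k-mers occurring at least min_copies times, with their positions."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)   # hash table keyed by k-mer
    return {kmer: pos for kmer, pos in index.items() if len(pos) >= min_copies}

seq = "ACGTACGTTTACGT"
repeats = find_repeats(seq, k=4)
print(repeats)  # → {'ACGT': [0, 4, 10], 'TACG': [3, 9]}
```

Real tools add seed extension and merging of overlapping occurrences on top of this counting core.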
The Web as a Reference Tool: Comparisons with Traditional Sources.
ERIC Educational Resources Information Center
Janes, Joseph; McClure, Charles R.
1999-01-01
This preliminary study suggests that the same level of timeliness and accuracy can be obtained for answers to reference questions using resources in freely available World Wide Web sites as with traditional print-based resources. Discusses implications for library collection development, new models of consortia, training needs, and costing and…
The hidden KPI registration accuracy.
Shorrosh, Paul
2011-09-01
Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.
Pre-Then-Post Testing: A Tool To Improve the Accuracy of Management Training Program Evaluation.
ERIC Educational Resources Information Center
Mezoff, Bob
1981-01-01
Explains a procedure to avoid the detrimental biases of conventional self-reports of training outcomes. The evaluation format provided is a method for using statistical procedures to increase the accuracy of self-reports by overcoming response-shift-bias. (Author/MER)
Subjective Appraisal as a Feedback Tool. Technical Report 604.
ERIC Educational Resources Information Center
Burnside, Billy L.
This report examines the accuracy of subjective appraisals of several aspects of task performance, including proficiency, difficulty, frequency, and criticality. An introduction discusses current Army use of subjective appraisal, feedback methods, and problems with subjective appraisal. Data pertaining to the accuracy of various types of appraisal…
Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools
NASA Astrophysics Data System (ADS)
Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu
2018-03-01
Thermal error is the main factor affecting the accuracy of precision machining. Given that thermal error is a current research focus for machine tools, this paper experimentally studies thermal error testing and intelligent modeling for the spindle of a vertical high-speed CNC machine tool. Several thermal error testing devices are designed, in which 7 temperature sensors measure the temperature of the spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inverse prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network technology.
Diaphragm and Lung Ultrasound to Predict Weaning Outcome: Systematic Review and Meta-Analysis.
Llamas-Álvarez, Ana M; Tenza-Lozano, Eva M; Latour-Pérez, Jaime
2017-12-01
Deciding the optimal timing for extubation in patients who are mechanically ventilated can be challenging, and traditional weaning predictor tools are not very accurate. The aim of this systematic review and meta-analysis was to assess the accuracy of lung and diaphragm ultrasound for predicting weaning outcomes in critically ill adults. MEDLINE, the Cochrane Library, Web of Science, Scopus, LILACS, Teseo, Tesis Doctorales en Red, and OpenGrey were searched, and the bibliographies of relevant studies were reviewed. Two researchers independently selected studies that met the inclusion criteria and assessed study quality in accordance with the Quality Assessment of Diagnostic Accuracy Studies-2 tool. The summary receiver-operating characteristic curve and pooled diagnostic OR (DOR) were estimated by using a bivariate random effects analysis. Sources of heterogeneity were explored by using predefined subgroup analyses and bivariate meta-regression. Nineteen studies involving 1,071 people were included in the study. For diaphragm thickening fraction, the area under the summary receiver-operating characteristic curve was 0.87, and DOR was 21 (95% CI, 11-40). Regarding diaphragmatic excursion, pooled sensitivity was 75% (95% CI, 65-85); pooled specificity, 75% (95% CI, 60-85); and DOR, 10 (95% CI, 4-24). For lung ultrasound, the area under the summary receiver-operating characteristic curve was 0.77, and DOR was 38 (95% CI, 7-198). Based on bivariate meta-regression analysis, a significantly higher specificity for diaphragm thickening fraction and higher sensitivity for diaphragmatic excursion was detected in studies with applicability concerns. Lung and diaphragm ultrasound can help predict weaning outcome, but its accuracy may vary depending on the patient subpopulation. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
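The per-study inputs to such a meta-analysis are each study's sensitivity, specificity, and diagnostic odds ratio (DOR), all derived from a 2x2 table. A minimal sketch with invented counts; the bivariate random-effects pooling itself requires a dedicated statistics package and is not reproduced here:

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity, and DOR from one study's 2x2 table."""
    sens = tp / (tp + fn)          # true-positive rate
    spec = tn / (tn + fp)          # true-negative rate
    dor = (tp * tn) / (fp * fn)    # equals (sens/(1-sens)) / ((1-spec)/spec)
    return sens, spec, dor

# Invented counts for a single hypothetical weaning-prediction study.
sens, spec, dor = diagnostic_summary(tp=45, fp=10, fn=15, tn=30)
print(round(sens, 2), round(spec, 2), round(dor, 1))  # → 0.75 0.75 9.0
```

A DOR of 21 (as pooled for diaphragm thickening fraction) means the odds of a positive test are 21 times higher in patients who wean successfully than in those who do not.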
The Accuracy of Preoperative Rigid Stroboscopy in the Evaluation of Voice Disorders in Children.
Mansour, Jobran; Amir, Ofer; Sagiv, Doron; Alon, Eran E; Wolf, Michael; Primov-Fever, Adi
2017-07-01
Stroboscopy is considered the most appropriate tool for evaluating vocal fold function but may have significant limitations in children. Direct laryngoscopy (DL), under general anesthesia, is still regarded as the gold standard for establishing a diagnosis of vocal fold pathology. The aim of this retrospective study was to examine the accuracy of preoperative rigid stroboscopy in children with voice disorders. The study was conducted on a cohort of 39 children with dysphonia, aged 4 to 18 years, who underwent DL. Twenty-six children underwent rigid stroboscopy (RS) prior to surgery and 13 underwent fiber-optic laryngoscopy. The preoperative diagnoses were matched with intraoperative (DL) findings. DL contradicted the preoperative evaluation in 20 of 39 children (51%) and in 26 of 53 findings (49%). Compared to DL, RS overdiagnosed cysts and underdiagnosed sulci. The overall accuracy of RS was 64%, similar to previous reports in adults. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
MicroRNA based Pan-Cancer Diagnosis and Treatment Recommendation.
Cheerla, Nikhil; Gevaert, Olivier
2017-01-13
The current state-of-the-art in cancer diagnosis and treatment is not ideal; diagnostic tests are accurate but invasive, and treatments are "one-size-fits-all" instead of being personalized. Recently, miRNAs have garnered significant attention as cancer biomarkers, owing to their ease of access (circulating miRNA in the blood) and stability. Many studies have shown the effectiveness of miRNA data in diagnosing specific cancer types, but few explore the role of miRNA in predicting treatment outcome. Here we go a step further, using tissue miRNA and clinical data across 21 cancers from The Cancer Genome Atlas (TCGA) database. We use machine learning techniques to create an accurate pan-cancer diagnosis system and a prediction model for treatment outcomes. Finally, using these models, we create a web-based tool that diagnoses cancer and recommends the best treatment options. We achieved 97.2% accuracy for classification using a support vector machine classifier with a radial basis function kernel. The accuracies improved to 99.9-100% when climbing up the embryonic tree and classifying cancers at different stages. We define accuracy as the ratio of the number of instances correctly classified to the total number of instances. The classifier also performed well, achieving greater than 80% sensitivity for many cancer types on independent validation datasets. Many miRNAs selected by our feature selection algorithm had strong previous associations with various cancers and tumor progression. Then, using miRNA, clinical, and treatment data encoded in a machine-learning-readable format, we built a prognosis predictor model to predict treatment outcome with 85% accuracy. We used this model to create a tool that recommends personalized treatment regimens. Both the diagnosis and prognosis models, which incorporate semi-supervised learning techniques to improve their accuracies with repeated use, were uploaded online for easy access.
Our research is a step towards the final goal of diagnosing cancer and predicting treatment recommendations using non-invasive blood tests.
Frequency Response Studies using Receptance Coupling Approach in High Speed Spindles
NASA Astrophysics Data System (ADS)
Shaik, Jakeer Hussain; Ramakotaiah, K.; Srinivas, J.
2018-01-01
To assess the stability of high-speed machining, estimating the frequency response at the tool tip is of great importance, but evaluating the dynamic response of many combinations of spindle, tool holder, and tool is time-consuming. This paper presents the coupled-field dynamic response at the tool tip for the entire integrated spindle-tool unit. The spindle unit is assumed to be supported on front and rear bearings and is analyzed using Timoshenko beam theory to derive the receptances at different locations along the spindle-tool unit. The responses are validated against a conventional finite element model and against experiments. This approach produces quick results without losing solution accuracy, and these methods are then used to analyze the effect of various design variables on the system dynamics. The results of this analysis support the design of a better spindle unit that reduces frequency amplitudes at the tool tip and improves milling stability during cutting.
McFarlane, Judith; Pennings, Jacquelyn; Liu, Fuqin; Gilroy, Heidi; Nava, Angeles; Maddoux, John A; Montalvo-Liendo, Nora; Paulson, René
2016-02-01
To develop a tool to predict risk for return to a shelter, 150 women with children, exiting a domestic violence shelter, were evaluated every 4 months for 24 months to determine risk factors for returning to a shelter. The study identified four risk factors, including danger for murder, woman's age (i.e., older women), tangible support (i.e., access to money, transportation), and child witness to verbal abuse of the mother. An easy to use, quick triage tool with a weighted score was derived, which can identify with 90% accuracy abused women with children most likely to return to shelters. © The Author(s) 2015.
Percutaneous spinal fixation simulation with virtual reality and haptics.
Luciano, Cristian J; Banerjee, P Pat; Sorenson, Jeffery M; Foley, Kevin T; Ansari, Sameer A; Rizzi, Silvio; Germanwala, Anand V; Kranzler, Leonard; Chittiboina, Prashant; Roitberg, Ben Z
2013-01-01
In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. To evaluate the learning effectiveness in terms of entry point/target point accuracy of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. Ten of 126 needle placement attempts by 63 participants ended in failure for a failure rate of 7.93%. From all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. Performance accuracy yielded P = .04 from a 2-sample t test in which the rejected null hypothesis assumes no improvement in performance accuracy from the first to second attempt in the test session. The experiments showed evidence (P = .04) of performance accuracy improvement from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.
Winterhalter, Wade E.
2011-09-01
Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
Petta, S; Wong, V W-S; Cammà, C; Hiriart, J-B; Wong, G L-H; Vergniol, J; Chan, A W-H; Di Marco, V; Merrouche, W; Chan, H L-Y; Marra, F; Le-Bail, B; Arena, U; Craxì, A; de Ledinghen, V
2017-09-01
The accuracy of available non-invasive tools for staging severe fibrosis in patients with nonalcoholic fatty liver disease (NAFLD) is still limited. To assess the diagnostic performance of paired or serial combination of non-invasive tools in NAFLD patients. We analysed data from 741 patients with a histological diagnosis of NAFLD. The GGT/PLT, APRI, AST/ALT, BARD, FIB-4, and NAFLD Fibrosis Score (NFS) scores were calculated according to published algorithms. Liver stiffness measurement (LSM) was performed by FibroScan. LSM, NFS and FIB-4 were the best non-invasive tools for staging F3-F4 fibrosis (AUC 0.863, 0.774, and 0.792, respectively), with LSM having the highest sensitivity (90%), and the highest NPV (94%), and NFS and FIB-4 the highest specificity (97% and 93%, respectively), and the highest PPV (73% and 79%, respectively). The paired combination of LSM or NFS with FIB-4 strongly reduced the likelihood of wrongly classified patients (ranging from 2.7% to 2.6%), at the price of a high uncertainty area (ranging from 54.1% to 58.2%), and of a low overall accuracy (ranging from 43% to 39.1%). The serial combination with the second test used in patients in the grey area of the first test and in those with high LSM values (>9.6 kPa) or low NFS or FIB-4 values (<-1.455 and <1.30, respectively) overall increased the diagnostic performance generating an accuracy ranging from 69.8% to 70.1%, an uncertainty area ranging from 18.9% to 20.4% and a rate of wrong classification ranging from 9.2% to 11.3%. The serial combination of LSM with FIB-4/NFS has a good diagnostic accuracy for the non-invasive diagnosis of severe fibrosis in NAFLD. © 2017 John Wiley & Sons Ltd.
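The serial two-step logic can be sketched as a simple decision rule. The LSM and FIB-4 cutoffs below follow the abstract (>9.6 kPa suggests F3-F4; FIB-4 <1.30 argues against it); the remaining grey-zone bounds are invented for illustration and do not reproduce the paper's exact algorithm:

```python
def serial_rule(lsm_kpa, fib4):
    """First test: LSM. Only grey-zone results fall through to FIB-4."""
    if lsm_kpa > 9.6:
        return "severe fibrosis likely"
    if lsm_kpa < 7.0:                  # illustrative lower LSM bound
        return "severe fibrosis unlikely"
    # Grey zone of the first test: apply the second test.
    if fib4 < 1.30:
        return "severe fibrosis unlikely"
    if fib4 > 2.67:                    # illustrative upper FIB-4 bound
        return "severe fibrosis likely"
    return "indeterminate"

print(serial_rule(11.2, 1.1))  # → severe fibrosis likely
print(serial_rule(8.0, 0.9))   # → severe fibrosis unlikely
```

The trade-off the abstract quantifies is visible in the structure: each threshold added shrinks the "indeterminate" bucket at the cost of some misclassification.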
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
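The practical difference between the two stochastic models for the SRP coefficient is that a first-order Gauss-Markov process reverts to zero while a Vasicek process reverts to a nonzero long-term value, which better matches a physical reflectivity coefficient. A toy discrete-time simulation, with all parameter values invented for the sketch:

```python
import random

random.seed(1)

def simulate(x0, tau, sigma, mean, steps, dt=1.0):
    """x' = x + (mean - x)*dt/tau + noise; the Gauss-Markov case is mean = 0."""
    x, path = x0, []
    for _ in range(steps):
        x += (mean - x) * dt / tau + sigma * random.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path

gm = simulate(x0=1.3, tau=50.0, sigma=0.005, mean=0.0, steps=500)   # decays toward 0
vas = simulate(x0=1.3, tau=50.0, sigma=0.005, mean=1.3, steps=500)  # stays near 1.3
print(round(gm[-1], 2), round(vas[-1], 2))
```

For a coefficient whose true value is near 1.3, the zero-reverting model biases long predictions, which is consistent with the Vasicek model giving better SRP estimates.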
NASA Astrophysics Data System (ADS)
Ma, Zhichao; Hu, Leilei; Zhao, Hongwei; Wu, Boda; Peng, Zhenxing; Zhou, Xiaoqin; Zhang, Hongguo; Zhu, Shuai; Xing, Lifeng; Hu, Huang
2010-08-01
Theories and techniques for improving machining accuracy via position control of the diamond tool tip, and for raising the resolution of cutting depth on precision CNC lathes, have attracted intense interest. A new piezo-driven ultra-precision machine tool servo system is designed and tested to improve workpiece manufacturing accuracy. The mathematical model of the servo system is established, and finite element analysis is carried out on the parallel-plate flexure hinges. The output position of the diamond tool tip driven by the servo system is measured with a contact capacitive displacement sensor. Proportional-integral-derivative (PID) feedback is implemented to accommodate and compensate dynamic changes due to cutting forces, as well as the inherent nonlinearity of the piezoelectric stack, during the cutting process. With closed-loop feedback control, the tracking error is limited to 0.8 μm. Experimental results show the proposed servo system provides a tool positioning resolution of 12 nm, much finer than the inherent CNC resolution. A stepped aluminum shaft with a cutting-depth increment of 1 μm per step was machined, and the measured contour shows that the displacement command from the controller is accurately reflected in real time on the machined part.
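The PID compensation loop can be sketched with a toy first-order plant standing in for the piezo stage; the gains and plant model here are invented for illustration, not the paper's identified system:

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order actuator: position relaxes toward the commanded drive.
pid = PID(kp=0.8, ki=2.0, kd=0.0005, dt=0.001)
pos = 0.0
for _ in range(5000):
    drive = pid.update(setpoint=1.0, measured=pos)  # target depth: 1.0 um
    pos += (drive - pos) * 0.05                     # simplified plant response
print(round(pos, 3))
```

The integral term is what removes the steady-state offset that a proportional-only controller would leave against cutting-force disturbances and piezo hysteresis.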
Acoustic localization at large scales: a promising method for grey wolf monitoring.
Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle
2018-01-01
The grey wolf ( Canis lupus ) is naturally recolonizing its former habitats in Europe where it was extirpated during the previous two centuries. The management of this protected species is often controversial and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with similar acoustic properties to howls. This sound was broadcast at several sites. Then, localization estimates and the accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, thereby demonstrating the potential of this tool. Besides, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean accuracy decreased to 167 ± 308 m. 
The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of using acoustic methods to localize wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and so, will contribute to enhance conservation and management plans.
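The localization step rests on multilateration from time differences of arrival (TDOA) across the microphone array. A noise-free, back-of-the-envelope sketch using a grid search (the geometry and array layout are invented):

```python
import math

C = 343.0  # speed of sound, m/s
mics = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]   # microphone positions (m)
true_src = (300, 700)

def travel_time(src, mic):
    return math.dist(src, mic) / C

# Observed arrival times, expressed relative to the first microphone.
t = [travel_time(true_src, m) for m in mics]
tdoa = [ti - t[0] for ti in t]

def residual(candidate):
    p = [travel_time(candidate, m) for m in mics]
    return sum((pi - p[0] - d) ** 2 for pi, d in zip(p, tdoa))

best = min(((x, y) for x in range(0, 1001, 10) for y in range(0, 1001, 10)),
           key=residual)
print(best)  # → (300, 700)
```

With real recordings the arrival times carry clock and detection error, which is why the study's accuracy improves once high-temporal-error fixes are filtered out.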
Ham, Ok-Kyung; Kang, Youjeong; Teng, Helen; Lee, Yaelim; Im, Eun-Ok
2014-01-01
Background: Standardized pain-intensity measurement across different tools would enable practitioners to have confidence in clinical decision-making for pain management. Objectives: The purpose was to examine the degree of agreement among unidimensional pain scales, and to determine the accuracy of the multidimensional pain scales in the diagnosis of severe pain. Methods: A secondary analysis was performed. The sample included a convenience sample of 480 cancer patients recruited from both the internet and community settings. Cancer pain was measured using the Verbal Descriptor Scale (VDS), the Visual Analog Scale (VAS), the Faces Pain Scale (FPS), the McGill Pain Questionnaire-Short Form (MPQ-SF), and the Brief Pain Inventory-Short Form (BPI-SF). Data were analyzed using multivariate analysis of variance (MANOVA) and a receiver operating characteristic (ROC) curve. Results: The agreement between the VDS and VAS was 77.25%, while agreement was 71.88% between the VDS and FPS and 71.60% between the VAS and FPS. The MPQ-SF and BPI-SF yielded high accuracy in the diagnosis of severe pain. Cutoff points for severe pain were > 8 for the MPQ-SF and > 14 for the BPI-SF, which exhibited high sensitivity and relatively low specificity. Conclusion: The study found substantial agreement between the unidimensional pain scales, and high accuracy of the MPQ-SF and the BPI-SF in the diagnosis of severe pain. Implications for Practice: Use of one or more pain screening tools whose diagnostic accuracy and consistency have been validated will help classify pain effectively and subsequently promote optimal pain control in multi-ethnic groups of cancer patients. PMID:25068188
Arai, Kaoru; Takano, Ayumi; Nagata, Takako; Hirabayashi, Naotsugu
2017-12-01
Most structured assessment tools for assessing risk of violence were developed in Western countries, and evidence for their effectiveness is not well established in Asian countries. Our aim was to examine the predictive accuracy of the Historical-Clinical-Risk Management-20 (HCR-20) for violence in forensic mental health inpatient units in Japan. A retrospective record study was conducted with a complete 2008-2013 cohort of forensic psychiatric inpatients at the National Center Hospital of Neurology and Psychiatry, Tokyo. Forensic psychiatrists were trained in use of the HCR-20 and asked to complete it as part of their admission assessment. The completed forms were then retained by the researchers and not used in clinical practice; for this, clinicians relied solely on national legally required guidelines. Violent outcomes were determined at 3 and 6 months after the assessment. Receiver operating characteristic analysis was used to calculate the predictive accuracy of the HCR-20 for violence. Area under the curve analyses suggested that the HCR-20 total score is a good predictor of violence in this cohort, with the clinical and risk sub-scales showing good predictive accuracy, but the historical sub-scale not doing so. Area under the curve figures were similar at 3 months and at 6 months. Our results are consistent with studies previously conducted in Western countries. This suggests that the HCR-20 is an effective tool for supporting risk of violence assessment in Japanese forensic psychiatric wards. Its widespread use in clinical practice could enhance safety and would certainly promote transparency in risk-related decision-making. Copyright © 2016 John Wiley & Sons, Ltd.
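The area under the ROC curve reported here is equivalent to the Mann-Whitney probability that a randomly chosen violent case scores above a randomly chosen non-violent one. A minimal sketch with invented HCR-20-style totals:

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

violent     = [28, 31, 25, 30]   # invented HCR-20 totals, later-violent patients
non_violent = [18, 22, 25, 15]   # invented totals, non-violent patients
print(auc(violent, non_violent))  # → 0.96875
```

An AUC of 0.5 means the score ranks cases no better than chance; values approaching 1 indicate the good discrimination the study reports for the total, clinical, and risk scores.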
A Comparative Study with RapidMiner and WEKA Tools over some Classification Techniques for SMS Spam
NASA Astrophysics Data System (ADS)
Foozy, Cik Feresa Mohd; Ahmad, Rabiah; Faizal Abdollah, M. A.; Chai Wen, Chuah
2017-08-01
SMS spamming is a serious attack that abuses SMS by spreading advertisements in bulk. Unwanted advertising SMS disturbs users and violates the privacy of mobile users. To address these issues, many studies have proposed detecting SMS spam with data mining tools. This paper presents a comparative study of five machine learning techniques, namely Naïve Bayes, K-NN (K-Nearest Neighbour), Decision Tree, Random Forest, and Decision Stump, observing the accuracy obtained with RapidMiner versus WEKA on the SMS Spam dataset from the UCI Machine Learning Repository.
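Of the techniques compared, multinomial Naive Bayes is the most compact to sketch: class priors multiplied by Laplace-smoothed word likelihoods. The toy training messages below are invented, not drawn from the UCI corpus:

```python
import math
from collections import Counter

train = [("win cash prize now", "spam"),
         ("free prize claim now", "spam"),
         ("are we meeting for lunch", "ham"),
         ("see you at lunch tomorrow", "ham")]

counts = {"spam": Counter(), "ham": Counter()}
docs = Counter()
for text, label in train:
    docs[label] += 1
    counts[label].update(text.split())

vocab = set(w for c in counts.values() for w in c)

def classify(text):
    def log_score(label):
        total = sum(counts[label].values())
        score = math.log(docs[label] / sum(docs.values()))   # class prior
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        return score
    return max(counts, key=log_score)

print(classify("claim your free prize"))  # → spam
print(classify("lunch tomorrow"))         # → ham
```

RapidMiner and WEKA both wrap this same model; accuracy differences between the two tools come from preprocessing and parameter defaults rather than the underlying algorithm.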
Lu, Li; Liu, Shusheng; Shi, Shenggen; Yang, Jianzhong
2011-10-01
A China-made 5-axis simultaneous contouring CNC machine tool and domestically developed industrial computer-aided manufacturing (CAM) technology were used to fabricate a full crown and measure its accuracy, in an attempt to establish an open CAM system for dental processing and to promote the introduction of a domestic dental computer-aided design (CAD)/CAM system. Commercially available scanning equipment was used to make a basic digital tooth model after crown preparation, and the CAD software supplied with the scanner was employed to design the crown. Domestic industrial CAM software processed the crown data to generate a solid model for machining, the China-made 5-axis CNC machine tool then machined the whole crown, and the internal accuracy of the crown was measured using 3D micro-CT. The results showed that the China-made 5-axis machine tool, combined with domestic industrial CAM technology, can be used for crown making, and that the crown was well seated on the die. It is concluded that an open CAM system for dentistry based on a China-made 5-axis simultaneous contouring CNC machine tool and domestic industrial CAM software has been established, and that development of this system will promote the introduction of a domestically produced dental CAD/CAM system.
ResearcherMap: a tool for visualizing author locations using Google Maps.
Rastegar-Mojarad, Majid; Bales, Michael E; Yu, Hong
2013-01-01
We present ResearcherMap, a tool to visualize the locations of authors of scholarly papers. In response to a query, the system returns a map of author locations. To develop the system, we first populated a database of author locations, geocoding institution locations for all available institutional affiliation data in our database. The database includes all authors of Medline papers from 1990 to 2012. We conducted a formative heuristic usability evaluation of the system and measured the system's accuracy and performance. The system locates the correct address with 97.5% accuracy.
Digitizing the Facebow: A Clinician/Technician Communication Tool.
Kalman, Les; Chrapka, Julia; Joseph, Yasmin
2016-01-01
Communication between the clinician and the technician has been an ongoing problem in dentistry. To address this issue, a dental software application has been developed: the Virtual Facebow App. It is an alternative to the traditional analog facebow used to orient the maxillary cast in mounting. Comparison data for the two methods indicated that the digitized virtual facebow provided increased efficiency in mounting, increased accuracy in occlusion, and lower cost. Differences in occlusal accuracy, lab time, and total time were statistically significant (P<.05). The virtual facebow provides a novel alternative for cast mounting and another tool for clinician-technician communication.
Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services
ERIC Educational Resources Information Center
Wang, Guoquan
2013-01-01
High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…
Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions
NASA Technical Reports Server (NTRS)
Marsell, Brandon; Griffin, David; Schallhorn, Paul; Roth, Jacob
2012-01-01
Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.
Leonardi Dutra, Kamile; Haas, Letícia; Porporatti, André Luís; Flores-Mir, Carlos; Nascimento Santos, Juliana; Mezzomo, Luis André; Corrêa, Márcio; De Luca Canto, Graziela
2016-03-01
Endodontic diagnosis depends on accurate radiographic examination. Assessment of the location and extent of apical periodontitis (AP) can influence treatment planning and subsequent treatment outcomes. Therefore, this systematic review and meta-analysis assessed the diagnostic accuracy of conventional radiography and cone-beam computed tomographic (CBCT) imaging on the discrimination of AP from no lesion. Eight electronic databases with no language or time limitations were searched. Articles in which the primary objective was to evaluate the accuracy (sensitivity and specificity) of any type of radiographic technique to assess AP in humans were selected. The gold standard was the histologic examination for actual AP (in vivo) or in situ visualization of bone defects for induced artificial AP (in vitro). Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v.5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark) and MetaDisc v.1.4. software (Unit of Clinical Biostatistics Team of the Ramón y Cajal Hospital, Madrid, Spain). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. Only 9 studies met the inclusion criteria and were subjected to a qualitative analysis. A meta-analysis was conducted on 6 of these articles. All of these articles studied artificial AP with induced bone defects. The accuracy values (area under the curve) were 0.96 for CBCT imaging, 0.73 for conventional periapical radiography, and 0.72 for digital periapical radiography. No evidence was found for panoramic radiography. Periapical radiographs (digital and conventional) reported good diagnostic accuracy on the discrimination of artificial AP from no lesions, whereas CBCT imaging showed excellent accuracy values. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
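The review's ROC construction can be sketched briefly: each study contributes a (sensitivity, specificity) pair, which becomes a point in ROC space, and a summary area under the curve can be estimated with the trapezoidal rule. The three pairs below are invented for illustration, not values extracted in the review, and real meta-analytic software pools studies with more sophisticated models.

```python
# Place (sensitivity, specificity) pairs in ROC space and estimate an
# area under the curve with the trapezoidal rule.
def auc_from_points(pairs):
    """pairs: list of (sensitivity, specificity). Returns trapezoidal AUC."""
    pts = sorted((1 - spec, sens) for sens, spec in pairs)  # (FPR, TPR)
    pts = [(0.0, 0.0)] + pts + [(1.0, 1.0)]                 # anchor corners
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical operating points from three studies of one imaging modality.
points = [(0.70, 0.90), (0.85, 0.80), (0.95, 0.60)]
summary_auc = auc_from_points(points)
```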
NASA Astrophysics Data System (ADS)
Farroni, Flavio; Lamberti, Raffaele; Mancinelli, Nicolò; Timpone, Francesco
2018-03-01
Tyres play a key role in ground vehicle dynamics because they are responsible for traction, braking and cornering. A proper tyre-road interaction model is essential for a useful and reliable vehicle dynamics model. In the last two decades, Pacejka's Magic Formula (MF) has become a standard in the simulation field. This paper presents a tool, called TRIP-ID (Tyre Road Interaction Parameters IDentification), developed to characterize and identify, with a high degree of accuracy and reliability, MF micro-parameters from experimental data derived from telemetry or from a test rig. The tool interactively guides the user through the identification process on the basis of strong diagnostic considerations about the experimental data, made evident by the tool itself. A motorsport application of the tool is shown as a case study.
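For context, the basic form of Pacejka's Magic Formula is y(x) = D·sin(C·arctan(Bx − E(Bx − arctan(Bx)))), where x is a slip quantity, y a tyre force or moment, and B, C, D, E the shaping coefficients a tool of this kind identifies. The coefficient values in this sketch are arbitrary placeholders, not outputs of TRIP-ID:

```python
# Basic Magic Formula evaluation; B, C, D, E values below are placeholders.
import math

def magic_formula(x, B, C, D, E):
    bx = B * x
    return D * math.sin(C * math.atan(bx - E * (bx - math.atan(bx))))

# Sweep slip from 0 to 0.30; peak force magnitude is bounded by D,
# and pure slip x = 0 gives zero force.
forces = [magic_formula(s, B=10.0, C=1.9, D=1.0, E=0.97)
          for s in [i / 100 for i in range(0, 31)]]
```

Identification tools fit B, C, D, E (and their micro-parameter dependencies on load, camber, etc.) to measured force-versus-slip data, which is the step TRIP-ID guides interactively.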
Assessment of the accuracy and stability of frameless gamma knife radiosurgery.
Chung, Hyun-Tai; Park, Woo-Yoon; Kim, Tae Hoon; Kim, Yong Kyun; Chun, Kook Jin
2018-06-03
The aim of this study was to assess the accuracy and stability of frameless gamma knife radiosurgery (GKRS). The accuracies of the radiation isocenter and patient couch movement were evaluated by film dosimetry with a half-year cycle. Radiation isocenter assessment with a diode detector and cone-beam computed tomography (CBCT) image accuracy tests were performed daily with a vendor-provided tool for one and a half years after installation. CBCT image quality was examined twice a month with a phantom. The accuracy of image coregistration using CBCT images was studied using magnetic resonance (MR) and computed tomography (CT) images of another phantom. The overall positional accuracy was measured in whole procedure tests using film dosimetry with an anthropomorphic phantom. The positional errors of the radiation isocenter at the center and at an extreme position were both less than 0.1 mm. The three-dimensional deviation of the CBCT coordinate system was stable for one and a half years (mean 0.04 ± 0.02 mm). Image coregistration revealed a difference of 0.2 ± 0.1 mm between CT and CBCT images and a deviation of 0.4 ± 0.2 mm between MR and CBCT images. The whole procedure test of the positional accuracy of the mask-based irradiation revealed an accuracy of 0.5 ± 0.6 mm. The radiation isocenter accuracy, patient couch movement accuracy, and Gamma Knife Icon CBCT accuracy were all approximately 0.1 mm and were stable for one and a half years. The coordinate system assigned to MR images through coregistration was more accurate than the system defined by fiducial markers. Possible patient motion during irradiation should be considered when evaluating the overall accuracy of frameless GKRS. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Davidson, Josy; dos Santos, Amelia Miyashiro N; Garcia, Kessey Maria B; Yi, Liu C; João, Priscila C; Miyoshi, Milton H; Goulart, Ana Lucia
2012-09-01
To analyse the accuracy and reproducibility of photogrammetry in detecting thoracic abnormalities in infants born prematurely. Cross-sectional study. The Premature Clinic at the Federal University of São Paulo. Fifty-eight infants born prematurely in their first year of life. Measurement of the manubrium/acromion/trapezius angle (degrees) and the deepest thoracic retraction (cm). Digitised photographs were analysed by two blinded physiotherapists using a computer program (SAPO; http://SAPO.incubadora.fapesp.br) to detect shoulder elevation and thoracic retraction. Physical examinations performed independently by two physiotherapists were used to assess the accuracy of the new tool. Thoracic alterations were detected in 39 (67%) and in 40 (69%) infants by Physiotherapists 1 and 2, respectively (kappa coefficient=0.80). Using a receiver operating characteristic curve, measurement of the manubrium/acromion/trapezius angle and the deepest thoracic retraction indicated accuracy of 0.79 and 0.91, respectively. For measurement of the manubrium/acromion/trapezius angle, the Bland and Altman limits of agreement were -6.22 to 7.22° [mean difference (d)=0.5] for repeated measures by one physiotherapist, and -5.29 to 5.79° (d=0.75) between two physiotherapists. For thoracic retraction, the intra-rater limits of agreement were -0.14 to 0.18cm (d=0.02) and the inter-rater limits of agreement were -0.20 to -0.17cm (d=0.02). SAPO provided an accurate and reliable tool for the detection of thoracic abnormalities in preterm infants. Copyright © 2011 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
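The Bland and Altman limits of agreement quoted above are the mean of the paired differences plus or minus 1.96 standard deviations of those differences. A minimal sketch of the arithmetic, using made-up paired angle readings rather than the study's data:

```python
# Bland-Altman limits of agreement: d +/- 1.96 * SD of paired differences.
import statistics

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    d = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the differences
    return d - 1.96 * sd, d, d + 1.96 * sd

# Hypothetical repeated angle measurements (degrees) by two raters.
rater1 = [92.0, 88.5, 95.0, 90.0, 87.5]
rater2 = [91.0, 89.5, 94.0, 91.5, 87.0]
low, d, high = bland_altman(rater1, rater2)
# If 95% of differences fall within [low, high] and that interval is
# clinically acceptable, the two raters are considered interchangeable.
```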
de Albuquerque, Priscila Maria Nascimento Martins; de Alencar, Geisa Guimarães; de Oliveira, Daniela Araújo; de Siqueira, Gisela Rocha
2018-01-01
The aim of this study was to examine and interpret the concordance, accuracy, and reliability of photogrammetric protocols available in the literature for evaluating cervical lordosis in an adult population aged 18 to 59 years. A systematic search of 6 electronic databases (MEDLINE via PubMed, LILACS, CINAHL, Scopus, ScienceDirect, and Web of Science) located studies that assessed the reliability and/or concordance and/or accuracy of photogrammetric protocols for evaluating cervical lordosis, compared with radiography. Articles published through April 2016 were selected. Two independent reviewers used critical appraisal tools (QUADAS and QAREL) to assess the quality of the selected studies. Two studies were included in the review and had high levels of reliability (intraclass correlation coefficient: 0.974-0.98). Only 1 study assessed the concordance between the methods, which was calculated using Pearson's correlation coefficient. To date, the accuracy of photogrammetry has not been investigated thoroughly. We encountered no study in the literature that investigated the accuracy of photogrammetry in diagnosing hyperlordosis of the cervical spine. However, both current studies report high levels of intra- and interrater reliability. To increase the level of evidence of photogrammetry in the evaluation of cervical lordosis, it is necessary to conduct further studies using a larger sample to increase the external validity of the findings. Copyright © 2018. Published by Elsevier Inc.
Diagnostic validity of methods for assessment of swallowing sounds: a systematic review.
Taveira, Karinna Veríssimo Meira; Santos, Rosane Sampaio; Leão, Bianca Lopes Cavalcante de; Neto, José Stechman; Pernambuco, Leandro; Silva, Letícia Korb da; De Luca Canto, Graziela; Porporatti, André Luís
2018-02-03
Oropharyngeal dysphagia is a highly prevalent comorbidity in neurological patients and presents a serious health threat, which may lead to outcomes of aspiration pneumonia ranging from hospitalization to death. This assessment proposes a non-invasive, acoustic-based method to differentiate between individuals with and without signs of penetration and aspiration. This systematic review evaluated the diagnostic validity of different methods for assessment of swallowing sounds, compared to the Videofluoroscopic Swallowing Study (VFSS), to detect oropharyngeal dysphagia. Articles in which the primary objective was to evaluate the accuracy of swallowing sounds were searched in five electronic databases with no language or time limitations. Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v. 5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. The final electronic search revealed 554 records; however, only 3 studies met the inclusion criteria. The accuracy values (area under the curve) were 0.94 for the microphone, 0.80 for the Doppler, and 0.60 for the stethoscope. Based on limited evidence of low methodological quality (few studies with small sample sizes were included), among the index tests found for this systematic review, the Doppler showed excellent diagnostic accuracy for the discrimination of swallowing sounds, the microphone showed good accuracy in dysphagic patients, and the stethoscope performed best as a screening test. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Rodrigues, Ramon Gouveia; das Dores, Rafael Marques; Camilo-Junior, Celso G; Rosa, Thierson Couto
2016-01-01
Cancer is a critical disease that affects millions of people and families around the world. In 2012, about 14.1 million new cases of cancer occurred globally. For many reasons, such as the severity of some cases, the side effects of some treatments, and the death of other patients, cancer patients tend to be affected by serious emotional disorders such as depression. Thus, monitoring the mood of patients is an important part of their treatment. Many cancer patients are users of online social networks, and many take part in cancer virtual communities where they exchange messages commenting on their treatment or giving support to other patients in the community. Most of these communities are publicly accessible and thus are useful sources of information about the mood of patients. Based on that, sentiment analysis methods can be useful to automatically detect the positive or negative mood of cancer patients by analyzing their messages in these online communities. The objective of this work is to present a sentiment analysis tool, named SentiHealth-Cancer (SHC-pt), that improves the detection of the emotional state of patients in Brazilian online cancer communities by inspecting their posts written in Portuguese. SHC-pt is a sentiment analysis tool tailored specifically to detect positive, negative or neutral messages of patients in online communities of cancer patients. We conducted a comparative study of the proposed method with a set of general-purpose sentiment analysis tools adapted to this context. Different collections of posts were obtained from two cancer communities on Facebook. Additionally, the posts were analyzed by sentiment analysis tools that support the Portuguese language (Semantria and SentiStrength) and by the tool SHC-pt, developed based on the method proposed in this paper, called SentiHealth.
Moreover, as a second alternative to analyze the texts in Portuguese, the collected texts were automatically translated into English and submitted to sentiment analysis tools that do not support the Portuguese language (AlchemyAPI and Textalytics), and also to Semantria and SentiStrength using the English option of these tools. Six experiments were conducted with some variations and different origins of the collected posts. The results were measured using the following metrics: precision, recall, F1-measure and accuracy. The proposed tool SHC-pt reached the best averages for accuracy and F1-measure (harmonic mean of recall and precision) in the three sentiment classes addressed (positive, negative and neutral) in all experimental settings. Moreover, the worst accuracy value (58%) achieved by SHC-pt in any experiment is 11.53% better than the greatest accuracy (52%) presented by the other tools. Finally, the worst average F1 (48.46%) reached by SHC-pt in any experiment is 4.14% better than the greatest average F1 (46.53%) achieved by the other tools. Thus, even when we compare the SHC-pt results in a complex scenario against the others in an easier scenario, SHC-pt is better. This paper presents two contributions. First, it proposes the method SentiHealth to detect the mood of cancer patients who are also users of communities of patients in online social networks. Second, it presents a tool instantiated from the method, called SentiHealth-Cancer (SHC-pt), dedicated to automatically analyzing posts in communities of cancer patients, based on SentiHealth. This context-tailored tool outperformed other general-purpose sentiment analysis tools, at least in the cancer context. This suggests that the SentiHealth method could be instantiated as other disease-based tools in future work, for instance SentiHealth-HIV, SentiHealth-Stroke and SentiHealth-Sclerosis. Copyright © 2015. Published by Elsevier Ireland Ltd.
Campbell, Amelia; Owen, Rebecca; Brown, Elizabeth; Pryor, David; Bernard, Anne; Lehman, Margot
2015-08-01
Cone beam computerised tomography (CBCT) enables soft tissue visualisation to optimise matching in the post-prostatectomy setting, but is associated with inter-observer variability. This study assessed the accuracy and consistency of automated soft tissue localisation using XVI's dual registration tool (DRT). Sixty CBCT images from ten post-prostatectomy patients were matched using (i) the DRT and (ii) manual soft tissue registration by six radiation therapists (RTs). Shifts in the three Cartesian planes were recorded. The accuracy of the match was determined by comparing shifts to matches performed by two genitourinary radiation oncologists (ROs). A Bland-Altman method was used to assess the 95% levels of agreement (LoA). A clinical threshold of 3 mm was used to define equivalence between methods of matching. The 95% LoA between DRT-ROs in the superior/inferior, left/right and anterior/posterior directions were -2.21 to +3.18 mm, -0.77 to +0.84 mm, and -1.52 to +4.12 mm, respectively. The 95% LoA between RTs-ROs in the superior/inferior, left/right and anterior/posterior directions were -1.89 to +1.86 mm, -0.71 to +0.62 mm and -2.8 to +3.43 mm, respectively. Five DRT CBCT matches (8.33%) were outside the 3-mm threshold, all in the setting of bladder underfilling or rectal gas. The mean time for manual matching was 82 versus 65 s for the DRT. XVI's DRT is comparable with RTs manually matching soft tissue on CBCT. The DRT can minimise RT inter-observer variability; however, involuntary bladder and rectal filling can influence the tool's accuracy, highlighting the need for RT evaluation of the DRT match. © 2015 The Royal Australian and New Zealand College of Radiologists.
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
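For readers unfamiliar with the Cramer-Rao bound discussed above, a toy check: for N Gaussian samples with known standard deviation sigma, the variance of any unbiased estimator of the mean is bounded below by sigma**2/N (the inverse Fisher information), and the sample mean attains that bound. This simulation is our own illustration under those textbook assumptions, not the flight-data analysis in the report.

```python
# Cramer-Rao bound sanity check for the mean of a Gaussian with known sigma.
import random
import statistics

random.seed(0)
sigma, n, trials = 2.0, 50, 2000
crb = sigma**2 / n                      # Cramer-Rao lower bound on variance
estimates = [
    statistics.mean(random.gauss(0.0, sigma) for _ in range(n))
    for _ in range(trials)
]
empirical_var = statistics.variance(estimates)
# The sample-mean estimator is efficient, so empirical_var should sit
# close to crb; colored noise or modeling error (as in the report) would
# push the true scatter above this idealized bound.
```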
Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.
2017-01-01
Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.
Raharimalala, F N; Andrianinarivomanana, T M; Rakotondrasoa, A; Collard, J M; Boyer, S
2017-09-01
Arthropod-borne diseases are important causes of morbidity and mortality. The identification of vector species relies mainly on morphological features and/or molecular biology tools. The first method requires specific technical skills and may result in misidentifications, and the second method is time-consuming and expensive. The aim of the present study is to assess the usefulness and accuracy of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) as a supplementary tool with which to identify mosquito vector species and to invest in the creation of an international database. A total of 89 specimens belonging to 10 mosquito species were selected for the extraction of proteins from legs and for the establishment of a reference database. A blind test with 123 mosquitoes was performed to validate the MS method. Results showed that: (a) the spectra obtained in the study with a given species differed from the spectra of the same species collected in another country, which highlights the need for an international database; (b) MALDI-TOF MS is an accurate method for the rapid identification of mosquito species that are referenced in a database; (c) MALDI-TOF MS allows the separation of groups or complex species, and (d) laboratory specimens undergo a loss of proteins compared with those isolated in the field. In conclusion, MALDI-TOF MS is a useful supplementary tool for mosquito identification and can help inform vector control. © 2017 The Royal Entomological Society.
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Validation of hand and foot anatomical feature measurements from smartphone images
NASA Astrophysics Data System (ADS)
Amini, Mohammad; Vasefi, Fartash; MacKinnon, Nicholas
2018-02-01
A smartphone mobile medical application, previously presented as a tool for individuals with hand arthritis to assess and monitor the progress of their disease, has been modified and expanded to include extraction of anatomical features from the hand (joint/finger width, and angulation) and foot (length, width, big toe angle, and arch height index) from smartphone camera images. Image processing algorithms and automated measurements were validated by performing tests on digital hand models, rigid plastic hand models, and real human hands and feet to determine accuracy and reproducibility compared to conventional measurement tools such as calipers, rulers, and goniometers. The mobile application was able to provide finger joint width measurements with accuracy better than 0.34 (+/-0.25) millimeters. Joint angulation measurement accuracy was better than 0.50 (+/-0.45) degrees. The automatically calculated foot length accuracy was 1.20 (+/-1.27) millimeters and the foot width accuracy was 1.93 (+/-1.92) millimeters. Hallux valgus angle (used in assessing bunions) accuracy was 1.30 (+/-1.29) degrees. Arch height index (AHI) measurements had an accuracy of 0.02 (+/-0.01). Combined with in-app documentation of symptoms, treatment, and lifestyle factors, the anatomical feature measurements can be used by both healthcare professionals and manufacturers. Applications include: diagnosing hand osteoarthritis; providing custom finger splint measurements; providing compression glove measurements for burn and lymphedema patients; determining foot dimensions for custom shoe sizing, insoles, orthotics, or foot splints; and assessing arch height index and bunion treatment effectiveness.
Image-based deep learning for classification of noise transients in gravitational wave detectors
NASA Astrophysics Data System (ADS)
Razzano, Massimiliano; Cuoco, Elena
2018-05-01
The detection of gravitational waves has inaugurated the era of gravitational astronomy and opened new avenues for the multimessenger study of cosmic sources. Thanks to their sensitivity, the Advanced LIGO and Advanced Virgo interferometers will probe a much larger volume of space and expand the capability of discovering new gravitational wave emitters. The characterization of these detectors is a primary task in order to recognize the main sources of noise and optimize the sensitivity of interferometers. Glitches are transient noise events that can impact the data quality of the interferometers and their classification is an important task for detector characterization. Deep learning techniques are a promising tool for the recognition and classification of glitches. We present a classification pipeline that exploits convolutional neural networks to classify glitches starting from their time-frequency evolution represented as images. We evaluated the classification accuracy on simulated glitches, showing that the proposed algorithm can automatically classify glitches on very fast timescales and with high accuracy, thus providing a promising tool for online detector characterization.
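The image-based representation works because a glitch's time-frequency evolution is just a 2D array, so convolutional filters can respond to local spectrogram structure. The hand-written vertical-edge filter below stands in for a single learned CNN layer; the toy "spectrogram" and filter values are illustrative only, not the pipeline described in the paper.

```python
# A single 2D convolution over a toy time-frequency image: a broadband,
# short-duration "glitch" appears as a vertical line, and a vertical-edge
# filter responds most strongly at its location.
import numpy as np

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

spec = np.zeros((8, 8))                     # toy spectrogram (freq x time)
spec[:, 4] = 1.0                            # broadband transient at t = 4
vertical_edge = np.array([[-1.0, 2.0, -1.0]] * 3) / 3.0
response = conv2d_valid(spec, vertical_edge)
# A trained CNN stacks many such filters (with learned weights), pooling,
# and a classifier head to assign each image a glitch class.
```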
Bauer, Lyndsey; O'Bryant, Sid E; Lynch, Julie K; McCaffrey, Robert J; Fisher, Jerid M
2007-09-01
Assessing effort level during neuropsychological evaluations is critical to support the accuracy of cognitive test scores. Many instruments are designed to measure effort, yet they are not routinely administered in neuropsychological assessments. The Test of Memory Malingering (TOMM) and the Word Memory Test (WMT) are commonly administered symptom validity tests with sound psychometric properties. This study examines the use of the TOMM Trial 1 and the WMT Immediate Recognition (IR) trial scores as brief screening tools for insufficient effort through an archival analysis of a combined sample of mild head-injury litigants ( N = 105) who were assessed in forensic private practices. Results show that both demonstrate impressive diagnostic accuracy and calculations of positive and negative predictive power are presented for a range of base rates. These results support the utility of Trial 1 of the TOMM and the WMT IR trial as screening methods for the assessment of insufficient effort in neuropsychological assessments.
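The "predictive power for a range of base rates" calculation mentioned above follows from Bayes' rule: given a cutoff's sensitivity and specificity, PPV and NPV can be computed for any assumed prevalence of insufficient effort. The 0.85/0.90 operating point below is a hypothetical placeholder, not the TOMM Trial 1 or WMT IR values reported in the study.

```python
# Positive and negative predictive value as a function of base rate,
# for a hypothetical screening cutoff with sens = 0.85, spec = 0.90.
def predictive_values(sens, spec, base_rate):
    tp = sens * base_rate
    fp = (1 - spec) * (1 - base_rate)
    fn = (1 - sens) * base_rate
    tn = spec * (1 - base_rate)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

table = {br: predictive_values(0.85, 0.90, br) for br in (0.1, 0.3, 0.5)}
# PPV rises and NPV falls as the base rate increases, which is why such
# tables are reported across a range of plausible base rates.
```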
NASA Astrophysics Data System (ADS)
Palacz, M.; Haida, M.; Smolka, J.; Nowak, A. J.; Hafner, A.
2016-09-01
In this study, a comparison of the accuracy of the homogeneous equilibrium model (HEM) and the homogeneous relaxation model (HRM) is presented. Both models were applied to simulate CO2 expansion inside two-phase ejectors. The models were implemented in the robust and efficient computational tool ejectorPL, which guarantees a fully automated computational process and repeatable computations for various ejector shapes and operating conditions. The simulated motive nozzle mass flow rates were compared to experimentally measured mass flow rates for both the HEM and the HRM. The results showed unsatisfactory fidelity of the HEM for operating regimes far from the carbon dioxide critical point, whereas the HRM accuracy for such conditions was slightly higher. The approach presented in this paper shows the limits of applicability of both two-phase models for expansion phenomena inside ejectors.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Bahia, Valéria S; Cecchini, Mário A; Cassimiro, Luciana; Viana, Rene; Lima-Silva, Thais B; de Souza, Leonardo Cruz; Carvalho, Viviane Amaral; Guimarães, Henrique C; Caramelli, Paulo; Balthazar, Márcio L F; Damasceno, Benito; Brucki, Sônia M D; Nitrini, Ricardo; Yassuda, Mônica S
2018-05-04
Executive dysfunction is a common symptom in neurodegenerative disorders, and easy-to-apply screening tools are needed to identify it. The aims of the present study were to examine some of the psychometric characteristics of the Brazilian version of the INECO frontal screening (IFS), and to investigate its accuracy in diagnosing executive dysfunction in dementia and in differentiating Alzheimer disease (AD) from the behavioral variant of frontotemporal dementia (bvFTD). Patients diagnosed with bvFTD (n=18) and AD (n=20), and 15 healthy controls completed a neuropsychological battery, the Neuropsychiatric Inventory, the Cornell Scale for Depression in Dementia, the Clinical Dementia Rating, and the IFS. The IFS had acceptable internal consistency (α=0.714) and was significantly correlated with general cognitive measures and with neuropsychological tests. The IFS had adequate accuracy in differentiating patients with dementia from healthy controls (AUC=0.768, cutoff=19.75, sensitivity=0.80, specificity=0.63), but low accuracy in differentiating bvFTD from AD (AUC=0.594, cutoff=16.75, sensitivity=0.667, specificity=0.600). The present study suggests that the IFS may be used to screen for executive dysfunction in dementia. Nonetheless, it should be used with caution in the differential diagnosis between AD and bvFTD.
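As a minimal sketch of how a screening cutoff like the one above yields sensitivity and specificity figures, the snippet below classifies "impaired" when the score falls at or below the cutoff. The scores and labels are illustrative, not the study's data.

```python
# Hypothetical sketch: sensitivity/specificity of a score-based screen at a
# fixed cutoff (lower score = more impaired, as with the IFS). Data invented.

def sens_spec(scores, labels, cutoff):
    """labels: 1 = condition present, 0 = healthy. Positive test: score <= cutoff.

    Returns (sensitivity, specificity).
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s <= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s > cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s > cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data: 5 patients, 5 controls.
scores = [14, 17, 19, 21, 16, 22, 25, 18, 24, 26]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
sensitivity, specificity = sens_spec(scores, labels, cutoff=19.75)
```

Sweeping the cutoff over all observed scores and plotting the resulting (1 - specificity, sensitivity) pairs traces the ROC curve whose area is the reported AUC.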
NASA Technical Reports Server (NTRS)
Roberts, Rodney G.; LopezdelCastillo, Eduardo
1996-01-01
The goal of the project was to develop the necessary analysis tools for a feasibility study of a cable-suspended robot system for examining the space shuttle orbiter payload bay radiators. These tools were developed to address design issues such as workspace size, tension requirements on the cable, the necessary accuracy and resolution requirements, and the stiffness and movement requirements of the system. This report describes the mathematical models for studying the inverse kinematics, statics, and stiffness of the robot. Each model is described by a matrix. The manipulator Jacobian was also related to the stiffness matrix, which characterizes the stiffness of the system. Analysis tools were then developed based on the singular value decomposition (SVD) of the corresponding matrices. It was demonstrated how the SVD can be used to quantify the robot's performance and to provide insight into different design issues.
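The SVD-based performance measures described above can be sketched as follows. This is an illustrative example, not the report's actual model: the Jacobian entries are invented, and the two standard measures shown (manipulability and condition number) are generic robotics quantities derived from the singular values.

```python
import numpy as np

# Illustrative sketch: the SVD of a manipulator Jacobian J quantifies
# performance. Singular values give best/worst-case velocity amplification;
# their ratio (condition number) indicates proximity to a singularity.

J = np.array([[1.0, 0.5],
              [0.0, 1.0]])          # hypothetical 2-DOF Jacobian

U, s, Vt = np.linalg.svd(J)          # s is sorted largest-first
manipulability = float(np.prod(s))   # volume measure of the velocity ellipsoid
condition_number = float(s[0] / s[-1])
```

A condition number near 1 indicates an isotropic, well-conditioned configuration; a large value flags a near-singular pose where cable tension or actuation requirements blow up.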
Pepin, K M; Spackman, E; Brown, J D; Pabilonia, K L; Garber, L P; Weaver, J T; Kennedy, D A; Patyk, K A; Huyvaert, K P; Miller, R S; Franklin, A B; Pedersen, K; Bogich, T L; Rohani, P; Shriner, S A; Webb, C T; Riley, S
2014-03-01
Wild birds are the primary source of genetic diversity for influenza A viruses that eventually emerge in poultry and humans. Much progress has been made in the descriptive ecology of avian influenza viruses (AIVs), but contributions are less evident from quantitative studies (e.g., those including disease dynamic models). Transmission between host species, individuals and flocks has not been measured with sufficient accuracy to allow robust quantitative evaluation of alternate control protocols. We focused on the United States of America (USA) as a case study for determining the state of our quantitative knowledge of potential AIV emergence processes from wild hosts to poultry. We identified priorities for quantitative research that would build on existing tools for responding to AIV in poultry and concluded that the following knowledge gaps can be addressed with current empirical data: (1) quantification of the spatio-temporal relationships between AIV prevalence in wild hosts and poultry populations, (2) understanding how the structure of different poultry sectors impacts within-flock transmission, (3) determining mechanisms and rates of between-farm spread, and (4) validating current policy-decision tools with data. The modeling studies we recommend will improve our mechanistic understanding of potential AIV transmission patterns in USA poultry, leading to improved measures of accuracy and reduced uncertainty when evaluating alternative control strategies.
Jonas, Daniel E; Amick, Halle R; Feltner, Cynthia; Weber, Rachel Palmieri; Arvanitis, Marina; Stine, Alexander; Lux, Linda; Harris, Russell P
2017-01-24
Many adverse health outcomes are associated with obstructive sleep apnea (OSA). To review primary care-relevant evidence on screening adults for OSA, test accuracy, and treatment of OSA, to inform the US Preventive Services Task Force. MEDLINE, Cochrane Library, EMBASE, and trial registries through October 2015, references, and experts, with surveillance of the literature through October 5, 2016. English-language randomized clinical trials (RCTs); studies evaluating accuracy of screening questionnaires or prediction tools, diagnostic accuracy of portable monitors, or association between apnea-hypopnea index (AHI) and health outcomes among community-based participants. Two investigators independently reviewed abstracts and full-text articles. When multiple similar studies were available, random-effects meta-analyses were conducted. Sensitivity, specificity, area under the curve (AUC), AHI, Epworth Sleepiness Scale (ESS) scores, blood pressure, mortality, cardiovascular events, motor vehicle crashes, quality of life, and harms. A total of 110 studies were included (N = 46 188). No RCTs compared screening with no screening. In 2 studies (n = 702), the screening accuracy of the multivariable apnea prediction score followed by home portable monitor testing for detecting severe OSA syndrome (AHI ≥30 and ESS score >10) was AUC 0.80 (95% CI, 0.78 to 0.82) and 0.83 (95% CI, 0.77 to 0.90), respectively, but the studies oversampled high-risk participants and those with OSA and OSA syndrome. No studies prospectively evaluated screening tools to report calibration or clinical utility for improving health outcomes. 
Meta-analysis found that continuous positive airway pressure (CPAP) compared with sham was significantly associated with reduction of AHI (weighted mean difference [WMD], -33.8 [95% CI, -42.0 to -25.6]; 13 trials, 543 participants), excessive sleepiness assessed by ESS score (WMD, -2.0 [95% CI, -2.6 to -1.4]; 22 trials, 2721 participants), diurnal systolic blood pressure (WMD, -2.4 points [95% CI, -3.9 to -0.9]; 15 trials, 1190 participants), and diurnal diastolic blood pressure (WMD, -1.3 points [95% CI, -2.2 to -0.4]; 15 trials, 1190 participants). CPAP was associated with modest improvement in sleep-related quality of life (Cohen d, 0.28 [95% CI, 0.14 to 0.42]; 13 trials, 2325 participants). Mandibular advancement devices (MADs) and weight loss programs were also associated with reduced AHI and excessive sleepiness. Common adverse effects of CPAP and MADs included oral or nasal dryness, irritation, and pain, among others. In cohort studies, there was a consistent association between AHI and all-cause mortality. There is uncertainty about the accuracy or clinical utility of all potential screening tools. Multiple treatments for OSA reduce AHI, ESS scores, and blood pressure. Trials of CPAP and other treatments have not established whether treatment reduces mortality or improves most other health outcomes, except for modest improvement in sleep-related quality of life.
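The weighted mean differences above come from inverse-variance pooling across trials. The sketch below shows the fixed-effect form of that calculation (the review used random-effects models, which additionally add a between-trial variance term to each weight); all trial effects and standard errors are invented for illustration.

```python
import math

# Hedged sketch of inverse-variance pooling behind a weighted mean
# difference (WMD), e.g. CPAP-vs-sham change in AHI. Data are made up.

def pooled_wmd_fixed(effects, ses):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se_pooled, est + 1.96 * se_pooled)

effects = [-30.0, -36.0, -34.0]   # per-trial mean differences in AHI
ses = [4.0, 5.0, 3.0]             # per-trial standard errors
wmd, ci = pooled_wmd_fixed(effects, ses)
```

Precise trials (small standard errors) dominate the pooled estimate, which is why a meta-analytic WMD can be far narrower than any single trial's confidence interval.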
Contingency Table Browser - prediction of early stage protein structure.
Kalinowska, Barbara; Krzykalski, Artur; Roterman, Irena
2015-01-01
The Early Stage (ES) intermediate represents the starting structure in protein folding simulations based on the Fuzzy Oil Drop (FOD) model. The accuracy of FOD predictions depends greatly on the accuracy of the chosen intermediate. A suitable intermediate can be constructed using the sequence-structure relationship information contained in the so-called contingency table, which expresses the likelihood of encountering various structural motifs for each tetrapeptide fragment in the amino acid sequence. The limited accuracy with which such structures could previously be predicted provided the motivation for a more in-depth study of the contingency table itself. The Contingency Table Browser is a tool which can visualize, search and analyze the table. Our work presents possible applications of the Contingency Table Browser, among them the analysis of specific protein sequences with respect to their structural ambiguity.
Knowledge Mapping: A Multipurpose Task Analysis Tool.
ERIC Educational Resources Information Center
Esque, Timm J.
1988-01-01
Describes knowledge mapping, a tool developed to increase the objectivity and accuracy of task difficulty ratings for job design. Application in a semiconductor manufacturing environment is discussed, including identifying prerequisite knowledge for a given task; establishing training development priorities; defining knowledge levels; identifying…
Managing complex research datasets using electronic tools: A meta-analysis exemplar
Brown, Sharon A.; Martin, Ellen E.; Garcia, Theresa J.; Winter, Mary A.; García, Alexandra A.; Brown, Adama; Cuevas, Heather E.; Sumlin, Lisa L.
2013-01-01
Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, e.g., EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process, as well as enhancing communication among research team members. The purpose of this paper is to describe the electronic processes we designed, using commercially available software, for an extensive quantitative model-testing meta-analysis we are conducting. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to: decide on which electronic tools to use, determine how these tools would be employed, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members. PMID:23681256
Managing complex research datasets using electronic tools: a meta-analysis exemplar.
Brown, Sharon A; Martin, Ellen E; Garcia, Theresa J; Winter, Mary A; García, Alexandra A; Brown, Adama; Cuevas, Heather E; Sumlin, Lisa L
2013-06-01
Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, for example, EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process as well as enhancing communication among research team members. The purpose of this article is to describe the electronic processes designed, using commercially available software, for an extensive, quantitative model-testing meta-analysis. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to decide on which electronic tools to use, determine how these tools would be used, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members.
Accuracy and reliability of peer assessment of athletic training psychomotor laboratory skills.
Marty, Melissa C; Henning, Jolene M; Willse, John T
2010-01-01
Peer assessment is defined as students judging the level or quality of a fellow student's understanding. No researchers have yet demonstrated the accuracy or reliability of peer assessment in athletic training education. To determine the accuracy and reliability of peer assessment of athletic training students' psychomotor skills. Cross-sectional study. Entry-level master's athletic training education program. First-year (n = 5) and second-year (n = 8) students. Participants evaluated 10 videos of a peer performing 3 psychomotor skills (middle deltoid manual muscle test, Faber test, and Slocum drawer test) on 2 separate occasions using a valid assessment tool. Accuracy of each peer-assessment score was examined through percentage correct scores. We used a generalizability study to determine how reliable athletic training students were in assessing a peer performing the aforementioned skills. Decision studies using generalizability theory demonstrated how the peer-assessment scores were affected by the number of participants and number of occasions. Participants had a high percentage of correct scores: 96.84% for the middle deltoid manual muscle test, 94.83% for the Faber test, and 97.13% for the Slocum drawer test. They were not able to reliably assess a peer performing any of the psychomotor skills on only 1 occasion. However, the φ increased (exceeding the 0.70 minimal standard) when 2 participants assessed the skill on 3 occasions (φ = 0.79) for the Faber test, with 1 participant on 2 occasions (φ = 0.76) for the Slocum drawer test, and with 3 participants on 2 occasions for the middle deltoid manual muscle test (φ = 0.72). Although students did not detect all errors, they assessed their peers with an average of 96% accuracy. Having only 1 student assess a peer performing certain psychomotor skills was less reliable than having more than 1 student assess those skills on more than 1 occasion. 
Peer assessment of psychomotor skills could be an important part of the learning process and a tool to supplement instructor assessment.
Framework for objective evaluation of privacy filters
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Melle, Andrea; Dugelay, Jean-Luc; Ebrahimi, Touradj
2013-09-01
Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in image and video. However, little is understood regarding the effectiveness of such tools and especially their impact on the underlying surveillance tasks, leading to a tradeoff between the preservation of privacy offered by these tools and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case where the privacy of people is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them with varying strengths to people's faces from different public datasets of video surveillance footage. The accuracy of a face detection algorithm was used as a measure of intelligibility (a face should be detected to perform a surveillance task), and the accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after application of an ideal privacy protection tool, an obfuscated face would be visible as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, an increase in the strength of the privacy filters under consideration leads to an increase in privacy (i.e., reduction in recognition accuracy) and to a decrease in intelligibility (i.e., reduction in detection accuracy). Masking also proved to be the most favorable filter across all tested datasets.
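One of the filters evaluated above, pixelization, can be sketched in a few lines: each b×b block of the face region is replaced by its mean, and a larger block size b corresponds to a stronger filter. The array values below are a toy stand-in for a grayscale face patch.

```python
import numpy as np

# Minimal sketch of a pixelization privacy filter: average each bxb block.
# Assumes b evenly divides both image dimensions; data are illustrative.

def pixelate(img, b):
    """Pixelate a 2-D grayscale image with block size b."""
    h, w = img.shape
    blocks = img.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
    return np.repeat(np.repeat(blocks, b, axis=0), b, axis=1)

face = np.arange(16.0).reshape(4, 4)   # toy 4x4 "face" patch
strong = pixelate(face, 4)             # one block: whole patch averaged
weak = pixelate(face, 2)               # four blocks: milder obscuring
```

In the framework's terms, increasing b should lower face recognition accuracy (more privacy) before it lowers face detection accuracy (less intelligibility), and the gap between those two curves is what the objective evaluation measures.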
Krishnan, Neeraja M.; Gaur, Prakhar; Chaudhary, Rakshit; Rao, Arjun A.; Panda, Binay
2012-01-01
Copy Number Alterations (CNAs), such as deletions and duplications, compose a larger percentage of genetic variation than single nucleotide polymorphisms or other structural variations in cancer genomes that undergo major chromosomal rearrangements. It is, therefore, imperative to identify cancer-specific somatic copy number alterations (SCNAs), with respect to matched normal tissue, in order to understand their association with the disease. We have devised an accurate, sensitive, and easy-to-use tool, COPS (COpy number using Paired Samples), for detecting SCNAs. We rigorously tested the performance of COPS using simulated short sequence reads at various SCNA sizes and coverages, read depths, and read lengths, and also with real tumor:normal paired samples. We found COPS to perform better than other known SCNA detection tools for all evaluated parameters, namely, sensitivity (detection of true positives), specificity (detection of false positives) and size accuracy. COPS performed well for sequencing reads of all lengths when used with most upstream read alignment tools. Additionally, by incorporating a downstream boundary segmentation detection tool, the accuracy of SCNA boundaries was further improved. Here, we report an accurate, sensitive and easy-to-use tool for detecting cancer-specific SCNAs using short-read sequence data. In addition to cancer, COPS can be used for any disease as long as sequence reads from both disease and normal samples from the same individual are available. An added boundary segmentation detection module makes COPS-detected SCNA boundaries more specific for the samples studied. COPS is available at ftp://115.119.160.213 with username “cops” and password “cops”. PMID:23110103
Active depth-guiding handheld micro-forceps for membranectomy based on CP-SSOCT
NASA Astrophysics Data System (ADS)
Cheon, Gyeong Woo; Lee, Phillip; Gonenc, Berk; Gehlbach, Peter L.; Kang, Jin U.
2016-03-01
In this study, we demonstrate a handheld motion-compensated micro-forceps system for epiretinal membrane peeling, using common-path swept source optical coherence tomography with highly accurate depth-targeting and depth-locking. Two motors and a touch sensor were used to separate the two independent motions: motion compensation and tool-tip manipulation. A smart motion monitoring and guiding algorithm was devised for precise and intuitive freehand control. Ex-vivo experiments were performed to evaluate accuracy in a bovine retinal membrane peeling model. The evaluation demonstrated a system capability of 40 µm accuracy when peeling the epithelial layer of the bovine retina.
Improvement of CD-SEM mark position measurement accuracy
NASA Astrophysics Data System (ADS)
Kasa, Kentaro; Fukuhara, Kazuya
2014-04-01
CD-SEM is now attracting attention as a tool that can accurately measure the positional error of device patterns. However, measurement accuracy can degrade due to pattern asymmetry, as in the case of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias has been proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.
Xuan, Min; Zhou, Fengsheng; Ding, Yan; Zhu, Qiaoying; Dong, Ji; Zhou, Hao; Cheng, Jun; Jiang, Xiao; Wu, Pengxi
2018-04-01
To review the diagnostic accuracy of contrast-enhanced ultrasound (CEUS) in detecting residual or recurrent liver tumors after radiofrequency ablation (RFA), using contrast-enhanced computed tomography and/or contrast-enhanced magnetic resonance imaging as the reference standard. MEDLINE, EMBASE, and COCHRANE were systematically searched for all potentially eligible studies comparing CEUS with the reference standard following RFA. Risk of bias and applicability concerns were addressed by adopting the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Pooled point estimates for sensitivity, specificity, positive and negative likelihood ratios, and diagnostic odds ratios (DOR) with 95% CI were computed before plotting the sROC (summary receiver operating characteristic) curve. Meta-regression and subgroup analysis were used to identify the source of the heterogeneity that was detected. Publication bias was evaluated using Deeks' funnel plot asymmetry test. Ten eligible studies on 1162 lesions, dating from 2001 to 2016, were included in the final analysis. The quality of the included studies assessed by the QUADAS-2 tool was considered reasonable. The pooled sensitivity and specificity of CEUS in detecting residual or recurrent liver tumors were 0.90 (95% CI 0.85-0.94) and 1.00 (95% CI 0.99-1.00), respectively. The overall DOR was 420.10 (95% CI 142.30-1240.20). The sources of heterogeneity could not be precisely identified by meta-regression or subgroup analysis. No evidence of publication bias was found. This study confirmed that CEUS exhibits high sensitivity and specificity in assessing therapeutic responses to RFA for liver tumors.
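The diagnostic odds ratio pooled above is computed per study from a 2×2 table. As a hedged sketch (the counts below are invented, not the review's data), a single-study DOR with a log-scale 95% CI looks like this:

```python
import math

# Sketch: diagnostic odds ratio (DOR) from one 2x2 accuracy table.
# DOR = (TP*TN)/(FP*FN); CI computed on the log scale. Counts invented.

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Return (DOR, 95% CI); applies a 0.5 continuity correction if any cell is 0."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, (lo, hi)

dor, ci = diagnostic_odds_ratio(tp=90, fp=2, fn=10, tn=200)
```

Very high DORs with wide intervals, as in the review's pooled 420.10 (95% CI 142.30-1240.20), typically reflect near-perfect specificity driving the FP cell toward zero.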
Condition assessment of timber bridges. 2, Evaluation of several stress-wave tools
Brian K. Brashaw; Robert J. Vatalaro; James P. Wacker; Robert J. Ross
2005-01-01
This study was conducted to evaluate the accuracy and reliability of several stress-wave devices widely used for locating deteriorated areas in timber bridge members. Bridge components containing different levels of natural decay were tested using various devices. The specimens were then sawn (along their length) into slabs to expose their interior condition. The...
Price, C L; Brace-McDonnell, S J; Stallard, N; Bleetman, A; Maconochie, I; Perkins, G D
2016-05-01
Context Triage tools are an essential component of the emergency response to a major incident. Although fortunately rare, mass casualty incidents involving children are possible, mandating reliable triage tools to determine the priority of treatment. The objective was to determine the performance characteristics of five major incident triage tools amongst paediatric casualties who had sustained traumatic injuries, in a retrospective observational cohort study using data from 31,292 patients aged less than 16 years who sustained a traumatic injury. Data were obtained from the UK Trauma Audit and Research Network (TARN) database. Interventions Statistical evaluation of five triage tools (JumpSTART, START, CareFlight, Paediatric Triage Tape/Sieve and Triage Sort) to predict death or severe traumatic injury (injury severity score >15). Main outcome measures Performance characteristics of triage tools (sensitivity, specificity and level of agreement between triage tools) to identify patients at high risk of death or severe injury. Of the 31,292 cases, 1029 died (3.3%), 6842 (21.9%) had major trauma (defined by an injury severity score >15) and 14,711 (47%) were aged 8 years or younger. There was variation in the performance accuracy of the tools in predicting major trauma or death (sensitivities ranging between 36.4% and 96.2%; specificities 66.0-89.8%). Performance characteristics varied with the age of the child. CareFlight had the best overall performance at predicting death, with sensitivity and specificity (95% CI) of 95.3% (93.8-96.8) and 80.4% (80.0-80.9), respectively. JumpSTART was superior for the triage of children under 8 years, with sensitivity and specificity (95% CI) of 86.3% (83.1-89.5) and 84.8% (84.2-85.5), respectively. The triage tools were generally better at identifying patients who would die than those with non-fatal severe injury.
This statistical evaluation has demonstrated variability in the accuracy of triage tools at predicting outcomes for children who sustain traumatic injuries. No single tool performed consistently well across all evaluated scenarios.
The Detection of Malingering: A New Tool to Identify Made-Up Depression.
Monaro, Merylin; Toncini, Andrea; Ferracuti, Stefano; Tessari, Gianmarco; Vaccaro, Maria G; De Fazio, Pasquale; Pigato, Giorgio; Meneghel, Tiziano; Scarpazza, Cristina; Sartori, Giuseppe
2018-01-01
Major depression is a high-prevalence mental disease with major socio-economic impact, for both the direct and the indirect costs. Major depression symptoms can be faked or exaggerated in order to obtain economic compensation from insurance companies. Critically, depression is potentially easily malingered, as the symptoms that characterize this psychiatric disorder are not difficult to emulate. Although some tools to assess malingering of psychiatric conditions are already available, they are principally based on self-reporting and are thus easily faked. In this paper, we propose a new method to automatically detect the simulation of depression, which is based on the analysis of mouse movements while the patient is engaged in a double-choice computerized task, responding to simple and complex questions about depressive symptoms. This tool clearly has a key advantage over the other tools: the kinematic movement is not consciously controllable by the subjects, and thus it is almost impossible to deceive. Two groups of subjects were recruited for the study. The first one, which was used to train different machine-learning algorithms, comprises 60 subjects (20 depressed patients and 40 healthy volunteers); the second one, which was used to test the machine-learning models, comprises 27 subjects (9 depressed patients and 18 healthy volunteers). In both groups, the healthy volunteers were randomly assigned to the liars and truth-tellers group. Machine-learning models were trained on mouse dynamics features, which were collected during the subject response, and on the number of symptoms reported by participants. Statistical results demonstrated that individuals that malingered depression reported a higher number of depressive and non-depressive symptoms than depressed participants, whereas individuals suffering from depression took more time to perform the mouse-based tasks compared to both truth-tellers and liars. 
Machine-learning models reached a classification accuracy of up to 96% in distinguishing liars from depressed patients and truth-tellers. Despite this, the data are not conclusive, as the accuracy of the algorithm has not been compared with the accuracy of clinicians; this study presents a potentially useful method that is worth further investigation.
Smith, Toby O; Simpson, Michael; Ejindu, Vivian; Hing, Caroline B
2013-04-01
The purpose of this study was to assess the diagnostic test accuracy of magnetic resonance imaging (MRI), magnetic resonance arthrography (MRA) and multidetector arrays in CT arthrography (MDCT) for assessing chondral lesions in the hip joint. A review of the published and unpublished literature databases was performed to identify all studies reporting the diagnostic test accuracy (sensitivity/specificity) of MRI, MRA or MDCT for the assessment of adults with chondral (cartilage) lesions of the hip with surgical comparison (arthroscopic or open) as the reference test. All included studies were reviewed using the quality assessment of diagnostic accuracy studies appraisal tool. Pooled sensitivity, specificity, likelihood ratios and diagnostic odds ratios were calculated with 95 % confidence intervals using a random-effects meta-analysis for MRI, MRA and MDCT imaging. Eighteen studies satisfied the eligibility criteria. These included 648 hips from 637 patients. MRI indicated a pooled sensitivity of 0.59 (95 % CI: 0.49-0.70) and specificity of 0.94 (95 % CI: 0.90-0.97), and MRA sensitivity and specificity values were 0.62 (95 % CI: 0.57-0.66) and 0.86 (95 % CI: 0.83-0.89), respectively. The diagnostic test accuracy for the detection of hip joint cartilage lesions is currently superior for MRI compared with MRA. There were insufficient data to perform meta-analysis for MDCT or CTA protocols. Based on the current limited diagnostic test accuracy of the use of magnetic resonance or CT, arthroscopy remains the most accurate method of assessing chondral lesions in the hip joint.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2015-01-01
Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparing the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and over time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions Mean accuracies obtained under the gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use.
However, improvements in magnetic compensation and in alignment between AHRS modules are desirable in order for AHRS to reach their full potential in capturing clinical outcomes. PMID:25811838
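The absolute accuracy metric described above compares an AHRS orientation estimate against a gold standard. A common way to do this (sketched below with invented rotations; the study's exact error metric is not specified in the abstract) is the geodesic angle of the relative rotation between the two orientation matrices:

```python
import math
import numpy as np

# Illustrative sketch: orientation error as the angle of the relative
# rotation R_ref^T @ R_est between a reference and an estimate.

def rotation_angle_deg(R_ref, R_est):
    """Geodesic angle in degrees between two 3x3 rotation matrices."""
    R_rel = R_ref.T @ R_est
    c = (np.trace(R_rel) - 1.0) / 2.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def Rz(deg):
    """Rotation about the z-axis by deg degrees (toy heading model)."""
    a = math.radians(deg)
    return np.array([[math.cos(a), -math.sin(a), 0.0],
                     [math.sin(a),  math.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Invented example: a 1.5 degree heading drift relative to the gold standard.
err = rotation_angle_deg(Rz(30.0), Rz(31.5))
```

Tracking this angle over a trial is one way the decrease in absolute accuracy over time would show up; computing it between pairs of modules gives the relative accuracy counterpart.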
A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM
NASA Astrophysics Data System (ADS)
Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui
2014-12-01
Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that changing the source spacing has a marked influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of low-resistivity and high-resistivity formations. The results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy and is suitable for simulating the response of resistivity LWD tools to guide geosteering.
USDA-ARS?s Scientific Manuscript database
Quantitative real-time polymerase chain reaction (qRT-PCR) is the most important tool in measuring levels of gene expression due to its accuracy, specificity, and sensitivity. However, the accuracy of qRT-PCR analysis strongly depends on transcript normalization using stably expressed reference gene...
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
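The filtering principle behind the estimator can be illustrated with a minimal one-dimensional Kalman filter sketch (the function, noise parameters, and measurements below are invented for illustration; the study's filter is a full multi-sensor translational state estimator):

```python
# Minimal 1D Kalman filter sketch (illustrative only; not the study's
# full position/air-data/acceleration sensor fusion).
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: propagate uncertainty
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with measurement residual
        p = (1.0 - k) * p       # shrink uncertainty after update
        estimates.append(x)
    return estimates

est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
```

With noisy measurements around 1.0, the estimate converges toward the true value while smoothing the noise.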
Hyperspectral analysis of Columbia spotted frog habitat
Shive, J.P.; Pilliod, D.S.; Peterson, C.R.
2010-01-01
Wildlife managers increasingly are using remotely sensed imagery to improve habitat delineations and sampling strategies. Advances in remote sensing technology, such as hyperspectral imagery, provide more information than previously was available with multispectral sensors. We evaluated the accuracy of high-resolution hyperspectral image classifications in identifying wetlands and wetland habitat features important for Columbia spotted frogs (Rana luteiventris) and compared the results to multispectral image classification and United States Geological Survey topographic maps. The study area spanned 3 lake basins in the Salmon River Mountains, Idaho, USA. Hyperspectral data were collected with an airborne sensor on 30 June 2002 and on 8 July 2006. A 12-year comprehensive ground survey of the study area for Columbia spotted frog reproduction served as validation for the image classifications. Hyperspectral image classification accuracy of wetlands was high, with a producer's accuracy of 96% (44 wetlands) correctly classified with the 2002 data and 89% (41 wetlands) correctly classified with the 2006 data. We applied habitat-based rules to delineate breeding habitat from other wetlands, and successfully predicted 74% (14 wetlands) of known breeding wetlands for the Columbia spotted frog. Emergent sedge microhabitat classification showed promise for directly predicting Columbia spotted frog egg mass locations within a wetland by correctly identifying 72% (23 of 32) of known locations. Our study indicates hyperspectral imagery can be an effective tool for mapping spotted frog breeding habitat in the selected mountain basins. We conclude that this technique has potential for improving site selection for inventory and monitoring programs conducted across similar wetland habitat and can be a useful tool for delineating wildlife habitats. © 2010 The Wildlife Society.
Russo, Giorgio Ivan; Regis, Federica; Castelli, Tommaso; Favilla, Vincenzo; Privitera, Salvatore; Giardina, Raimondo; Cimino, Sebastiano; Morgia, Giuseppe
2017-08-01
Markers for prostate cancer (PCa) have progressed over recent years. In particular, the prostate health index (PHI) and the 4-kallikrein (4K) panel have been demonstrated to improve the diagnosis of PCa. We aimed to review the diagnostic accuracy of PHI and the 4K panel for PCa detection. We performed a systematic literature search of the PubMed, EMBASE, Cochrane, and Academic OneFile databases until July 2016. We included diagnostic accuracy studies that used PHI or the 4K panel for the diagnosis of PCa or high-grade PCa. Methodological quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Twenty-eight studies including 16,762 patients were included in the analysis. The pooled data showed a sensitivity of 0.89 and 0.74 for PHI and the 4K panel, respectively, for PCa detection, and a pooled specificity of 0.34 and 0.60, respectively. The area under the curve (AUC) derived from the hierarchical summary receiver operating characteristic (HSROC) showed an accuracy of 0.76 and 0.72 for PHI and the 4K panel, respectively. For high-grade PCa detection, the pooled sensitivity was 0.93 and 0.87 for PHI and the 4K panel, respectively, whereas the pooled specificity was 0.34 and 0.61, respectively. The derived AUC from the HSROC showed an accuracy of 0.82 and 0.81 for PHI and the 4K panel, respectively. Both PHI and the 4K panel provided good diagnostic accuracy in detecting overall and high-grade PCa. Copyright © 2016 Elsevier Inc. All rights reserved.
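The sensitivity and specificity pooled above are derived from 2×2 diagnostic tables. A minimal sketch (the counts below are made up, chosen only to reproduce the pooled PHI values, and are not the review's actual data):

```python
# Sensitivity and specificity from a 2x2 diagnostic table
# (tp/fp/fn/tn counts are illustrative, not from the meta-analysis).
def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

se, sp = sens_spec(tp=89, fp=66, fn=11, tn=34)
# reproduces the pooled PHI values: se = 0.89, sp = 0.34
```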
CRF: detection of CRISPR arrays using random forest.
Wang, Kai; Liang, Chun
2017-01-01
CRISPRs (clustered regularly interspaced short palindromic repeats) are particular repeat sequences found in a wide range of bacterial and archaeal genomes. Several tools are available for detecting CRISPR arrays in the genomes of both domains. Here we developed a new web-based CRISPR detection tool named CRF (CRISPR Finder by Random Forest). Unlike other CRISPR detection tools, CRF uses a random forest classifier to filter out invalid CRISPR arrays from all putative candidates, thereby enhancing detection accuracy. In particular, triplet elements that combine both sequence content and structure information were extracted from CRISPR repeats for classifier training. The classifier achieved high accuracy and sensitivity. Moreover, CRF offers a highly interactive web interface for robust data visualization that is not available in other CRISPR detection tools. After detection, the query sequence, the CRISPR array architecture, and the sequences and secondary structures of CRISPR repeats and spacers can be visualized for examination and validation. CRF is freely available at http://bioinfolab.miamioh.edu/crf/home.php.
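As a hypothetical sketch of the sequence-content side of such features, 3-mer (triplet) frequencies can be extracted from a repeat sequence as follows (CRF's actual triplet elements also encode secondary structure, which is not modeled here; the example sequence is arbitrary):

```python
from collections import Counter

# Hypothetical 3-mer frequency features from a repeat sequence
# (sequence-content only; CRF's triplets also use structure information).
def triplet_features(seq):
    seq = seq.upper()
    counts = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

feats = triplet_features("GTTTTAGAGCTA")
```

Feature vectors like this one could then be fed to a random forest classifier for training.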
Some aspects of precise laser machining - Part 2: Experimental
NASA Astrophysics Data System (ADS)
Grabowski, Marcin; Wyszynski, Dominik; Ostrowski, Robert
2018-05-01
The paper describes the role of laser beam polarization in the quality of a laser-machined cutting tool edge. In micromachining, the preparation of the cutting tools plays a key role in the dimensional accuracy, sharpness, and quality of the cutting edges. To assure the quality and dimensional accuracy of the cutting tool edge, it is necessary to control laser polarization. A diode-pumped Nd:YAG 532 nm pulsed laser was applied in the research, with linear (horizontal or vertical) beam polarization. The goal of the research was to describe the impact of laser beam polarization on the efficiency of the cutting process and on the quality of the machined parts (edge, surface) made of polycrystalline diamond (PCD) and cubic boron nitride (cBN). The application of precise cutting tools in micromachining has a significant impact on the minimum uncut chip thickness and the quality of the parts. The research was carried out within the INNOLOT program funded by the National Centre for Research and Development.
Fast algorithms for Quadrature by Expansion I: Globally valid expansions
NASA Astrophysics Data System (ADS)
Rachh, Manas; Klöckner, Andreas; O'Neil, Michael
2017-09-01
The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.
Michelessi, Manuele; Lucenteforte, Ersilia; Miele, Alba; Oddone, Francesco; Crescioli, Giada; Fameli, Valeria; Korevaar, Daniël A; Virgili, Gianni
2017-01-01
Research has shown a modest adherence of diagnostic test accuracy (DTA) studies in glaucoma to the Standards for Reporting of Diagnostic Accuracy Studies (STARD). We applied the updated 30-item STARD 2015 checklist to a set of studies included in a Cochrane DTA systematic review of imaging tools for diagnosing manifest glaucoma. Three pairs of reviewers, including one senior reviewer who assessed all studies, independently checked the adherence of each study to STARD 2015. Adherence was analyzed on an individual-item basis. Logistic regression was used to evaluate the effect of publication year and impact factor on adherence. We included 106 DTA studies, published between 2003 and 2014 in journals with a median impact factor of 2.6. Overall adherence was 54.1% for 3,286 individual ratings across 31 items, with a mean of 16.8 (SD: 3.1; range 8-23) items per study. Large variability in adherence to reporting standards was detected across individual STARD 2015 items, ranging from 0 to 100%. Nine items (1: identification as a diagnostic accuracy study in title/abstract; 6: eligibility criteria; 10: index test (a) and reference standard (b) definition; 12: cut-off definitions for index test (a) and reference standard (b); 14: estimation of diagnostic accuracy measures; 21a: severity spectrum of the diseased; 23: cross-tabulation of the index test and reference standard results) were adequately reported in more than 90% of the studies. Conversely, 10 items (3: scientific and clinical background of the index test; 11: rationale for the reference standard; 13b: blinding of index test results; 17: analyses of variability; 18: sample size calculation; 19: study flow diagram; 20: baseline characteristics of participants; 28: registration number and registry; 29: availability of study protocol; 30: sources of funding) were adequately reported in less than 30% of the studies.
Only four items showed a statistically significant improvement over time: missing data (16), baseline characteristics of participants (20), estimates of diagnostic accuracy (24), and sources of funding (30). Adherence to STARD 2015 among DTA studies in glaucoma research is incomplete and has increased only modestly over time.
Evaluation and Analysis of F-16XL Wind Tunnel Data From Static and Dynamic Tests
NASA Technical Reports Server (NTRS)
Kim, Sungwan; Murphy, Patrick C.; Klein, Vladislav
2004-01-01
A series of wind tunnel tests was conducted at the NASA Langley Research Center as part of an ongoing effort to develop and test mathematical models for aircraft rigid-body aerodynamics in nonlinear, unsteady flight regimes. Analysis of measurement accuracy, especially for nonlinear dynamic systems that may exhibit complicated behaviors, is an essential component of this effort. In this report, tools for harmonic analysis of dynamic data and for assessing measurement accuracy are presented. A linear aerodynamic model appropriate for conventional forced-oscillation experiments is assumed, although more general models can be used with these tools. Application of the tools to experimental data is demonstrated, and the results indicate the levels of uncertainty in output measurements that can arise from experimental setup, calibration procedures, mechanical limitations, and input errors.
Real-time teleophthalmology versus face-to-face consultation: A systematic review.
Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W
2017-08-01
Introduction Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to provide increased coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of real-time teleophthalmology against face-to-face consultation. This systematic review aims to determine whether real-time teleophthalmology provides accuracy comparable to face-to-face consultation for the diagnosis of common eye health conditions. Methods A search of the PubMed, Embase, Medline and Cochrane databases, with manual citation review, was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used to assess risk of bias. Results Twelve studies were included, with participants ranging from 4 to 89 years old. A broad range of conditions was assessed, including corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to undisclosed recruitment processes. The index test showed a high risk of bias in the included studies, owing to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was low overall (75%), as was the risk due to flow and timing (75%). Conclusion In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative to overcome poor internet transmission speeds.
Pobocik, Tamara
2015-01-01
This quantitative research study used a pretest/posttest design and examined how an educational electronic documentation system helped nursing students identify the accurate "related to" statement of the nursing diagnosis for the patient in a case study. The sample population consisted of senior nursing students in a bachelor of science in nursing program in the northeastern United States, divided into distinct control and intervention groups. The intervention group used the educational electronic documentation system for three class assignments. Both groups were given pretest and posttest case studies. The Accuracy Tool was used to score the students' responses to the "related to" statement of the nursing diagnosis given at the end of the case study. The Accuracy Tool scores were entered into SPSS and analyzed with paired t tests for statistical significance. The intervention group's scores differed statistically from pretest to posttest, while the control group's scores remained the same. The recommendation to nursing education is to use the educational electronic documentation system as a teaching pedagogy to help nursing students prepare for nursing practice. © 2014 NANDA International, Inc.
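The pretest/posttest comparison above relies on the paired t statistic, which can be sketched as follows (the study used SPSS; the function and the toy pre/post scores below are invented for illustration):

```python
import math

# Paired t statistic for pretest/posttest scores (illustrative sketch;
# the scores below are made up, not the study's data).
def paired_t(pre, post):
    d = [b - a for a, b in zip(pre, post)]   # per-student differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)         # t with n-1 degrees of freedom

t = paired_t(pre=[2, 3, 2, 4, 3], post=[4, 4, 3, 5, 5])
```

A large |t| relative to the t distribution with n-1 degrees of freedom indicates a statistically significant pre/post change.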
Lopez, Gregory; Wright, Rick; Martin, David; Jung, James; Bracey, Daniel; Gupta, Ranjan
2015-04-15
Psychomotor testing has recently been incorporated into residency training programs not only to objectively assess a surgeon's abilities but also to address current patient-safety advocacy and medicolegal trends. The purpose of this study was to develop and test a cost-effective psychomotor training and assessment tool, the Fundamentals of Orthopaedic Surgery (FORS), for junior-level orthopaedic surgery resident education. An orthopaedic skills board assessing six different psychomotor skills was made from supplies purchased at a local hardware store, at a total cost of less than $350. The six skills were fracture reduction, three-dimensional drill accuracy, simulated fluoroscopy-guided drill accuracy, depth-of-plunge minimization, drill-by-feel accuracy, and suture speed and quality. Medical students, residents, and attending physicians from three orthopaedic surgery residency programs accredited by the Accreditation Council for Graduate Medical Education participated in the study. Twenty-five medical students were retained for longitudinal training and testing over four weeks. Each training session involved an initial examination followed by thirty minutes of board training. The time to perform each task was measured, with accuracy measurements for the appropriate tasks. Statistical analysis was done with one-way analysis of variance, with significance set at p < 0.05. Forty-seven medical students, twenty-nine attending physicians, and fifty-eight orthopaedic surgery residents participated in the study. Stratification among medical students, junior residents, and senior residents and/or attending physicians was found in all tasks. The twenty-five medical students retained for longitudinal training improved significantly above the junior resident level in four of the six tasks.
The FORS is an effective simulator of basic motor skills that translate across a wide variety of operations, and it has the potential to advance junior-level participants to the senior resident skill level. The FORS may serve as a valuable tool for resident education. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
Next-generation genotype imputation service and methods.
Das, Sayantan; Forer, Lukas; Schönherr, Sebastian; Sidore, Carlo; Locke, Adam E; Kwong, Alan; Vrieze, Scott I; Chew, Emily Y; Levy, Shawn; McGue, Matt; Schlessinger, David; Stambolian, Dwight; Loh, Po-Ru; Iacono, William G; Swaroop, Anand; Scott, Laura J; Cucca, Francesco; Kronenberg, Florian; Boehnke, Michael; Abecasis, Gonçalo R; Fuchsberger, Christian
2016-10-01
Genotype imputation is a key component of genetic association studies, where it increases power, facilitates meta-analysis, and aids interpretation of signals. Genotype imputation is computationally demanding and, with current tools, typically requires access to a high-performance computing cluster and to a reference panel of sequenced genomes. Here we describe improvements to imputation machinery that reduce computational requirements by more than an order of magnitude with no loss of accuracy in comparison to standard imputation tools. We also describe a new web-based service for imputation that facilitates access to new reference panels and greatly improves user experience and productivity.
A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.
Mung, Jay; Vignon, Francois; Jain, Ameet
2011-01-01
In the past decade, ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools; as the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor showed robust performance over a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 ± 0.16 mm throughout the imaging volume of 55° × 27° × 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-millimeter accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.
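Tracking accuracy figures such as 0.36 ± 0.16 mm summarize per-trial 3D position errors; a minimal sketch of that computation (the coordinates below are toy values, not the experiment's data):

```python
import math
import statistics

# Mean 3D error between estimated and reference sensor positions
# (toy coordinates in mm; illustrative only).
def tracking_errors(estimated, reference):
    return [math.dist(e, r) for e, r in zip(estimated, reference)]

errs = tracking_errors([(0, 0, 0.3), (1, 1, 1)],
                       [(0, 0, 0),   (1, 1, 1.5)])
mean_err = statistics.mean(errs)   # reported alongside the SD
```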
Effects of tools inserted through snake-like surgical manipulators.
Murphy, Ryan J; Otake, Yoshito; Wolfe, Kevin C; Taylor, Russell H; Armand, Mehran
2014-01-01
Snake-like manipulators with a large, open lumen can offer improved treatment alternatives for minimally- and less-invasive surgeries. In these procedures, surgeons use the manipulator to introduce and control flexible tools in the surgical environment. This paper describes a predictive algorithm that estimates the configuration of nonconstant-curvature, cable-driven manipulators from tip position using energy minimization. During experimental bending of the manipulator with and without a tool inserted in its lumen, images were recorded from an overhead camera in conjunction with actuation cable tension and length. To investigate accuracy, the estimated manipulator configuration from the model was compared with the ground-truth configuration measured from the images. Additional analysis focused on the differences in response with and without a tool inserted through the lumen. Results indicate that the energy minimization model predicts manipulator configuration with an error of 0.24 ± 0.22 mm without tools in the lumen and 0.24 ± 0.19 mm with tools in the lumen (no significant difference, p = 0.81). Moreover, tools did not introduce noticeable perturbations in the manipulator trajectory; however, the force required to reach a given configuration increased. These results support the use of the proposed estimation method for calculating the shape of the manipulator with a tool inserted in its lumen when an accuracy of at least 1 mm is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, J; Gong, Y; Bar-Ad, V
Purpose: Accurate contour delineation is crucial for radiotherapy. Atlas-based automatic segmentation (ABAS) tools can be used to increase the efficiency of contour accuracy evaluation. This study aims to optimize the technical parameters utilized in the tool by exploring the impact of library size and atlas number on the accuracy of cardiac contour evaluation. Methods: Patient CT DICOMs from RTOG 0617 were used for this study. Five experienced physicians delineated the cardiac structures, including pericardium, atria, and ventricles, following an atlas guideline. The consistency of cardiac structure delineation using the atlas guideline was verified in a study with four observers and seventeen patients. The CT and cardiac structure DICOM files were then used for the ABAS technique. To study the impact of library size (LS) and atlas number (AN) on automatic contour accuracy, automatic contours were generated with varied technique parameters for five randomly selected patients. Three LS values (20, 60, and 100) were studied using commercially available software, with AN set to four as recommended by the manufacturer. Using the manual contour as the gold standard, the Dice Similarity Coefficient (DSC) was calculated between the manual and automatic contours, and five-patient averaged DSCs were calculated for each cardiac structure. To study the impact of AN, LS was set to 100 and AN was varied from one to five; the five-patient averaged DSCs were again calculated for each cardiac structure. Results: DSC values were highest when LS was 100 and AN was four: 0.90±0.02 for pericardium, 0.75±0.06 for atria, and 0.86±0.02 for ventricles. Conclusion: Comparing DSC values, the combination AN=4 and LS=100 gives the best performance. This project was supported by NCI grants U24CA12014, U24CA180803, U10CA180868, U10CA180822, a PA CURE grant, and Bristol-Myers Squibb and Eli Lilly.
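The Dice Similarity Coefficient used above measures overlap between two segmentations; for binary masks it reduces to a short computation (the flattened toy masks below are illustrative, not actual contour data):

```python
# Dice Similarity Coefficient between two binary masks, DSC = 2|A∩B|/(|A|+|B|)
# (toy flattened masks; real contours are 2D/3D voxel masks).
def dice(a, b):
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * intersection / (sum(a) + sum(b))

d = dice([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
# d = 0.75: three overlapping voxels out of 4 + 4 labeled voxels
```

A DSC of 1.0 indicates perfect overlap between manual and automatic contours; 0.0 indicates none.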
Image analysis software versus direct anthropometry for breast measurements.
Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako
2014-10-01
To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. Connecting the points defined seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared with direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.
3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis
NASA Astrophysics Data System (ADS)
Herfort, Benjamin; Höfle, Bernhard; Klonner, Carolin
2018-03-01
In this paper, we propose a method to crowdsource the task of complex three-dimensional information extraction from 3D point clouds. We design web-based 3D micro tasks tailored to assess segmented LiDAR point clouds of urban trees and investigate the quality of the approach in an empirical user study. Our results for three experiments of increasing complexity indicate that a single crowdsourcing task can be solved in a very short time, less than five seconds on average. Furthermore, the results of our empirical case study reveal that the accuracy, sensitivity and precision of 3D crowdsourcing are high for most information extraction problems. For the first experiment (binary classification with a single answer) we obtained an accuracy of 91%, a sensitivity of 95% and a precision of 92%. For the more complex tasks of the second experiment (multiple-answer classification) the accuracy ranges from 65% to 99% depending on the label class. Regarding the third experiment, the determination of the crown base height of individual trees, our study highlights that crowdsourcing can yield values with even higher accuracy than an automated computer-based approach. Finally, we found that the accuracy of the crowdsourced results for all experiments is hardly influenced by the characteristics of the input point cloud data or of the users. Importantly, the accuracy of the results can be estimated using agreement among volunteers as an intrinsic indicator, which makes a broad application of 3D micro-mapping very promising.
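The accuracy, sensitivity, and precision figures reported above come from comparing crowd answers against a reference labeling; a minimal sketch of those metrics (the toy binary labels below are invented, not the study's data):

```python
# Accuracy, sensitivity, and precision of predicted binary labels
# against reference labels (toy data; illustrative only).
def metrics(truth, pred):
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(truth)
    sensitivity = tp / (tp + fn)   # recall on positives
    precision = tp / (tp + fp)     # correctness of positive answers
    return accuracy, sensitivity, precision

acc, sens, prec = metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```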
Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien
2013-01-01
The aims of this study were, first, to compare the significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection identified by artificial neural network (ANN) and logistic regression (LR) models and, second, to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables using a Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that the ANN had two to three times as many significant predictors in the 1-, 3-, and 5-year survival models as the LR models. The accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the 1-, 3-, and 5-year survival estimation models using the ANN were superior to those of the LR in all training sets and most validation sets. The study demonstrated that the ANN not only identified a greater number of significant predictors of mortality but also provided more accurate prediction than conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707
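The AUROC used to compare the models equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one, which gives a compact rank-based computation (the labels and scores below are toy values, not the study's data):

```python
# AUROC via the Mann-Whitney (rank-sum) formulation: the fraction of
# positive/negative pairs where the positive scores higher, ties count 0.5
# (toy labels/scores; illustrative only).
def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
# auc = 0.75: three of four positive/negative pairs correctly ordered
```

An AUROC of 0.5 is chance-level discrimination; 1.0 is perfect ranking of cases by risk.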
Trocky, NM; Fontinha, M
2005-01-01
Data collected throughout the course of a clinical research trial must be reviewed continually for accuracy and completeness. The Oracle Clinical® (OC) data management application used to capture clinical data facilitates data integrity through pre-programmed validations, edit and range checks, and discrepancy management modules. These functions were not enough. Coupled with specially created reports in Oracle Discoverer® and Integrated Review™, both ad-hoc query and reporting tools, research staff have enhanced their ability to clean, analyze and report more accurate data captured within and among electronic Case Report Forms (eCRFs), by individual study or across multiple studies. PMID:16779428
Resident accuracy of joint line palpation using ultrasound verification.
Rho, Monica E; Chu, Samuel K; Yang, Aaron; Hameed, Farah; Lin, Cindy Yuchin; Hurh, Peter J
2014-10-01
To determine the accuracy of knee and acromioclavicular (AC) joint line palpation among Physical Medicine and Rehabilitation (PM&R) residents using ultrasound (US) verification. Cohort study. PM&R residency program at an academic institution. Twenty-four PM&R residents participating in a musculoskeletal US course (7 PGY-2, 8 PGY-3, and 9 PGY-4 residents). Before the start of the course, the residents were asked to palpate the AC joint and the lateral joint line of the knee in a female and a male model. Once the presumed joint line was localized, the residents were asked to tape an 18-gauge, 1.5-inch, blunt-tip needle parallel to the joint line on the overlying skin. The main outcome measure was US verification of correct needle placement over the joint line. Overall AC joint palpation accuracy was 16.7%, and lateral knee joint line palpation accuracy was 58.3%. There were no statistically significant differences in the accuracy of joint line palpation by resident level of education (at P < .05). Residents in this study demonstrated poor accuracy in identifying the AC joint and lateral knee joint line by palpation, using US as the criterion standard for verification. US may be a useful tool for advancing the current methods of teaching the physical examination in medical education. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Buczinski, Sébastien; L Ollivett, Terri; Dendukuri, Nandini
2015-05-01
There is currently no gold-standard method for diagnosing bovine respiratory disease (BRD) complex in pre-weaned Holstein dairy calves. Systematic thoracic ultrasonography (TUS) has been used as a proxy for BRD, but cannot be used directly by producers. The Wisconsin calf respiratory scoring chart (CRSC) is a simpler alternative, but with unknown accuracy. Our objective was to estimate the accuracy of the CRSC while adjusting for the lack of a gold standard. Two cross-sectional study populations from North America, one with high BRD prevalence (n=106 pre-weaned Holstein calves) and one with average BRD prevalence (n=85 pre-weaned Holstein calves), were studied. All calves were simultaneously assessed using the CRSC (cutoff ≥ 5) and TUS (cutoff ≥ 1 cm of lung consolidation). Bayesian latent class models allowing for conditional dependence were used, with informative priors for BRD prevalence and TUS accuracy (sensitivity (Se) and specificity (Sp)) and non-informative priors for CRSC accuracy. Robustness of the model was tested by relaxing the priors for prevalence or TUS accuracy. The Se of the CRSC was 62.4% (95% credible interval (CI): 47.9-75.8) and its Sp was 74.1% (64.9-82.8). The Se of TUS was 79.4% (66.4-90.9) and its Sp was 93.9% (88.0-97.6). The imperfect accuracy of the CRSC and TUS should be taken into account when using these tools to assess BRD status. Copyright © 2015 Elsevier B.V. All rights reserved.
Jiang, Y; Zhao, Y; Rodemann, B; Plieske, J; Kollers, S; Korzun, V; Ebmeyer, E; Argillier, O; Hinze, M; Ling, J; Röder, M S; Ganal, M W; Mette, M F; Reif, J C
2015-03-01
Genome-wide mapping approaches in diverse populations are powerful tools to unravel the genetic architecture of complex traits. The main goals of our study were to investigate the potential and limits to unravel the genetic architecture and to identify the factors determining the accuracy of prediction of the genotypic variation of Fusarium head blight (FHB) resistance in wheat (Triticum aestivum L.) based on data collected with a diverse panel of 372 European varieties. The wheat lines were phenotyped in multi-location field trials for FHB resistance and genotyped with 782 simple sequence repeat (SSR) markers, and 9k and 90k single-nucleotide polymorphism (SNP) arrays. We applied genome-wide association mapping in combination with fivefold cross-validations and observed surprisingly high accuracies of prediction for marker-assisted selection based on the detected quantitative trait loci (QTLs). Using a random sample of markers not selected for marker-trait associations revealed only a slight decrease in prediction accuracy compared with marker-based selection exploiting the QTL information. The same picture was confirmed in a simulation study, suggesting that relatedness is a main driver of the accuracy of prediction in marker-assisted selection of FHB resistance. When the accuracy of prediction of three genomic selection models was contrasted for the three marker data sets, no significant differences in accuracies among marker platforms and genomic selection models were observed. Marker density impacted the accuracy of prediction only marginally. Consequently, genomic selection of FHB resistance can be implemented most cost-efficiently based on low- to medium-density SNP arrays.
Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura
2016-01-01
Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proved able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. 
Moreover, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be satisfied when the TPS-QC tool generated re-optimized plans, without sacrificing other dosimetric endpoints. Beyond its feasibility and accuracy, the proposed TPS-QC tool is user-friendly and easy to operate, both necessary characteristics for clinical use.
Diagnostic validation of three test methods for detection of cyprinid herpesvirus 3 (CyHV-3).
Clouthier, Sharon C; McClure, Carol; Schroeder, Tamara; Desai, Megan; Hawley, Laura; Khatkar, Sunita; Lindsay, Melissa; Lowe, Geoff; Richard, Jon; Anderson, Eric D
2017-03-06
Cyprinid herpesvirus 3 (CyHV-3) is the aetiological agent of koi herpesvirus disease in koi and common carp. The disease is notifiable to the World Organisation for Animal Health. Three tests-quantitative polymerase chain reaction (qPCR), conventional PCR (cPCR) and virus isolation by cell culture (VI)-were validated to assess their fitness as diagnostic tools for detection of CyHV-3. Test performance metrics of diagnostic accuracy were sensitivity (DSe) and specificity (DSp). Repeatability and reproducibility were measured to assess diagnostic precision. Estimates of test accuracy, in the absence of a gold standard reference test, were generated using latent class models. Test samples originated from wild common carp naturally exposed to CyHV-3 or domesticated koi either virus free or experimentally infected with the virus. Three laboratories in Canada participated in the precision study. Moderate to high repeatability (81 to 99%) and reproducibility (72 to 97%) were observed for the qPCR and cPCR tests. The lack of agreement observed between some of the PCR test pair results was attributed to cross-contamination of samples with CyHV-3 nucleic acid. Accuracy estimates for the PCR tests were 99% for DSe and 93% for DSp. Poor precision was observed for the VI test (4 to 95%). Accuracy estimates for VI/qPCR were 90% for DSe and 88% for DSp. Collectively, the results show that the CyHV-3 qPCR test is a suitable tool for surveillance, presumptive diagnosis and certification of individuals or populations as CyHV-3 free.
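For reference, DSe and DSp figures like those quoted above come from a standard 2x2 comparison of test results against infection status. A minimal sketch, with hypothetical counts chosen to reproduce the reported 99%/93% point estimates:

```python
# Diagnostic sensitivity (DSe) and specificity (DSp) from a 2x2 table.
# The counts below are hypothetical, not taken from the CyHV-3 study.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return (sensitivity, specificity) of a test vs. a reference standard."""
    sensitivity = tp / (tp + fn)   # true positives / all truly infected
    specificity = tn / (tn + fp)   # true negatives / all truly uninfected
    return sensitivity, specificity

se, sp = diagnostic_accuracy(tp=99, fp=7, fn=1, tn=93)
print(f"DSe = {se:.0%}, DSp = {sp:.0%}")  # DSe = 99%, DSp = 93%
```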
Wilbanks, Bryan A; Moss, Jacqueline A; Berner, Eta S
2013-08-01
Anesthesia information management systems must often be tailored to fit the environment in which they are implemented. Extensive customization necessitates that systems be analyzed for both accuracy and completeness of documentation design to ensure that the final record is a true representation of practice. The purpose of this study was to determine the accuracy of a recently installed system in the capture of key perianesthesia data. This study used an observational design and was conducted using a convenience sample of nurse anesthetists. Observational data on the nurse anesthetists' delivery of anesthesia care were collected on a touch-screen tablet computer using a customized Access-database observational data collection tool. A questionnaire was also administered to these nurse anesthetists to assess perceived accuracy, completeness, and satisfaction with the electronic documentation system. The major sources of data not documented in the system were anesthesiologist presence (20%) and placement of intravenous lines (20%). The major sources of inaccuracies in documentation were gas flow rates (45%), medication administration times (30%), and documentation of neuromuscular function testing (20%). All of the sources of inaccuracies were related to the use of charting templates that were not altered to reflect the actual interventions performed.
Method for machining steel with diamond tools
Casstevens, J.M.
1984-01-01
The present invention is directed to a method for machining optical quality finishes and contour accuracies of workpieces of carbon-containing metals such as steel with diamond tooling. The wear rate of the diamond tooling is significantly reduced by saturating the atmosphere at the interface of the workpiece and the diamond tool with a gaseous hydrocarbon during the machining operation. The presence of the gaseous hydrocarbon effectively eliminates the deterioration of the diamond tool by inhibiting or preventing the conversion of the diamond carbon to graphite carbon at the point of contact between the cutting tool and the workpiece.
Method for machining steel with diamond tools
Casstevens, John M.
1986-01-01
The present invention is directed to a method for machining optical quality finishes and contour accuracies of workpieces of carbon-containing metals such as steel with diamond tooling. The wear rate of the diamond tooling is significantly reduced by saturating the atmosphere at the interface of the workpiece and the diamond tool with a gaseous hydrocarbon during the machining operation. The presence of the gaseous hydrocarbon effectively eliminates the deterioration of the diamond tool by inhibiting or preventing the conversion of the diamond carbon to graphite carbon at the point of contact between the cutting tool and the workpiece.
RNA-SSPT: RNA Secondary Structure Prediction Tools.
Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; Din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad
2013-01-01
The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR for determining RNA secondary structure are expensive and difficult, whereas computational prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction from a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools either do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. In the current study, only the energetically most favorable secondary structure is required, and a modification of the algorithm is available that produces base pairs to lower the total free energy of the secondary structure. For visualization of RNA secondary structure, NAVIEW, written in C, is used, modified in C# to meet the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional Edition. The accuracy of RNA-SSPT is tested in terms of Sensitivity and Positive Predictive Value. It is a tool that serves both secondary structure prediction and secondary structure visualization purposes.
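The Nussinov algorithm mentioned above can be sketched in a few lines: a dynamic program that maximizes the number of nested (pseudoknot-free) base pairs in a single sequence. The pairing rules and minimum loop length below are common textbook choices, not necessarily RNA-SSPT's exact settings (the tool itself is written in C#):

```python
# Nussinov dynamic program: maximum number of nested base pairs in an RNA
# sequence. Watson-Crick plus G-U wobble pairs; min_loop bases must
# separate any paired positions (both are illustrative defaults).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best count on seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case 1: j left unpaired
            for k in range(i, j - min_loop):  # case 2: pair k with j
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))  # 3: a small hairpin (G-C, G-C, G-U)
```

A traceback over the same table recovers the pairing itself; an energy-minimizing variant replaces the pair count with stacking energies, which is the kind of modification the abstract alludes to.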
Willemet, Marie; Vennin, Samuel; Alastruey, Jordi
2016-12-08
Many physiological indexes and algorithms based on pulse wave analysis have been suggested in order to better assess cardiovascular function. Because these tools are often computed from in-vivo hemodynamic measurements, their validation is time-consuming, challenging, and biased by measurement errors. Recently, a new methodology has been suggested to assess theoretically these computed tools: a database of virtual subjects generated using numerical 1D-0D modeling of arterial hemodynamics. The generated set of simulations encloses a wide selection of healthy cases that could be encountered in a clinical study. We applied this new methodology to three different case studies that demonstrate the potential of our new tool, and illustrated each of them with a clinically relevant example: (i) we assessed the accuracy of indexes estimating pulse wave velocity; (ii) we validated and refined an algorithm that computes central blood pressure; and (iii) we investigated theoretical mechanisms behind the augmentation index. Our database of virtual subjects is a new tool to assist the clinician: it provides insight into the physical mechanisms underlying the correlations observed in clinical practice. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Defining Uncertainty and Error in Planktic Foraminiferal Oxygen Isotope Measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C.
2016-12-01
Foraminifera are the backbone of paleoceanography, and planktic foraminifera are one of the leading tools for reconstructing water column structure. Currently, there are unconstrained variables when dealing with the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate the precision and accuracy of oxygen isotope measurements. FIRM produces synthetic isotope data using parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects. Reproducibility is then tested using Monte Carlo simulations. The results from a series of experiments show that reproducibility is largely controlled by the number of individuals in each measurement, but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. Currently FIRM is a tool to estimate isotopic error values best employed in the Holocene. It is also a tool to explore the impact of myriad factors on the fidelity of paleoceanographic records. FIRM was constructed in the open-source computing environment R and is freely available via GitHub. We invite modification and expansion, and have planned inclusions for benthic foram reproducibility and stratigraphic uncertainty.
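FIRM's core idea, estimating reproducibility by repeatedly "measuring" synthetic populations, can be sketched as follows. This is a toy Python illustration, not the published R code; the mean and spread of the per-individual values are invented:

```python
# Toy Monte Carlo in the spirit of FIRM: each "measurement" averages the
# d18O of n individuals drawn from a population spread (seasonality,
# depth habitat, vital effects lumped into one Gaussian). The standard
# deviation across replicate measurements approximates reproducibility.
import random
import statistics

def replicate_sd(n_individuals, n_replicates=1000, mu=-1.0, sigma=0.5, seed=42):
    rng = random.Random(seed)
    reps = [
        statistics.fmean(rng.gauss(mu, sigma) for _ in range(n_individuals))
        for _ in range(n_replicates)
    ]
    return statistics.stdev(reps)

for n in (1, 5, 25):
    print(n, round(replicate_sd(n), 3))  # precision tightens roughly as sigma/sqrt(n)
```

This reproduces the abstract's headline result in miniature: the number of individuals per measurement dominates precision, while systematic effects such as diagenesis or misidentification would shift the mean (accuracy) rather than just the spread.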
Assessment of Near-Field Sonic Boom Simulation Tools
NASA Technical Reports Server (NTRS)
Casper, J. H.; Cliff, S. E.; Thomas, S. D.; Park, M. A.; McMullen, M. S.; Melton, J. E.; Durston, D. A.
2008-01-01
A recent study for the Supersonics Project, within the National Aeronautics and Space Administration, has been conducted to assess current in-house capabilities for the prediction of near-field sonic boom. Such capabilities are required to simulate the highly nonlinear flow near an aircraft, wherein a sonic-boom signature is generated. There are many available computational fluid dynamics codes that could be used to provide the near-field flow for a sonic boom calculation. However, such codes have typically been developed for applications involving aerodynamic configuration, for which an efficiently generated computational mesh is usually not optimum for a sonic boom prediction. Preliminary guidelines are suggested to characterize a state-of-the-art sonic boom prediction methodology. The available simulation tools that are best suited to incorporate into that methodology are identified; preliminary test cases are presented in support of the selection. During this phase of process definition and tool selection, parallel research was conducted in an attempt to establish criteria that link the properties of a computational mesh to the accuracy of a sonic boom prediction. Such properties include sufficient grid density near shocks and within the zone of influence, which are achieved by adaptation and mesh refinement strategies. Prediction accuracy is validated by comparison with wind tunnel data.
Evaluation of a Performance-Based Expert Elicitation: WHO Global Attribution of Foodborne Diseases.
Aspinall, W P; Cooke, R M; Havelaar, A H; Hoffmann, S; Hald, T
2016-01-01
For many societally important science-based decisions, data are inadequate, unreliable or non-existent, and expert advice is sought. In such cases, procedures for eliciting structured expert judgments (SEJ) are increasingly used. This raises questions regarding validity and reproducibility. This paper presents new findings from a large-scale international SEJ study intended to estimate the global burden of foodborne disease on behalf of WHO. The study involved 72 experts distributed over 134 expert panels, with panels comprising thirteen experts on average. Elicitations were conducted in five languages. Performance-based weighted solutions for target questions of interest were formed for each panel. These weights were based on individual expert's statistical accuracy and informativeness, determined using between ten and fifteen calibration variables from the experts' field with known values. Equal weights combinations were also calculated. The main conclusions on expert performance are: (1) SEJ does provide a science-based method for attribution of the global burden of foodborne diseases; (2) equal weighting of experts per panel increased statistical accuracy to acceptable levels, but at the cost of informativeness; (3) performance-based weighting increased informativeness, while retaining accuracy; (4) due to study constraints individual experts' accuracies were generally lower than in other SEJ studies, and (5) there was a negative correlation between experts' informativeness and statistical accuracy which attenuated as accuracy improved, revealing that the least accurate experts drive the negative correlation. It is shown, however, that performance-based weighting has the ability to yield statistically accurate and informative combinations of experts' judgments, thereby offsetting this contrary influence. 
The present findings suggest that application of SEJ on a large scale is feasible, and motivate the development of enhanced training and tools for remote elicitation of multiple, internationally-dispersed panels.
Rakshasbhuvankar, Abhijeet; Rao, Shripada; Palumbo, Linda; Ghosh, Soumya; Nagarajan, Lakshmi
2017-08-01
This diagnostic accuracy study compared the accuracy of seizure detection by amplitude-integrated electroencephalography with the criterion standard conventional video EEG in term and near-term infants at risk of seizures. Simultaneous recording of amplitude-integrated EEG (2-channel amplitude-integrated EEG with raw trace) and video EEG was done for 24 hours for each infant. Amplitude-integrated EEG was interpreted by a neonatologist; video EEG was interpreted by a neurologist independently. Thirty-five infants were included in the analysis. In the 7 infants with seizures on video EEG, there were 169 seizure episodes on video EEG, of which only 57 were identified by amplitude-integrated EEG. Amplitude-integrated EEG had a sensitivity of 33.7% for individual seizure detection. Amplitude-integrated EEG had an 86% sensitivity for detection of babies with seizures; however, it was nonspecific, in that 50% of infants with seizures detected by amplitude-integrated EEG did not have true seizures by video EEG. In conclusion, our study suggests that amplitude-integrated EEG is a poor screening tool for neonatal seizures.
Perez-Cruz, Pedro E; Dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-11-01
Survival prognostication is important during the end of life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. The aims of the study were to examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units and to compare the accuracy between the temporal and probabilistic approaches. Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death/discharge using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life comparing accuracy at Day -14 (baseline) with accuracy at each time point using a test of proportions. A total of 6718 temporal and 6621 probabilistic estimations were provided by physicians and nurses for 311 patients, respectively. Median (interquartile range) survival was 8 days (4-20 days). Temporal CPS had low accuracy (10%-40%) and did not change over time. In contrast, probabilistic CPS was significantly more accurate (P < .05 at each time point) but decreased close to death. Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
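The "test of proportions" used to compare accuracy at each time point against baseline can be sketched as a pooled two-proportion z-test; the counts below are hypothetical, not the study's data:

```python
# Two-proportion z-test (pooled), the standard test for comparing an
# accuracy rate at one time point against baseline. Counts are made up.
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# e.g. 40% accurate at baseline vs. 30% accurate at a later day
z, p = two_proportion_z(x1=120, n1=300, x2=90, n2=300)
print(round(z, 2), round(p, 4))
```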
Spatial Pattern Classification for More Accurate Forecasting of Variable Energy Resources
NASA Astrophysics Data System (ADS)
Novakovskaia, E.; Hayes, C.; Collier, C.
2014-12-01
The accuracy of solar and wind forecasts is becoming increasingly essential as grid operators continue to integrate additional renewable generation onto the electric grid. Forecast errors affect rate payers, grid operators, wind and solar plant maintenance crews, and energy traders through increases in prices, project down time, or lost revenue. While extensive and beneficial efforts have been undertaken in recent years to improve physical weather models for a broad spectrum of applications, these improvements have generally not been sufficient to meet the accuracy demands of system planners. For renewables, these models are often used in conjunction with additional statistical models utilizing both meteorological observations and power generation data. Forecast accuracy can depend on the specific weather regime at a given location. To account for these dependencies, it is important that the parameterizations used in statistical models change as the regime changes. An automated tool, based on an artificial neural network model, has been developed to identify different weather regimes as they impact power output forecast accuracy at wind or solar farms. In this study, improvements in forecast accuracy were analyzed over varying time horizons for wind farms and utility-scale PV plants located in different geographical regions.
Accuracy of 24- and 48-Hour Forecasts of Haines' Index
Brian E. Potter; Jonathan E. Martin
2001-01-01
The University of Wisconsin-Madison produces Web-accessible, 24- and 48-hour forecasts of the Haines Index (a tool used to measure the atmospheric potential for large wildfire development) for most of North America using its nonhydrostatic modeling system. The authors examined the accuracy of these forecasts using data from 1999 and 2000. Measures used include root-...
Chapter 13 - Perspectives on LANDFIRE Prototype Project Accuracy Assessment
James Vogelmann; Zhiliang Zhu; Jay Kost; Brian Tolk; Donald Ohlen
2006-01-01
The purpose of this chapter is to provide a general overview of the many aspects of accuracy assessment pertinent to the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project). The LANDFIRE Prototype formed a large and complex research and development project with many broad-scale data sets and products developed throughout...
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available free and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
Balog, Julia; Perenyi, Dora; Guallar-Hoyas, Cristina; Egri, Attila; Pringle, Steven D; Stead, Sara; Chevallier, Olivier P; Elliott, Chris T; Takats, Zoltan
2016-06-15
Increasingly abundant food fraud cases have brought food authenticity and safety into major focus. This study presents a fast and effective way to identify meat products using rapid evaporative ionization mass spectrometry (REIMS). The experimental setup was demonstrated to be able to record a mass spectrometric profile of meat specimens in a time frame of <5 s. A multivariate statistical algorithm was developed and successfully tested for the identification of animal tissue with different anatomical origin, breed, and species with 100% accuracy at species and 97% accuracy at breed level. Detection of the presence of meat originating from a different species (horse, cattle, and venison) has also been demonstrated with high accuracy using mixed patties with a 5% detection limit. REIMS technology was found to be a promising tool in food safety applications providing a reliable and simple method for the rapid characterization of food products.
ERIC Educational Resources Information Center
de Bruin, Anique B. H.; Kok, Ellen M.; Lobbestael, Jill; de Grip, Andries
2017-01-01
Being overconfident when estimating scores for an upcoming exam is a widespread phenomenon in higher education and presents threats to self-regulated learning and academic performance. The present study sought to investigate how overconfidence and poor monitoring accuracy vary over the length of a college course, and how an intervention consisting…
Analysis of Nature of Science Included in Recent Popular Writing Using Text Mining Techniques
ERIC Educational Resources Information Center
Jiang, Feng; McComas, William F.
2014-01-01
This study examined the inclusion of nature of science (NOS) in popular science writing to determine whether it could serve as a supplementary resource for teaching NOS and to evaluate the accuracy of text mining and classification as a viable research tool in science education research. Four groups of documents published from 2001 to 2010 were…
Careers (A Course of Study). Unit IV: Applying for the Job.
ERIC Educational Resources Information Center
Turley, Kay
Designed to enable special needs students to write resumes and complete application forms with employable accuracy, this set of activities on applying for a job is the fourth unit in a nine-unit secondary level careers course intended to provide handicapped students with the knowledge and tools necessary to succeed in the world of work. Chapter 1…
ERIC Educational Resources Information Center
Cavalli, Eddy; Colé, Pascale; Leloup, Gilles; Poracchia-George, Florence; Sprenger-Charolles, Liliane; El Ahmadi, Abdessadek
2018-01-01
Developmental dyslexia is a lifelong impairment affecting 5% to 10% of the population. In French-speaking countries, although a number of standardized tests for dyslexia in children are available, tools suitable to screen for dyslexia in adults are lacking. In this study, we administered the "Alouette" reading test to a normative sample…
ERIC Educational Resources Information Center
Armon-Lotem, Sharon; Meir, Natalia
2016-01-01
Background: Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have…
Neural Networks Based Approach to Enhance Space Hardware Reliability
NASA Technical Reports Server (NTRS)
Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.
2011-01-01
This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.
NASA Astrophysics Data System (ADS)
Baltussen, Elisabeth J. M.; Snaebjornsson, Petur; de Koning, Susan G. Brouwer; Sterenborg, Henricus J. C. M.; Aalbers, Arend G. J.; Kok, Niels; Beets, Geerard L.; Hendriks, Benno H. W.; Kuhlmann, Koert F. D.; Ruers, Theo J. M.
2017-10-01
Colorectal surgery is the standard treatment for patients with colorectal cancer. To overcome two of the main challenges, the circumferential resection margin and postoperative complications, real-time tissue assessment could be of great benefit during surgery. In this ex vivo study, diffuse reflectance spectroscopy (DRS) was used to differentiate tumor tissue from healthy surrounding tissues in patients with colorectal neoplasia. DRS spectra were obtained from tumor tissue, healthy colon or rectal wall, and fat tissue for every patient. Data were randomly divided into training (80%) and test (20%) sets. After spectral band selection, the spectra were classified using a quadratic classifier and a linear support vector machine. Of the 38 included patients, 36 had colorectal cancer and 2 had an adenoma. When the classifiers were applied to the test set, colorectal cancer could be discriminated from healthy tissue with an overall accuracy of 0.95 (±0.03). This study demonstrates the possibility of separating colorectal cancer from healthy surrounding tissue by applying DRS. High classification accuracies were obtained in both homogeneous and inhomogeneous tissues. This is a fundamental step toward the development of a tool for real-time in vivo tissue assessment during colorectal surgery.
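The classification pipeline this abstract describes can be sketched as follows. All data below are synthetic placeholders, the band count and sample sizes are assumptions, and scikit-learn's `SVC(kernel="linear")` and `QuadraticDiscriminantAnalysis` stand in for the study's linear SVM and quadratic classifier.

```python
# Hedged sketch of the DRS tumor-vs-healthy classification described above.
# All spectra are synthetic; n_bands and sample counts are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 20  # number of selected spectral bands (assumed)
X_tumor = rng.normal(1.0, 0.3, size=(60, n_bands))    # synthetic "tumor" spectra
X_healthy = rng.normal(0.0, 0.3, size=(60, n_bands))  # synthetic "healthy" spectra
X = np.vstack([X_tumor, X_healthy])
y = np.array([1] * 60 + [0] * 60)

# 80%/20% training/test split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Quadratic classifier and linear SVM, the two classifiers the abstract names
for clf in (QuadraticDiscriminantAnalysis(), SVC(kernel="linear")):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "test accuracy:", clf.score(X_te, y_te))
```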
NASA Astrophysics Data System (ADS)
Okokpujie, Imhade Princess; Ikumapayi, Omolayo M.; Okonkwo, Ugochukwu C.; Salawu, Enesi Y.; Afolalu, Sunday A.; Dirisu, Joseph O.; Nwoke, Obinna N.; Ajayi, Oluseyi O.
2017-12-01
In recent machining operations, tool life is one of the most demanding concerns in the production process, especially in the automotive industry. The aim of this paper is to study tool wear on HSS in end milling of aluminium 6061 alloy. Experiments were carried out to investigate tool wear as a function of the machining parameters and to develop a mathematical model using response surface methodology. The machining parameters selected for the experiment are spindle speed (N), feed rate (f), axial depth of cut (a) and radial depth of cut (r). The experiment was designed using a central composite design (CCD) in which 31 samples were run on a SIEG 3/10/0010 CNC end milling machine. After each experiment the cutting tool was examined using a scanning electron microscope (SEM). The optimum machining parameter combination of spindle speed 2500 rpm, feed rate 200 mm/min, axial depth of cut 20 mm and radial depth of cut 1.0 mm achieved the minimum tool wear of 0.213 mm. The mathematical model developed predicted the tool wear with 99.7% accuracy, which is within the acceptable range for tool wear prediction.
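A response-surface fit of the kind described can be sketched as a second-order polynomial regressed by least squares. For brevity this sketch uses only two of the four parameters (spindle speed N and feed rate f), and the wear data, coefficients and ranges are invented, not the paper's.

```python
# Hedged sketch of a response-surface-methodology fit: a second-order model of
# tool wear in spindle speed N and feed rate f, on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
N = rng.uniform(1000, 3000, 31)   # spindle speed, rpm (31 CCD-style runs)
f = rng.uniform(100, 300, 31)     # feed rate, mm/min
# Assumed "true" wear surface plus noise, standing in for SEM measurements
wear = 0.4 - 1e-4 * N + 5e-4 * f + 2e-8 * N**2 + rng.normal(0, 0.005, 31)

# Design matrix for a quadratic response surface: 1, N, f, N^2, f^2, N*f
X = np.column_stack([np.ones_like(N), N, f, N**2, f**2, N * f])
beta, *_ = np.linalg.lstsq(X, wear, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((wear - pred) ** 2) / np.sum((wear - wear.mean()) ** 2)
print("R^2 of fitted quadratic surface:", round(r2, 3))
```

The fitted surface can then be minimised over the parameter ranges to locate an optimum combination, which is the role the CCD plays in the study.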
Jaiswara, Ranjana; Nandi, Diptarup; Balakrishnan, Rohini
2013-01-01
Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls in several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding the appropriate usage of these methods in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach we evaluated the optimal number of species and calling song characteristics for both the methods that lead to most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. Accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximum for 6-7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals. Our results also show that cluster analysis of acoustic signals in crickets works effectively for species classification and identification.
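The contrast the study draws between the two methods can be sketched on synthetic call features (e.g. dominant frequency, syllable rate). The species count, feature count and separability below are assumptions, not the study's data; `LinearDiscriminantAnalysis` stands in for DFA and k-means for the cluster analysis.

```python
# Minimal sketch: DFA needs a priori species labels, cluster analysis does not.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(2)
n_species, per_species, n_features = 6, 30, 4
centers = rng.uniform(-5, 5, size=(n_species, n_features))
X = np.vstack([rng.normal(c, 0.3, size=(per_species, n_features)) for c in centers])
y = np.repeat(np.arange(n_species), per_species)

# DFA requires a priori species labels for training...
dfa = LinearDiscriminantAnalysis().fit(X, y)
print("DFA classification accuracy:", dfa.score(X, y))

# ...whereas cluster analysis groups the calls without labels
labels = KMeans(n_clusters=n_species, n_init=10, random_state=0).fit_predict(X)
print("cluster vs. species agreement (ARI):", round(adjusted_rand_score(y, labels), 2))
```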
Transnasal endoscopy: no gagging no panic!
Parker, Clare; Alexandridis, Estratios; Plevris, John; O'Hara, James; Panter, Simon
2016-01-01
Background: Transnasal endoscopy (TNE) is performed with an ultrathin scope via the nasal passages and is increasingly used. This review covers the technical characteristics, tolerability, safety and acceptability of TNE and also diagnostic accuracy, use as a screening tool and therapeutic applications. It includes practical advice from an ear, nose and throat (ENT) specialist to optimise TNE practice, identify ENT pathology and manage complications. Methods: A Medline search was performed using the terms “transnasal”, “ultrathin”, “small calibre”, “endoscopy”, “EGD” to identify relevant literature. Results: There is increasing evidence that TNE is better tolerated than standard endoscopy as measured using visual analogue scales; the main area of discomfort is nasal, during insertion of the TN endoscope, which seems remediable with adequate topical anaesthesia. The diagnostic yield has been found to be similar for detection of Barrett's oesophagus, gastric cancer and GORD-associated diseases. There are some potential issues regarding the accuracy of TNE in detecting small early gastric malignant lesions, especially those in the proximal stomach. TNE is feasible and safe in a primary care population and is ideal for screening for upper gastrointestinal pathology. It has an advantage as a diagnostic tool in the elderly and those with multiple comorbidities due to fewer adverse effects on the cardiovascular system. It has significant advantages for therapeutic procedures, especially negotiating upper oesophageal strictures and insertion of nasoenteric feeding tubes. Conclusions: TNE is well tolerated and a valuable diagnostic tool. Further evidence is required to establish its accuracy for the diagnosis of early and small gastric malignancies. There is an emerging role for TNE in therapeutic endoscopy, which needs further study. PMID:28839865
NASA Astrophysics Data System (ADS)
Abass, K. I.
2016-11-01
Single Point Incremental Forming (SPIF) is a sheet-material forming technique based on layered manufacturing principles. The edges of the sheet are clamped while the forming tool is moved along the tool path; a CNC milling machine is used to manufacture the product. SPIF involves extensive plastic deformation, and the description of the process is further complicated by highly nonlinear boundary conditions, namely contact and frictional effects. Due to the complex nature of these models, numerical approaches dominated by finite element analysis (FEA) are now in widespread use. The paper presents the data and main results of a study on the effect of using a preformed blank in SPIF through FEA. The considered SPIF process has been studied under certain process conditions referring to the test workpiece, tool, etc., applying ANSYS 11. The results show that the simulation model can predict an ideal profile of the processing track, the behaviour of tool-workpiece contact, and the product accuracy through evaluation of its thickness, surface strain and stress distribution along the deformed blank section during the deformation stages.
Information Quality Challenges of Patient-Generated Data in Clinical Practice
West, Peter; Van Kleek, Max; Giordano, Richard; Weal, Mark; Shadbolt, Nigel
2017-01-01
A characteristic trend of digital health has been the dramatic increase in patient-generated data being presented to clinicians, which follows from the increased ubiquity of self-tracking practices by individuals, driven, in turn, by the proliferation of self-tracking tools and technologies. Such tools not only make self-tracking easier but also potentially more reliable by automating data collection, curation, and storage. While self-tracking practices themselves have been studied extensively in human–computer interaction literature, little work has yet looked at whether these patient-generated data might be able to support clinical processes, such as providing evidence for diagnoses, treatment monitoring, or postprocedure recovery, and how we can define information quality with respect to self-tracked data. In this article, we present the results of a literature review of empirical studies of self-tracking tools, in which we identify how clinicians perceive quality of information from such tools. In the studies, clinicians perceive several characteristics of information quality relating to accuracy and reliability, completeness, context, patient motivation, and representation. We discuss the issues these present in admitting self-tracked data as evidence for clinical decisions. PMID:29209601
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
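EQI and EQIE extend the expected-improvement idea to tunable-accuracy experiments. For orientation, here is a hedged sketch of the plain single-fidelity EI criterion the authors compare against, not their EQIE; the toy objective and candidate grid are invented.

```python
# Sketch of the single-fidelity expected improvement (EI) criterion under a
# Gaussian-process (kriging) surrogate, using a toy 1-D "computer experiment".
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    """EI for minimisation: expected amount by which a candidate beats y_best."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: (x - 0.6) ** 2                  # toy deterministic objective
X_train = np.array([[0.0], [0.3], [1.0]])     # runs already executed
gp = GaussianProcessRegressor().fit(X_train, f(X_train).ravel())

X_cand = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
ei = expected_improvement(gp, X_cand, y_best=f(X_train).min())
print("next run proposed at x =", float(X_cand[np.argmax(ei)]))
```

The sequential scheme the abstract describes repeats this scoring after each new run; EQIE additionally scores which fidelity level to run the chosen input at.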
Clarke, Callisia N; Patel, Sameer H; Day, Ryan W; George, Sobha; Sweeney, Colin; Monetes De Oca, Georgina Avaloa; Aiss, Mohamed Ait; Grubbs, Elizabeth G; Bednarski, Brian K; Lee, Jeffery E; Bodurka, Diane C; Skibber, John M; Aloia, Thomas A
2017-03-01
Duty-hour regulations have increased the frequency of trainee-trainee patient handoffs. Each handoff creates a potential source for communication errors that can lead to near-miss and patient-harm events. We investigated the utility, efficacy, and trainee experience associated with implementation of a novel, standardized, electronic handoff system. We conducted a prospective intervention study of trainee-trainee handoffs of inpatients undergoing complex general surgical oncology procedures at a large tertiary institution. Preimplementation data were measured using trainee surveys and direct observation and by tracking delinquencies in charting. A standardized electronic handoff tool was created in a research electronic data capture (REDCap) database using the previously validated I-PASS methodology (illness severity, patient summary, action list, situational awareness and contingency planning, and synthesis). Electronic handoff was augmented by direct communication via phone or face-to-face interaction for inpatients deemed "watcher" or "unstable." Postimplementation handoff compliance, communication errors, and trainee work flow were measured and compared to preimplementation values using standard statistical analysis. A total of 474 handoffs (203 preintervention and 271 postintervention) were observed over the study period; 86 handoffs involved patients admitted to the surgical intensive care unit, 344 patients admitted to the surgical stepdown unit, and 44 patients on the surgery ward. Implementation of the structured electronic tool resulted in an increase in trainee handoff compliance from 73% to 96% (P < .001) and decreased errors in communication by 50% (P = .044) while improving trainee efficiency and workflow. A standardized electronic tool augmented by direct communication for higher acuity patients can improve compliance, accuracy, and efficiency of handoff communication between surgery trainees. Copyright © 2016 Elsevier Inc. All rights reserved.
Fazel, Seena; Singh, Jay P; Doll, Helen; Grann, Martin
2012-07-24
To investigate the predictive validity of tools commonly used to assess the risk of violence, sexual, and criminal behaviour. Systematic review and tabular meta-analysis of replication studies following PRISMA guidelines. PsycINFO, Embase, Medline, and United States Criminal Justice Reference Service Abstracts. We included replication studies from 1 January 1995 to 1 January 2011 if they provided contingency data for the offending outcome that the tools were designed to predict. We calculated the diagnostic odds ratio, sensitivity, specificity, area under the curve, positive predictive value, negative predictive value, the number needed to detain to prevent one offence, as well as a novel performance indicator-the number safely discharged. We investigated potential sources of heterogeneity using metaregression and subgroup analyses. Risk assessments were conducted on 73 samples comprising 24,847 participants from 13 countries, of whom 5879 (23.7%) offended over an average of 49.6 months. When used to predict violent offending, risk assessment tools produced low to moderate positive predictive values (median 41%, interquartile range 27-60%) and higher negative predictive values (91%, 81-95%), and a corresponding median number needed to detain of 2 (2-4) and number safely discharged of 10 (4-18). Instruments designed to predict violent offending performed better than those aimed at predicting sexual or general crime. Although risk assessment tools are widely used in clinical and criminal justice settings, their predictive accuracy varies depending on how they are used. They seem to identify low risk individuals with high levels of accuracy, but their use as sole determinants of detention, sentencing, and release is not supported by the current evidence. Further research is needed to examine their contribution to treatment and management.
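All of the review's performance measures derive from a 2×2 contingency table of predicted risk versus observed offending. A minimal sketch with invented counts; taking "number needed to detain" as 1/PPV is an assumed definition, not necessarily the review's exact formula.

```python
# Diagnostic performance measures from a 2x2 table; counts are invented and
# "number needed to detain" is computed as 1/PPV (an assumed definition).
def risk_tool_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                # sensitivity
    spec = tn / (tn + fp)                # specificity
    ppv = tp / (tp + fp)                 # positive predictive value
    npv = tn / (tn + fn)                 # negative predictive value
    dor = (tp * tn) / (fp * fn)          # diagnostic odds ratio
    nnd = 1 / ppv                        # number needed to detain to prevent one offence
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "dor": dor, "nnd": nnd}

m = risk_tool_metrics(tp=40, fp=60, fn=10, tn=140)
print({k: round(v, 2) for k, v in m.items()})
# With these counts PPV is 0.40 and NPV about 0.93: a low-to-moderate PPV with
# a much higher NPV, the pattern the review reports for violence risk tools.
```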
Diagnostic accuracy of physical examination for anterior knee instability: a systematic review.
Leblanc, Marie-Claude; Kowalczuk, Marcin; Andruszkiewicz, Nicole; Simunovic, Nicole; Farrokhyar, Forough; Turnbull, Travis Lee; Debski, Richard E; Ayeni, Olufemi R
2015-10-01
Determining the diagnostic accuracy of the Lachman, pivot shift and anterior drawer tests versus gold standard diagnosis (magnetic resonance imaging or arthroscopy) for anterior cruciate ligament (ACL) insufficiency, and secondarily evaluating the effects of chronicity, partial rupture, and awake versus anaesthetized evaluation. Searching MEDLINE, EMBASE and PubMed identified studies on diagnostic accuracy for ACL insufficiency. Study identification and data extraction were performed in duplicate. Quality assessment used the QUADAS tool, and statistical analyses were completed for pooled sensitivity and specificity. Eight studies were included. Given insufficient data, pooled analysis was only possible for sensitivity of the Lachman and pivot shift tests. During awake evaluation, sensitivity for the Lachman test was 89% (95% CI 0.76, 0.98) for all rupture types, 96% (95% CI 0.90, 1.00) for complete ruptures and 68% (95% CI 0.25, 0.98) for partial ruptures. For the pivot shift in awake evaluation, results were 79% (95% CI 0.63, 0.91) for all rupture types, 86% (95% CI 0.68, 0.99) for complete ruptures and 67% (95% CI 0.47, 0.83) for partial ruptures. The decreased sensitivity of the Lachman and pivot shift tests for partial ruptures and for awake patients raises suspicion regarding the accuracy of these tests for diagnosis of ACL insufficiency. This may prompt further research aiming to improve understanding of the true accuracy of these physical diagnostic tests and increase the reliability of clinical investigation for this pathology. Level of evidence: IV.
Analysis of model output and science data in the Virtual Model Repository (VMR).
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Ridley, A. J.
2014-12-01
Big scientific data include not only large repositories of data from scientific platforms such as satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. Methodology for this analysis as well as case studies will be presented.
FMRI Is a Valid Noninvasive Alternative to Wada Testing
Binder, Jeffrey R.
2010-01-01
Partial removal of the anterior temporal lobe (ATL) is a highly effective surgical treatment for intractable temporal lobe epilepsy, yet roughly half of patients who undergo left ATL resection show decline in language or verbal memory function postoperatively. Two recent studies demonstrate that preoperative fMRI can predict postoperative naming and verbal memory changes in such patients. Most importantly, fMRI significantly improves the accuracy of prediction relative to other noninvasive measures used alone. Addition of language and memory lateralization data from the intracarotid amobarbital (Wada) test did not improve prediction accuracy in these studies. Thus, fMRI provides patients and practitioners with a safe, non-invasive, and well-validated tool for making better-informed decisions regarding elective surgery based on a quantitative assessment of cognitive risk. PMID:20850386
Tool and Method for Testing the Resistance of the Snow Road Cover to Destruction
NASA Astrophysics Data System (ADS)
Zhelykevich, R.; Lysyannikov, A.; Kaiser, Yu; Serebrenikova, Yu; Lysyannikova, N.; Shram, V.; Kravtsova, Ye; Plakhotnikova, M.
2016-06-01
The paper presents the design of a tool for efficient determination of the hardness of snow road coating. The tool improves vertical positioning of the tipped rod by replacing the sliding friction of the ball element with rolling friction of its outer bearing race, in order to enhance the accuracy of determining the hardness of the snow-ice road covering. A special feature of the tool is the possibility of creating different impact energies by changing the lifting height of the rod with the tip (indenter) and the exchangeable load mass. This allows study of the influence of tip shape and impact energy on snow strength parameters over a wide range, extends the scope of application of the durometer, and makes it possible to determine the strength of snow-ice formations with indenters of various geometrical parameters depending on climatic conditions.
Diagnostic accuracy of physical examination tests of the ankle/foot complex: a systematic review.
Schwieterman, Braun; Haas, Deniele; Columber, Kirby; Knupp, Darren; Cook, Chad
2013-08-01
Orthopedic special tests of the ankle/foot complex are routinely used during the physical examination process in order to help diagnose ankle/lower leg pathologies. The purpose of this systematic review was to investigate the diagnostic accuracy of ankle/lower leg special tests. A search of the current literature was conducted using PubMed, CINAHL, SPORTDiscus, ProQuest Nursing and Allied Health Sources, Scopus, and Cochrane Library. Studies were eligible if they included the following: 1) a diagnostic clinical test of musculoskeletal pathology in the ankle/foot complex, 2) description of the clinical test or tests, 3) a report of the diagnostic accuracy of the clinical test (e.g. sensitivity and specificity), and 4) an acceptable reference standard for comparison. The quality of included studies was determined by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Nine diagnostic accuracy studies met the inclusion criteria for this systematic review; analyzing a total of 16 special tests of the ankle/foot complex. After assessment using the QUADAS-2, only one study had low risk of bias and low concerns regarding applicability. Most ankle/lower leg orthopedic special tests are confirmatory in nature and are best utilized at the end of the physical examination. Most of the studies included in this systematic review demonstrate notable biases, which suggest that results and recommendations in this review should be taken as a guide rather than an outright standard. There is need for future research with more stringent study design criteria so that more accurate diagnostic power of ankle/lower leg special tests can be determined. 3a.
A tool for developing an automatic insect identification system based on wing outlines
Yang, He-Ping; Ma, Chun-Sen; Wen, Hui; Zhan, Qing-Bin; Wang, Xin-Li
2015-01-01
For some insect groups, wing outline is an important character for species identification. We have constructed a program as the integral part of an automated system to identify insects based on wing outlines (DAIIS). This program includes two main functions: (1) outline digitization and Elliptic Fourier transformation and (2) classifier model training by pattern recognition of support vector machines and model validation. To demonstrate the utility of this program, a sample of 120 owlflies (Neuroptera: Ascalaphidae) was split into training and validation sets. After training, the sample was sorted into seven species using this tool. In five repeated experiments, the mean accuracy for identification of each species ranged from 90% to 98%. The accuracy increased to 99% when the samples were first divided into two groups based on features of their compound eyes. DAIIS can therefore be a useful tool for developing a system of automated insect identification. PMID:26251292
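The DAIIS pipeline (outline digitization, Fourier transformation, SVM training and validation) can be sketched as follows. Plain FFT-magnitude descriptors stand in for the program's elliptic Fourier coefficients, and the "wings" are synthetic closed curves, so this is a simplified illustration of the idea rather than the tool's actual algorithm.

```python
# Simplified sketch: Fourier descriptors of closed outlines feeding an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def make_outline(waviness, n=64):
    """Synthetic closed contour; 'species' differ in outline waviness."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = 1 + waviness * np.sin(3 * t) + rng.normal(0, 0.02, n)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

def descriptors(xy, n_harmonics=8):
    """Translation- and scale-normalised Fourier descriptors of a contour."""
    z = xy[:, 0] + 1j * xy[:, 1]                    # contour as complex signal
    mags = np.abs(np.fft.fft(z - z.mean()))[1:n_harmonics + 1]
    return mags / mags[0]                           # scale-normalise by 1st harmonic

# Two synthetic "species": nearly smooth vs strongly three-lobed outlines
X = np.array([descriptors(make_outline(0.05)) for _ in range(20)]
             + [descriptors(make_outline(0.30)) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X[::2], y[::2])            # train on half...
print("validation accuracy:", clf.score(X[1::2], y[1::2]))  # ...validate on the rest
```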
Cristache, Corina Marilena; Gurbanescu, Silviu
2017-01-01
The aim of this study was to evaluate the accuracy of a stereolithographic template, with the sleeve structure incorporated into the design, for computer-guided dental implant insertion in partially edentulous patients. Sixty-five implants were placed in twenty-five consecutive patients with a stereolithographic surgical template. After surgery, a digital impression was taken and the 3D inaccuracy of implant position at entry point, apex, and angle deviation was measured using inspection software. The Mann-Whitney U test was used to compare accuracy between maxillary and mandibular surgical guides. A p value < .05 was considered significant. The mean (and standard deviation) 3D error was 0.798 mm (±0.52) at the entry point and 1.17 mm (±0.63) at the implant apex, and the mean angular deviation was 2.34° (±0.85°). A statistically significant reduction in 3D error was observed in the mandible compared with the maxilla at the entry point (p = .037), at the implant apex (p = .008), and in angular deviation (p = .030). The surgical template used proved highly accurate for implant insertion. Within the limitations of the present study, the protocol of comparing a digital file (treatment plan) with a postinsertion digital impression may be considered a useful procedure for assessing surgical template accuracy, avoiding the radiation exposure of postoperative CBCT scanning.
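The three deviation measures reported (3D error at entry point, 3D error at apex, angular deviation) can be computed directly from planned and placed implant coordinates. A sketch with invented coordinates in mm; the inspection-software internals are not described in the abstract, so this is only the underlying geometry.

```python
# Entry/apex 3D errors and angular deviation between planned and placed implants.
import numpy as np

def implant_deviation(entry_plan, entry_actual, apex_plan, apex_actual):
    d_entry = np.linalg.norm(entry_actual - entry_plan)   # 3D error at entry point
    d_apex = np.linalg.norm(apex_actual - apex_plan)      # 3D error at apex
    v_plan = apex_plan - entry_plan                       # planned implant axis
    v_act = apex_actual - entry_actual                    # placed implant axis
    cos_a = np.dot(v_plan, v_act) / (np.linalg.norm(v_plan) * np.linalg.norm(v_act))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # angular deviation
    return d_entry, d_apex, angle

# Invented example coordinates (mm): a 10 mm implant, slightly off plan
d_entry, d_apex, angle = implant_deviation(
    entry_plan=np.array([0.0, 0.0, 0.0]), entry_actual=np.array([0.5, 0.3, 0.2]),
    apex_plan=np.array([0.0, 0.0, -10.0]), apex_actual=np.array([0.8, 0.5, -10.1]))
print(round(d_entry, 2), "mm entry,", round(d_apex, 2), "mm apex,",
      round(angle, 1), "deg")
```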
Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.
Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F
2013-10-01
A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. Protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and calculation procedures demonstrate better performance of parameter extraction using time domain system identification methods versus frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces that could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
NASA Astrophysics Data System (ADS)
Muda, I.; Dharsuky, A.; Siregar, H. S.; Sadalia, I.
2017-03-01
This study examines the pattern of readiness and dimensional accuracy of local government financial statements in North Sumatra, comparing a routine pattern of two (2) months after the fiscal year ends with a pattern of at least three (3) months after the fiscal year ends. This research is an explanatory survey using quantitative methods. The population and sample consist of local government officials who prepare local government financial reports. Combined loadings and cross-loadings analysis was used with the WarpPLS statistical tool. The results showed a varying pattern in the dimensional accuracy of local government financial statements in North Sumatra.
Wetland Assessment Using Unmanned Aerial Vehicle (UAV) Photogrammetry
NASA Astrophysics Data System (ADS)
Boon, M. A.; Greenfield, R.; Tesfamichael, S.
2016-06-01
The use of Unmanned Aerial Vehicle (UAV) photogrammetry is a valuable tool to enhance our understanding of wetlands. Accurate planning derived from this technological advancement allows for more effective management and conservation of wetland areas. This paper presents results of a study that aimed at investigating the use of UAV photogrammetry as a tool to enhance the assessment of wetland ecosystems. The UAV images were collected during a single flight within 2½ hours over a 100 ha area at the Kameelzynkraal farm, Gauteng Province, South Africa. An AKS Y-6 MKII multi-rotor UAV and a digital camera on a motion-compensated gimbal mount were utilised for the survey. Twenty ground control points (GCPs) were surveyed using a Trimble GPS to achieve geometrical precision and georeferencing accuracy. Structure-from-Motion (SfM) computer vision techniques were used to derive ultra-high-resolution point clouds, orthophotos and 3D models from the multi-view photos. The geometric accuracy of the data based on the 20 GCPs was 0.018 m overall, with a vertical root mean square error (RMSE) of 0.0025 m and an overall root mean square reprojection error of 0.18 pixel. The UAV products were then edited and subsequently analysed, interpreted and key attributes extracted using a selection of tools/software applications to enhance the wetland assessment. The results exceeded our expectations and provided a valuable and accurate enhancement to the wetland delineation, classification and health assessment, which even with detailed field studies would have been difficult to achieve.
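The geometric accuracies reported are root-mean-square errors over the surveyed GCPs. A minimal sketch with invented surveyed versus model-derived coordinates (in metres); the actual SfM adjustment is of course far more involved.

```python
# Overall and vertical RMSE over ground control points, as reported above.
import numpy as np

rng = np.random.default_rng(4)
surveyed = rng.uniform(0, 100, size=(20, 3))             # 20 GCPs: (x, y, z) in m
modelled = surveyed + rng.normal(0, 0.01, size=(20, 3))  # SfM-derived positions

residuals = modelled - surveyed
rmse_overall = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))  # 3D RMSE
rmse_vertical = np.sqrt(np.mean(residuals[:, 2] ** 2))           # z-only RMSE
print("overall RMSE (m):", round(rmse_overall, 4))
print("vertical RMSE (m):", round(rmse_vertical, 4))
```

By construction the vertical RMSE can never exceed the overall (3D) RMSE, which matches the pattern of the figures quoted in the abstract.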
Interactive visualisation for interpreting diagnostic test accuracy study results.
Fanshawe, Thomas R; Power, Michael; Graziadio, Sara; Ordóñez-Mena, José M; Simpson, John; Allen, Joy
2018-02-01
Information about the performance of diagnostic tests is typically presented in the form of measures of test accuracy such as sensitivity and specificity. These measures may be difficult to translate directly into decisions about patient treatment, for which information presented in the form of probabilities of disease after a positive or a negative test result may be more useful. These probabilities depend on the prevalence of the disease, which is likely to vary between populations. This article aims to clarify the relationship between pre-test (prevalence) and post-test probabilities of disease, and presents two free, online interactive tools to illustrate this relationship. These tools allow probabilities of disease to be compared with decision thresholds above and below which different treatment decisions may be indicated. They are intended to help those involved in communicating information about diagnostic test performance and are likely to be of benefit when teaching these concepts. A substantive example is presented using C reactive protein as a diagnostic marker for bacterial infection in the older adult population. The tools may also be useful for manufacturers of clinical tests in planning product development, for authors of test evaluation studies to improve reporting and for users of test evaluations to facilitate interpretation and application of the results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
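The pre-test/post-test relationship these tools visualise is Bayes' theorem applied to prevalence, sensitivity and specificity. A minimal sketch; the numbers below are invented placeholders, not values from the article's C reactive protein example.

```python
# Post-test probabilities of disease after a positive or negative test result,
# as a function of pre-test probability (prevalence), sensitivity, specificity.
def post_test_probabilities(prevalence, sensitivity, specificity):
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_pos            # P(disease | positive test)
    p_neg = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
    fomr = (1 - sensitivity) * prevalence / p_neg     # P(disease | negative test)
    return ppv, fomr

# Illustrative values only: 20% pre-test probability, 85%/90% test performance
ppv, fomr = post_test_probabilities(prevalence=0.2, sensitivity=0.85, specificity=0.90)
print(f"P(disease | +) = {ppv:.2f}, P(disease | -) = {fomr:.2f}")
```

Because both post-test probabilities depend on prevalence, the same test yields different probabilities in different populations, which is exactly why the article argues sensitivity and specificity alone are hard to translate into treatment decisions.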
Risk of bias reporting in the recent animal focal cerebral ischaemia literature.
Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily
2017-10-15
Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).
Zhang, Zhongheng; Hong, Yucai; Liu, Ning; Chen, Yuhao
2017-06-30
We aimed to investigate the diagnostic accuracy of contrast-enhanced ultrasound (CEUS) in evaluating blunt abdominal trauma for patients presenting to the emergency department. An electronic search of Scopus and PubMed was performed from inception to September 2016. Human studies investigating the diagnostic accuracy of CEUS in identifying abdominal solid organ injuries were included. Risk of bias was assessed using the QUADAS tool. A total of 10 studies were included in the review, 9 of which were included in the meta-analysis. The log(DOR) values ranged from 3.80 (95% CI: 2.81-4.79) to 8.52 (95% CI: 4.58-12.47) in component studies. The combined log(DOR) was 6.56 (95% CI: 5.66-7.45). Cochran's Q was 11.265 (p = 0.793 with 16 degrees of freedom), and Higgins' I² was 0%. CEUS had a sensitivity of 0.981 (95% CI: 0.868-0.950) and a false positive rate of 0.018 (95% CI: 0.010-0.032) for identifying parenchymal injuries, with an AUC of 0.984. CEUS performed in the emergency department had good diagnostic accuracy in identifying abdominal solid organ injuries. CEUS can be recommended for monitoring solid organ injuries, especially for patients managed with a non-operative strategy.
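The pooled log(DOR), Cochran's Q and Higgins' I² reported above can be obtained with standard inverse-variance (fixed-effect) meta-analysis formulas. A sketch with illustrative inputs (the `log_dors` and `variances` values below are made up, not the study data):

```python
import math

def pool_log_dor(log_dors, variances):
    """Inverse-variance (fixed-effect) pooling of study-level log diagnostic
    odds ratios, with Cochran's Q and Higgins' I^2 for heterogeneity."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, log_dors)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_dors))
    df = len(log_dors) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    se = math.sqrt(1.0 / sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, q, i2

# Illustrative inputs only:
pooled, ci, q, i2 = pool_log_dor([4.0, 6.0, 5.0], [1.0, 1.0, 1.0])
print(pooled, q, i2)  # 5.0 2.0 0.0
```

An I² of 0%, as in the review, means Q does not exceed its degrees of freedom, i.e. the between-study variation is no larger than expected by chance.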
Semi-supervised classification tool for DubaiSat-2 multispectral imagery
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed
2015-10-01
This paper addresses a semi-supervised classification tool based on a pixel-based approach for multi-spectral satellite imagery. Few studies have demonstrated such an algorithm for multispectral images, especially when the image consists of 4 bands (Red, Green, Blue and Near Infrared), as in DubaiSat-2 satellite images. The proposed approach utilizes unsupervised and supervised classification schemes sequentially to identify four classes in the image: water bodies, vegetation, land (developed and undeveloped areas) and paved areas (i.e. roads). The unsupervised classification step identifies two classes, water bodies and vegetation, based on a well-known index that exploits the distinct wavelengths of visible and near-infrared sunlight absorbed and reflected by plants: the Normalized Difference Vegetation Index (NDVI). Afterward, supervised classification is performed by selecting homogeneous training samples for roads and land areas. Here, precise selection of training samples plays a vital role in classification accuracy. Post-classification is finally performed to enhance the classification accuracy, where the classified image is sieved, clumped and filtered before producing the final output. Overall, the supervised classification approach produced higher accuracy than the unsupervised method. This paper presents preliminary research results that point out the effectiveness of the proposed technique.
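The NDVI-based unsupervised step can be sketched as follows; the threshold values here are illustrative assumptions, not the values used for DubaiSat-2:

```python
import numpy as np

def ndvi_classes(red, nir, water_thresh=0.0, veg_thresh=0.4):
    """Label water and vegetation pixels from red/NIR reflectance via NDVI.
    Threshold values are illustrative, not those used in the paper."""
    red = np.asarray(red, float)
    nir = np.asarray(nir, float)
    denom = np.where((nir + red) == 0, 1.0, nir + red)  # avoid divide-by-zero
    ndvi = (nir - red) / denom
    classes = np.full(ndvi.shape, "other", dtype=object)
    classes[ndvi < water_thresh] = "water"       # water absorbs NIR strongly
    classes[ndvi > veg_thresh] = "vegetation"    # plants reflect NIR strongly
    return ndvi, classes

ndvi, classes = ndvi_classes([[0.5, 0.1]], [[0.1, 0.6]])
print(classes)  # [['water' 'vegetation']]
```

The remaining "other" pixels would then be separated into land and paved areas by the supervised step described above.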
A novel algorithm for detecting active propulsion in wheelchair users following spinal cord injury.
Popp, Werner L; Brogioli, Michael; Leuenberger, Kaspar; Albisser, Urs; Frotzler, Angela; Curt, Armin; Gassert, Roger; Starkey, Michelle L
2016-03-01
Physical activity in wheelchair-bound individuals can be assessed by monitoring their mobility, as this is one of the most intense upper extremity activities they perform. Current accelerometer-based approaches for describing wheelchair mobility do not distinguish between self- and attendant-propulsion and hence may overestimate total physical activity. The aim of this study was to develop and validate an inertial measurement unit based algorithm to monitor wheel kinematics and the type of wheelchair propulsion (self- or attendant-propelled) within a "real-world" situation. Different sensor set-ups were investigated, ranging from a high-precision set-up including four sensor modules with a relatively short measurement duration of 24 h, to a less precise set-up with only one module attached to the wheel, whose measurement duration exceeded one week because the gyroscope of the sensor was turned off. The "high-precision" algorithm distinguished self- and attendant-propulsion with accuracy greater than 93%, whilst the long-term measurement set-up showed an accuracy of 82%. The estimation accuracy of kinematic parameters was greater than 97% for both set-ups. The possibility of having different sensor set-ups allows the use of the inertial measurement units as high-precision tools for researchers as well as unobtrusive and simple tools for manual wheelchair users. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki
2015-06-01
Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
Breast Cancer Detection by B7-H3-Targeted Ultrasound Molecular Imaging.
Bachawal, Sunitha V; Jensen, Kristin C; Wilson, Katheryne E; Tian, Lu; Lutz, Amelie M; Willmann, Jürgen K
2015-06-15
Ultrasound complements mammography as an imaging modality for breast cancer detection, especially in patients with dense breast tissue, but its utility is limited by low diagnostic accuracy. One emerging molecular tool to address this limitation involves contrast-enhanced ultrasound using microbubbles targeted to molecular signatures on tumor neovasculature. In this study, we illustrate how tumor vascular expression of B7-H3 (CD276), a member of the B7 family of ligands for T-cell coregulatory receptors, can be incorporated into an ultrasound method that can distinguish normal, benign, precursor, and malignant breast pathologies for diagnostic purposes. Through an IHC analysis of 248 human breast specimens, we found that vascular expression of B7-H3 was selectively and significantly higher in breast cancer tissues. B7-H3 immunostaining on blood vessels distinguished benign/precursors from malignant lesions with high diagnostic accuracy in human specimens. In a transgenic mouse model of cancer, the B7-H3-targeted ultrasound imaging signal was increased significantly in breast cancer tissues and highly correlated with ex vivo expression levels of B7-H3 on quantitative immunofluorescence. Our findings offer a preclinical proof of concept for the use of B7-H3-targeted ultrasound molecular imaging as a tool to improve the diagnostic accuracy of breast cancer detection in patients. ©2015 American Association for Cancer Research.
NASA Astrophysics Data System (ADS)
Prasetyo, T.; Amar, S.; Arendra, A.; Zam Zami, M. K.
2018-01-01
This study develops an on-line detection system to predict the wear of the DCMT070204 tool tip during cutting of the workpiece. The machine used in this research is a CNC ProTurn 9000 cutting an ST42 steel cylinder. The audio signal was captured using a microphone placed in the tool post and recorded in Matlab. The signal was recorded at a sampling rate of 44.1 kHz and a sampling size of 1024. The recorded data comprise 110 samples of the audio signal captured while cutting with an unworn tool and with a worn tool. Signal features were then extracted in the frequency domain using the Fast Fourier Transform, and feature selection was performed based on correlation analysis. Tool wear classification was performed using an artificial neural network with the 33 selected input features, trained with the backpropagation method. Classification performance testing yielded an accuracy of 74%.
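The frequency-domain feature extraction step (44.1 kHz sampling, 1024-sample frames, as stated above) can be sketched as follows; the Hann window is my assumption, since the abstract does not state one:

```python
import numpy as np

def spectral_features(frame, fs=44100):
    """Magnitude spectrum of one 1024-sample audio frame.
    The Hann window is an assumption; the paper does not specify one."""
    window = np.hanning(len(frame))
    magnitude = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return freqs, magnitude

# A pure 5 kHz tone peaks near the 5 kHz bin (bin spacing ~43 Hz).
fs = 44100
t = np.arange(1024) / fs
freqs, mag = spectral_features(np.sin(2 * np.pi * 5000.0 * t), fs)
print(freqs[np.argmax(mag)])
```

Per-bin magnitudes like these would then be screened by correlation analysis to select the 33 input features for the network.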
Quantitative property-structural relation modeling on polymeric dielectric materials
NASA Astrophysics Data System (ADS)
Wu, Ke
Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. 
As a key factor in determining the dispersion state of nanoparticles in the polymer matrix, the surface tension components of polymers are modeled using ICD. Compared to the 3D surface descriptors used in a previous study, the model with ICD has much improved prediction accuracy and stability, particularly for the polar component. In predicting the enhancement effect of grafting functional groups on the breakdown strength of PNC, a simple local charge transfer model is proposed in which the electron affinity (EA) and ionization energy (IE) determine the main charge trap depth in the system. This physical model is supported by first-principles computation. QSPR models for EA and IE are also built, decreasing the computation time of EA and IE for a single molecule from several hours to less than one second. Furthermore, the designs of two web-based tools are introduced. The tools represent two commonly used applications for QSPR studies: data inquiry and prediction. Making models and data publicly available and easy to use is particularly crucial for QSPR research. The web tools described in this work should provide good guidance and a starting point for the further development of information tools enabling more efficient cooperation between computational and experimental communities.
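The multi-mechanism idea above, partition the data, then fit a simpler model per subset, can be illustrated with a hard-assignment toy version (the thesis uses a soft cluster-weighted mixture; this 1-D k-means-plus-linear-fit sketch is only an analogy):

```python
import numpy as np

def clusterwise_fit(x, y, iters=20):
    """Split 1-D descriptor data into two clusters, then fit a separate
    linear model to each subset (hard-assignment simplification of a
    cluster-weighted mixture model)."""
    centers = np.array([x.min(), x.max()])  # deterministic 2-cluster init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(2)])
    models = [np.polyfit(x[labels == j], y[labels == j], 1) for j in range(2)]
    return labels, models

# Two regimes with different slopes are recovered by separate simple models.
x = np.concatenate([np.linspace(0, 1, 20), np.linspace(10, 11, 20)])
y = np.concatenate([2 * np.linspace(0, 1, 20) + 1,
                    -np.linspace(10, 11, 20) + 5])
labels, models = clusterwise_fit(x, y)
print(models[0], models[1])  # slopes ~2 and ~-1
```

Each subset is fitted by a model far simpler than one covering the whole dataset, which is the benefit the glass transition temperature case study reports.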
Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies
NASA Astrophysics Data System (ADS)
Hutchings, L. J.; Ryan, J.
2010-12-01
Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the “real world”, and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to recover them. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise, hypocenters are perturbed to replicate a starting location away from the “true” location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit.
The limiting factors are the size of computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.
SU-E-J-92: CERR: New Tools to Analyze Image Registration Precision.
Apte, A; Wang, Y; Oh, J; Saleh, Z; Deasy, J
2012-06-01
To present new tools in CERR (The Computational Environment for Radiotherapy Research) to analyze image registration, along with other software updates/additions. CERR continues to be a key environment (cited more than 129 times to date) for numerous RT-research studies involving outcomes modeling, prototyping algorithms for segmentation and registration, experiments with phantom dosimetry, IMRT research, etc. Image registration is one of the key technologies required in many research studies. CERR has been interfaced with popular image registration frameworks such as Plastimatch and ITK. Once the images have been auto-registered, CERR provides tools to analyze the accuracy of registration using the following innovative approaches: (1) Distance Discordance Histograms (DDH), described in detail in a separate paper, and (2) 'MirrorScope', explained as follows: for any view plane the 2D image is broken up into a 2D grid of medium-sized squares. Each square contains a right half, which is the reference image, and a left half, which is the mirror-flipped version of the overlay image. The user can increase or decrease the size of this grid to control the resolution of the analysis. Other updates to CERR include tools to extract image and dosimetric features programmatically, storage in a central database, and tools to interface with statistical analysis software such as SPSS and the Matlab Statistics Toolbox. MirrorScope was compared on various examples, including 'perfect' registration examples and 'artificially translated' registrations. For 'perfect' registration, the patterns obtained within each square are symmetric and easily, visually recognized as aligned. For registrations that are off, the squares located in the regions of imperfection show asymmetric patterns that are easily recognized. The new updates to CERR further increase its utility for RT-research.
MirrorScope is a visually intuitive method of monitoring the accuracy of image registration that improves on the visual confusion of standard methods. © 2012 American Association of Physicists in Medicine.
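A minimal sketch of the MirrorScope idea described above (my reconstruction from the abstract, not CERR's actual implementation): each tile's right half shows the reference image and its left half shows the horizontally mirrored overlay, so a perfect registration yields left-right symmetric tiles:

```python
import numpy as np

def mirrorscope(reference, overlay, tile=32):
    """Compose a tiled mirror view of two registered 2-D images.
    Right half of each tile: reference; left half: mirrored overlay."""
    out = reference.copy()
    h, w = reference.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            half = min(tile, w - j) // 2
            ovl = overlay[i:i + tile, j:j + tile]
            # mirror the overlay's right half into the tile's left half
            out[i:i + tile, j:j + half] = ovl[:, half:2 * half][:, ::-1]
    return out

# With identical, perfectly registered images every tile is symmetric.
img = np.arange(64.0 * 64.0).reshape(64, 64)
view = mirrorscope(img, img, tile=32)
```

Misregistration breaks the mirror symmetry within the affected tiles, which is what makes errors easy to spot visually.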
Effect of tropospheric models on derived precipitable water vapor over Southeast Asia
NASA Astrophysics Data System (ADS)
Rahimi, Zhoobin; Mohd Shafri, Helmi Zulhaidi; Othman, Faridah; Norman, Masayu
2017-05-01
An interesting subject in the field of GPS technology is estimating the variation of precipitable water vapor (PWV). This estimation can be used as a data source to assess and monitor rapid changes in meteorological conditions. Numerous GPS stations are now distributed across the world, and the number of GPS networks is increasing. Despite these developments, a challenging aspect of estimating PWV through GPS networks is the need for tropospheric parameters such as temperature, pressure, and relative humidity (Liu et al., 2015). To estimate the tropospheric parameters, the global pressure temperature (GPT) model developed by Boehm et al. (2007) is widely used in geodetic analysis of GPS observations. To improve the accuracy, Lagler et al. (2013) introduced the GPT2 model by adding annual and semi-annual variation effects to the GPT model. Furthermore, Boehm et al. (2015) proposed the GPT2 wet (GPT2w) model, which uses water vapor pressure to improve the calculations. The global accuracy of the GPT2 and GPT2w models has been evaluated by previous researchers (Fund et al., 2011; Munekane and Boehm, 2010); however, investigations assessing the accuracy of global tropospheric models in tropical regions such as Southeast Asia are not sufficient. This study tests and examines the accuracy of GPT2w, one of the most recent tropospheric models (Boehm et al., 2015). We developed a new regional model called the Malaysian Pressure Temperature (MPT) model and compared it with the GPT2w model. Comparison at one International GNSS Service (IGS) station located in the south of Peninsular Malaysia shows that the MPT model performs better than the GPT2w model in producing PWV during the monsoon season. According to the results, MPT improved the accuracy of estimated pressure and temperature by 30% and 10%, respectively, in comparison with the GPT2w model.
These results indicate that the MPT model can be a good alternative in the absence of meteorological sensors at GPS stations in Peninsular Malaysia. Therefore, for GPS-based studies, we recommend the MPT model as a complementary tool for the Malaysia Real-Time Kinematic Network to develop a real-time PWV monitoring system.
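PWV itself is obtained from the GPS-derived zenith wet delay through a temperature-dependent conversion factor. A sketch using commonly cited refractivity constants (the constants and the weighted mean temperature value below are general assumptions, not values from this study):

```python
def zwd_to_pwv(zwd_m, tm_kelvin):
    """Convert zenith wet delay (m) to precipitable water vapor (m) via the
    dimensionless conversion factor Pi(Tm). Constants are commonly cited
    values (k3 = 3.739e5 K^2/hPa, k2' = 22.1 K/hPa), assumed here."""
    rho_w = 1000.0   # density of liquid water, kg/m^3
    r_v = 461.5      # specific gas constant of water vapor, J/(kg K)
    k3 = 3.739e5     # K^2 / hPa
    k2p = 22.1       # K / hPa
    pi_factor = 1e8 / (rho_w * r_v * (k3 / tm_kelvin + k2p))
    return pi_factor * zwd_m

# A ZWD of 0.25 m at Tm = 275 K corresponds to roughly 39 mm of PWV.
print(zwd_to_pwv(0.25, 275.0) * 1000)
```

Because the conversion depends on pressure and temperature at the station, errors in the tropospheric model (GPT2w vs. MPT) propagate directly into the estimated PWV, which is why the regional model matters.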
Wade, Ryckie G; Takwoingi, Yemisi; Wormald, Justin C R; Ridgway, John P; Tanner, Steven; Rankine, James J; Bourke, Grainne
2018-05-19
Adult brachial plexus injuries (BPI) are becoming more common. The reconstruction and prognosis of pre-ganglionic injuries (root avulsions) differ from those of other types of BPI. Preoperative magnetic resonance imaging (MRI) is being used to identify root avulsions, but the evidence from studies of its diagnostic accuracy is conflicting. Therefore, a systematic review is needed to address uncertainty about the accuracy of MRI and to guide future research. We will conduct a systematic search of electronic databases alongside reference tracking. We will include studies of adults with traumatic BPI which report the accuracy of preoperative MRI (index test) against surgical exploration of the roots of the brachial plexus (reference standard) for detecting either of the two target conditions (any root avulsion, or any pseudomeningocoele as a surrogate marker of root avulsion). We will exclude case reports, articles considering bilateral injuries and studies where the number of true positives, false positives, false negatives and true negatives cannot be derived. The methodological quality of the included studies will be assessed using a tailored version of the QUADAS-2 tool. Where possible, a bivariate model will be used for meta-analysis to obtain summary sensitivities and specificities for both target conditions. We will investigate heterogeneity in the performance of MRI according to field strength and the risk of bias if the data permit. This review will summarise the current diagnostic accuracy of MRI for adult BPI, identify shortcomings and gaps in the literature and so help to guide future research. PROSPERO CRD42016049702.
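The inclusion criterion above, that true positives, false positives, false negatives and true negatives must be derivable, reflects how the accuracy measures are computed from each study's 2×2 table. For illustration (the counts below are made up):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a study's 2x2 table of the
    index test (MRI) against the reference standard (surgical findings)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration only:
sens, spec = diagnostic_accuracy(tp=45, fp=5, fn=10, tn=40)
print(round(sens, 2), round(spec, 2))  # 0.82 0.89
```

The planned bivariate meta-analysis then pools such study-level pairs while accounting for the correlation between sensitivity and specificity.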
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Quantitative optical metrology with CMOS cameras
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.
2004-08-01
Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full field of view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence selection and applicability of an optical technique include the required sensitivity, accuracy, and precision that are necessary for a particular application. In this paper, sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
Let's get honest about sampling.
Mobley, David L
2012-01-01
Molecular simulations see widespread and increasing use in computation and molecular design, especially in the area of biomolecular binding and interactions, our focus here. However, force field accuracy remains a concern for many practitioners, and it is often not clear what level of accuracy is really needed for payoffs in a discovery setting. Here, I argue that despite the limitations of today's force fields, current simulation tools and force fields now provide the potential for real benefits in a variety of applications. However, these same tools also produce irreproducible results which are often poorly interpreted. Continued progress in the field requires more honesty in assessment and care in the evaluation of simulation results, especially with respect to convergence.
Ibraheem, Kareem; Toraih, Eman A; Haddad, Antoine B; Farag, Mahmoud; Randolph, Gregory W; Kandil, Emad
2018-05-14
Minimally invasive parathyroidectomy requires accurate preoperative localization techniques. There is considerable controversy about the effectiveness of selective parathyroid venous sampling (sPVS) in primary hyperparathyroidism (PHPT) patients. The aim of this meta-analysis is to examine the diagnostic accuracy of sPVS as a preoperative localization modality in PHPT. Studies evaluating the diagnostic accuracy of sPVS for PHPT were electronically searched in the PubMed, EMBASE, Web of Science, and Cochrane Controlled Trials Register databases. Two independent authors reviewed the studies, and the revised Quality Assessment of Diagnostic Accuracy Studies tool was used for the quality assessment. Study heterogeneity and pooled estimates were calculated. Two hundred and two unique studies were identified; of those, 12 studies were included in the meta-analysis. Pooled sensitivity, specificity, and positive likelihood ratio (PLR) of sPVS were 74%, 41%, and 1.55, respectively. The area under the receiver operating characteristic curve was 0.684, indicating an average discriminatory ability for sPVS. On comparison between sPVS and noninvasive imaging modalities, sensitivity, PLR, and positive posttest probability were significantly higher for sPVS than for noninvasive imaging modalities. Interestingly, super-selective venous sampling had the highest sensitivity, accuracy, and positive posttest probability compared to other parathyroid venous sampling techniques. This is the first meta-analysis to examine the accuracy of sPVS in PHPT. sPVS had higher pooled sensitivity when compared to noninvasive modalities in revision parathyroid surgery. However, the invasiveness of this technique does not favor its routine use for preoperative localization. Super-selective venous sampling was the most accurate among all parathyroid venous sampling techniques. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
Morales, Susana; Barros, Jorge; Echávarri, Orietta; García, Fabián; Osses, Alex; Moya, Claudia; Maino, María Paz; Fischman, Ronit; Núñez, Catalina; Szmulewicz, Tita; Tomicic, Alemka
2017-01-01
In efforts to develop reliable methods to detect the likelihood of impending suicidal behaviors, we have proposed the following aim: to gain a deeper understanding of the state of suicide risk by determining the combination of variables that distinguishes between groups with and without suicide risk. We conducted a study involving 707 patients consulting for mental health issues in three health centers in Greater Santiago, Chile. Using 345 variables, an analysis was carried out with artificial intelligence tools, Cross Industry Standard Process for Data Mining processes, and decision tree techniques. The basic algorithm was top-down, and the most suitable division produced by the tree was selected using the lowest Gini index as a criterion, looping until the condition of belonging to the group with suicidal behavior was fulfilled. Four trees distinguishing the groups were obtained, of which the elements of one were analyzed in greater detail, since this tree included both clinical and personality variables. This specific tree consists of six nodes without suicide risk and eight nodes with suicide risk (decision tree 01: accuracy 0.674, precision 0.652, recall 0.678, specificity 0.670, F-measure 0.665, receiver operating characteristic (ROC) area under the curve (AUC) 73.35%; decision tree 02: accuracy 0.669, precision 0.642, recall 0.694, specificity 0.647, F-measure 0.667, ROC AUC 68.91%; decision tree 03: accuracy 0.681, precision 0.675, recall 0.638, specificity 0.721, F-measure 0.656, ROC AUC 65.86%; decision tree 04: accuracy 0.714, precision 0.734, recall 0.628, specificity 0.792, F-measure 0.677, ROC AUC 58.85%). This study defines the interactions among a group of variables associated with suicidal ideation and behavior. By using these variables, it may be possible to create a quick and easy-to-use tool.
As such, psychotherapeutic interventions could be designed to mitigate the impact of these variables on the emotional state of individuals, thereby reducing eventual risk of suicide. Such interventions may reinforce psychological well-being, feelings of self-worth, and reasons for living, for each individual in certain groups of patients.
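The split criterion used to grow these trees, choosing the division with the lowest Gini index, can be sketched as follows; the groups and 0/1 labels below are illustrative, not patient data:

```python
def gini_split(groups):
    """Weighted Gini impurity of a candidate split (lower is better).
    Each group is a list of 0/1 labels (e.g. suicide risk no/yes)."""
    total = sum(len(g) for g in groups)
    score = 0.0
    for g in groups:
        if not g:
            continue
        p = sum(g) / len(g)
        score += (1.0 - p * p - (1.0 - p) * (1.0 - p)) * len(g) / total
    return score

print(gini_split([[1, 1, 1], [0, 0]]))  # 0.0  (perfect separation)
print(gini_split([[1, 0], [1, 0]]))     # 0.5  (no separation)
```

The top-down algorithm evaluates this score for every candidate variable and threshold at each node, and keeps the split with the lowest value.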
Automation of a suturing device for minimally invasive surgery.
Göpel, Tobias; Härtl, Felix; Schneider, Armin; Buss, Martin; Feussner, Hubertus
2011-07-01
In minimally invasive surgery, hand suturing is a challenge in both technique and duration. This calls for an easily manageable tool permitting all-purpose, cost-efficient, and secure viscerosynthesis. Such a tool already exists for this field: the Autosuture EndoStitch(®). In a series of studies the potential of the EndoStitch to accelerate suturing has been proven; however, its ergonomics still limits its applicability. The goal of this study was twofold: to propose an optimized and partially automated EndoStitch, and to compare the conventional EndoStitch with the optimized, partially automated version with respect to the speed and precision of suturing. Based on the EndoStitch, a partially automated suturing tool was developed: with the aid of a DC motor triggered by a button, one can suture with one-fingered handling. Twenty surgeons with different levels of laparoscopic experience successfully completed a continuous suture with 10 stitches using both the conventional and the partially automated suture manipulator. Before that, each participant was given 1 min of instruction and 1 min for training. Absolute suturing time and stitch accuracy were measured. The quality of the automated EndoStitch with respect to manipulation was assessed with a standardized questionnaire. To compare the two instruments, t tests were used for suturing accuracy and time. Among the 20 surgeons (fewer than 5 laparoscopic interventions, n=9; fewer than 20 laparoscopic interventions, n=7; more than 20 laparoscopic interventions, n=4), there was no significant difference between the two tested systems with respect to stitching accuracy. However, the suturing time was significantly shorter with the Autostitch (P=0.01). The difference in accuracy and speed was not statistically significant when considering the laparoscopic experience of the surgeons.
The weight and size of the Autostitch were criticized, as was its cable; however, the comfortable handhold, automatic needle change, and ergonomic manipulation were rated positively. Partially automated suturing in minimally invasive surgery offers advantages in operating speed and ergonomics. Ongoing work in this field should concentrate on miniaturization, implementation in robotic systems, and the development of new operation methods (NOTES).
Design and Testing of a Tool for Evaluating the Quality of Diabetes Consumer-Information Web Sites
Steinwachs, Donald; Rubin, Haya R
2003-01-01
Background Most existing tools for measuring the quality of Internet health information focus almost exclusively on structural criteria or other proxies for quality information rather than evaluating actual accuracy and comprehensiveness. Objective This research sought to develop a new performance-measurement tool for evaluating the quality of Internet health information, test the validity and reliability of the tool, and assess the variability in diabetes Web site quality. Methods An objective, systematic tool was developed to evaluate Internet diabetes information based on a quality-of-care measurement framework. The principal investigator developed an abstraction tool and trained an external reviewer on its use. The tool included 7 structural measures and 34 performance measures created by using evidence-based practice guidelines and experts' judgments of accuracy and comprehensiveness. Results Substantial variation existed in all categories, with overall scores following a normal distribution and ranging from 15% to 95% (mean was 50% and median was 51%). Lin's concordance correlation coefficient to assess agreement between raters produced a rho of 0.761 (Pearson's r of 0.769), suggesting moderate to high agreement. The average agreement between raters for the performance measures was 0.80. Conclusions Diabetes Web site quality varies widely. Alpha testing of this new tool suggests that it could become a reliable and valid method for evaluating the quality of Internet health sites. Such an instrument could help lay people distinguish between beneficial and misleading information. PMID:14713658
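The inter-rater agreement statistic named above, Lin's concordance correlation coefficient, is simple to compute directly. The sketch below is illustrative only: the rater scores are hypothetical, not data from the study.

```python
import statistics

def lins_ccc(x, y):
    # Lin's concordance correlation coefficient between two raters' scores
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx = sum((a - mx) ** 2 for a in x) / n          # population variances,
    vy = sum((b - my) ** 2 for b in y) / n          # as in Lin (1989)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# hypothetical site scores from two raters, not the study's data
rater1 = [15.0, 35.0, 50.0, 51.0, 70.0, 95.0]
rater2 = [18.0, 33.0, 52.0, 49.0, 74.0, 90.0]
ccc = lins_ccc(rater1, rater2)
```

Unlike Pearson's r, the (mx - my)^2 term in the denominator penalizes systematic offset between raters, which is why the concordance coefficient can be lower than the plain correlation.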
ERIC Educational Resources Information Center
Rellensmann, Johanna; Schukajlow, Stanislaw; Leopold, Claudia
2017-01-01
Drawing strategies are widely used as a powerful tool for promoting students' learning and problem solving. In this article, we report the results of an inferential mediation analysis that was applied to investigate the roles that strategic knowledge about drawing and the accuracy of different types of drawings play in mathematical modelling…
The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency
ERIC Educational Resources Information Center
Leal, Dorothy J.
2005-01-01
The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…
2007-03-01
[Figure and equation residue from an IMPRINT human-performance modeling report; only the figure captions are recoverable: Figure 2, "User-defined stressor interface"; Figure 3, "Stressor levels in IMPRINT"; Figure 4, "Accuracy stressor definition"; plus a fragment of an accuracy-degradation equation.]
Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data
ERIC Educational Resources Information Center
Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy
2016-01-01
Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
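Parallel analysis retains a factor only if its observed eigenvalue exceeds a high percentile of eigenvalues obtained from random data of the same size. A stdlib-only sketch for the first eigenvalue, using power iteration; all data here are simulated for illustration and are not from the study:

```python
import random
import statistics

def corr_matrix(data):
    # data: list of variables (columns), each a list of n observations
    p, n = len(data), len(data[0])
    means = [statistics.fmean(c) for c in data]
    sds = [statistics.pstdev(c) for c in data]
    return [[sum((data[i][k] - means[i]) * (data[j][k] - means[j]) for k in range(n))
             / (n * sds[i] * sds[j]) for j in range(p)] for i in range(p)]

def top_eigenvalue(R, iters=200):
    # power iteration for the largest eigenvalue of a symmetric PSD matrix
    p = len(R)
    v = [1.0] * p
    lam = 0.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(p)) for i in range(p)]
        lam = sum(x * x for x in w) ** 0.5
        v = [x / lam for x in w]
    return lam

def parallel_analysis_first_factor(data, n_sims=50, pct=0.95, seed=0):
    # retain the first factor if its observed eigenvalue beats (roughly)
    # the pct-percentile eigenvalue from same-sized random normal data
    rng = random.Random(seed)
    p, n = len(data), len(data[0])
    observed = top_eigenvalue(corr_matrix(data))
    sims = sorted(top_eigenvalue(corr_matrix(
        [[rng.gauss(0, 1) for _ in range(n)] for _ in range(p)]))
        for _ in range(n_sims))
    threshold = sims[int(pct * n_sims) - 1]
    return observed, threshold, observed > threshold

# simulated five-variable data with one strong common factor
rng = random.Random(1)
f = [rng.gauss(0, 1) for _ in range(200)]
data = [[fi + 0.5 * rng.gauss(0, 1) for fi in f] for _ in range(5)]
observed, threshold, retain = parallel_analysis_first_factor(data)
```

With a strong common factor the observed first eigenvalue sits far above the random-data threshold, so the factor is retained; the revised PA discussed in the abstract changes how that comparison is framed, not the basic mechanics sketched here.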
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on overall measurement accuracy, multi-station, time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing a mathematical model of machine tool motion error detection with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require selecting an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. The calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. Experiments further verify the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool influence the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result is analyzed in terms of the condition number of the coefficient matrix.
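The measuring-point determination described above is at heart a multilateration problem: recover a point's coordinates from range measurements to known base stations. A minimal Gauss-Newton sketch under a hypothetical station layout; this is the iterative baseline the paper compares against, not the authors' analytical algorithm, which avoids choosing an initial value:

```python
import math

def solve3(A, b):
    # 3x3 linear solve via Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def gauss_newton_trilateration(stations, dists, x0, iters=50):
    # minimize sum_i (|x - s_i| - d_i)^2 over the unknown point x
    x = list(x0)
    for _ in range(iters):
        J, r = [], []
        for s, d in zip(stations, dists):
            rng = math.dist(x, s)
            J.append([(x[k] - s[k]) / rng for k in range(3)])  # d|x-s|/dx
            r.append(rng - d)
        # normal equations (J^T J) dx = -J^T r; J^T J becomes singular in
        # degenerate geometries, echoing the 2D-plane case in the abstract
        A = [[sum(Ji[a] * Ji[b] for Ji in J) for b in range(3)] for a in range(3)]
        g = [-sum(Ji[a] * ri for Ji, ri in zip(J, r)) for a in range(3)]
        x = [xi + di for xi, di in zip(x, solve3(A, g))]
    return x

# hypothetical layout: four non-coplanar base stations, one target point
stations = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0), (0.0, 0.0, 4.0)]
true_point = (1.0, 2.0, 0.5)
dists = [math.dist(true_point, s) for s in stations]
est = gauss_newton_trilateration(stations, dists, x0=(0.5, 0.5, 0.5))
```

With exact (noise-free) ranges and a non-degenerate station geometry, the iteration recovers the target point; with real measurement noise, the conditioning of J^T J governs how errors propagate, which is the condition-number analysis the abstract refers to.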
BioLemmatizer: a lemmatization tool for morphological processing of biomedical text
2012-01-01
Background The wide variety of morphological variants of domain-specific technical terms contributes to the complexity of performing natural language processing of the scientific literature related to molecular biology. For morphological analysis of these texts, lemmatization has been actively applied in recent biomedical research. Results In this work, we developed a domain-specific lemmatization tool, BioLemmatizer, for the morphological analysis of biomedical literature. The tool focuses on the inflectional morphology of English and is based on the general English lemmatization tool MorphAdorner. The BioLemmatizer is further tailored to the biological domain through the incorporation of several published lexical resources. It retrieves lemmas based on the use of a word lexicon, and defines a set of rules that transform a word into a lemma if it is not encountered in the lexicon. An innovative aspect of the BioLemmatizer is the use of a hierarchical strategy for searching the lexicon, which enables the discovery of the correct lemma even if the input Part-of-Speech information is inaccurate. The BioLemmatizer achieves an accuracy of 97.5% in lemmatizing an evaluation set prepared from the CRAFT corpus, a collection of full-text biomedical articles, and an accuracy of 97.6% on the LLL05 corpus. The contribution of the BioLemmatizer to the accuracy of a practical information extraction task is further demonstrated when it is used as a component in a biomedical text mining system. Conclusions The BioLemmatizer outperforms eight existing lemmatizers in a direct comparison. The BioLemmatizer is released as open-source software and can be downloaded from http://biolemmatizer.sourceforge.net. PMID:22464129
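The lexicon-first, rules-as-fallback strategy with POS-tolerant lookup can be sketched as below. The lexicon entries and suffix rules are invented for illustration; BioLemmatizer's actual lexical resources and rule set are far larger and more nuanced.

```python
# hypothetical mini-lexicon keyed on (word, POS); illustrative only
LEXICON = {
    ("genes", "NNS"): "gene",
    ("analyses", "NNS"): "analysis",
    ("bound", "VBD"): "bind",
}

# fallback suffix rules, tried in order when the lexicon has no entry
SUFFIX_RULES = [("ies", "y"), ("xes", "x"), ("s", "")]

def lemmatize(word, pos):
    # hierarchical lookup: exact (word, POS) first, then the word under any
    # POS (tolerating an inaccurate tag), and only then the suffix rules
    if (word, pos) in LEXICON:
        return LEXICON[(word, pos)]
    for (w, _p), lemma in LEXICON.items():
        if w == word:
            return lemma
    for suffix, repl in SUFFIX_RULES:
        if word.endswith(suffix):
            return word[: -len(suffix)] + repl
    return word
```

The second lookup stage is what lets a query like ("analyses", "VBZ") still resolve to "analysis" despite the wrong tag, mirroring the POS-robustness the abstract highlights.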
Accuracy of Brief Screening Tools for Identifying Postpartum Depression Among Adolescent Mothers
Venkatesh, Kartik K.; Zlotnick, Caron; Triche, Elizabeth W.; Ware, Crystal
2014-01-01
OBJECTIVE: To evaluate the accuracy of the Edinburgh Postnatal Depression Scale (EPDS) and 3 subscales for identifying postpartum depression among primiparous adolescent mothers. METHODS: Mothers enrolled in a randomized controlled trial to prevent postpartum depression completed a psychiatric diagnostic interview and the 10-item EPDS at 6 weeks, 3 months, and 6 months postpartum. Three subscales of the EPDS were assessed as brief screening tools: the 3-item anxiety subscale (EPDS-3), the 7-item depressive symptoms subscale (EPDS-7), and the 2-item subscale (EPDS-2) that resembles the Patient Health Questionnaire-2. Receiver operating characteristic curves and the areas under the curves for each tool were compared to assess accuracy. The sensitivities and specificities of each screening tool were calculated in comparison with diagnostic criteria for a major depressive disorder. Repeated-measures longitudinal analytical techniques were used. RESULTS: A total of 106 women contributed 289 postpartum visits; 18% of the women met criteria for incident postpartum depression by psychiatric diagnostic interview. When used as continuous measures, the full EPDS, EPDS-7, and EPDS-2 performed equally well (area under the curve >0.9). Optimal cutoff scores for a positive depression screen for the EPDS and EPDS-7 were lower (≥9 and ≥7, respectively) than currently recommended cutoff scores (≥10). At optimal cutoff scores, the EPDS and EPDS-7 both had sensitivities of 90% and specificities of >85%. CONCLUSIONS: The EPDS, EPDS-7, and EPDS-2 are highly accurate at identifying postpartum depression among adolescent mothers. In primary care pediatric settings, the EPDS and its shorter subscales have potential for use as effective depression screening tools. PMID:24344102
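The cutoff-based evaluation above reduces to counting the four cells of a 2x2 table at each threshold. A small sketch with invented EPDS-style scores (not the study's data):

```python
def screen_performance(scores, diagnoses, cutoff):
    # sensitivity/specificity of "score >= cutoff" against interview diagnosis
    tp = sum(s >= cutoff and d for s, d in zip(scores, diagnoses))
    fn = sum(s < cutoff and d for s, d in zip(scores, diagnoses))
    tn = sum(s < cutoff and not d for s, d in zip(scores, diagnoses))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, diagnoses))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical screening scores; True marks major depression on interview
scores    = [12, 9, 4, 11, 7, 3, 15, 8, 2, 10]
diagnoses = [True, False, False, True, True, False, True, False, False, True]
sens, spec = screen_performance(scores, diagnoses, cutoff=9)
```

Sweeping the cutoff and plotting sensitivity against 1 - specificity traces the ROC curve whose area the study compares across the EPDS subscales.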
Wang, X W; Pappoe, F; Huang, Y; Cheng, X W; Xu, D F; Wang, H; Xu, Y H
2015-01-01
The Xpert MTB/RIF assay has been recommended by WHO to replace conventional microscopy, culture, and drug resistance tests; it simultaneously detects Mycobacterium tuberculosis infection (TB) and resistance to rifampicin (RIF) within two hours. The objective was to review the available research studies on the accuracy of the Xpert MTB/RIF assay for diagnosing pulmonary TB and RIF resistance in children. A comprehensive search of PubMed and Embase was performed up to October 28, 2014. We identified published articles estimating the diagnostic accuracy of the Xpert MTB/RIF assay in children with or without HIV, using culture or culture plus clinical TB as the reference standard. The QUADAS-2 tool was used to evaluate the quality of the studies. Summary estimates of sensitivity, specificity, diagnostic odds ratio (DOR), and the area under the summary ROC curve (AUC) were obtained, and meta-analysis was used to establish the overall accuracy. Eleven diagnostic studies with 3801 patients were included in the systematic review. The overall analysis revealed a moderate sensitivity and a high specificity of 65% (95% CI: 61 - 69%) and 99% (95% CI: 98 - 99%), respectively, and a pooled diagnostic odds ratio of 164.09 (95% CI: 111.89 - 240.64). The AUC value was 0.94. The pooled sensitivity and specificity for paediatric rifampicin resistance were 94.0% (95% CI: 80.0 - 93.0%) and 99.0% (95% CI: 95.0 - 98.0%), respectively. The Xpert MTB/RIF assay is therefore sensitive and specific for diagnosing paediatric pulmonary TB and effective in detecting rifampicin resistance, and can be used as an initial diagnostic tool.
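A pooled diagnostic odds ratio like the one reported above is commonly obtained by inverse-variance weighting of per-study log DORs. The sketch below shows that fixed-effect approach with invented 2x2 counts; the review does not state which pooling model it used, so treat this as one standard option rather than the authors' method.

```python
import math

def pooled_dor(studies):
    # fixed-effect (inverse-variance) pooling of log diagnostic odds ratios;
    # studies: list of (tp, fp, fn, tn) counts per study
    num = den = 0.0
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))  # continuity correction
        log_dor = math.log(tp * tn / (fp * fn))
        weight = 1.0 / (1 / tp + 1 / fp + 1 / fn + 1 / tn)  # 1 / var(log DOR)
        num += weight * log_dor
        den += weight
    return math.exp(num / den)

# hypothetical per-study 2x2 counts, not the counts from the review
studies = [(40, 3, 20, 300), (25, 2, 15, 180), (60, 5, 30, 420)]
pooled = pooled_dor(studies)
```

A random-effects variant would add a between-study variance term to each weight; with heterogeneous diagnostic studies that is often the more defensible choice.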
Analysis of laparoscopy in trauma.
Villavicencio, R T; Aucar, J A
1999-07-01
The optimum roles for laparoscopy in trauma have yet to be established. To date, reviews of laparoscopy in trauma have been primarily descriptive rather than analytic. This article analyzes the results of laparoscopy in trauma. Outcome analysis was done by reviewing 37 studies with more than 1,900 trauma patients, and laparoscopy was analyzed as a screening, diagnostic, or therapeutic tool. Laparoscopy was regarded as a screening tool if it was used to detect or exclude a positive finding (eg, hemoperitoneum, organ injury, gastrointestinal spillage, peritoneal penetration) that required operative exploration or repair. Laparoscopy was regarded as a diagnostic tool when it was used to identify all injuries, rather than as a screening tool to identify the first indication for a laparotomy. It was regarded as a diagnostic tool only in studies that mandated a laparotomy (gold standard) after laparoscopy to confirm the diagnostic accuracy of laparoscopic findings. Costs and charges for using laparoscopy in trauma were analyzed when feasible. As a screening tool, laparoscopy missed 1% of injuries and helped prevent 63% of patients from having a trauma laparotomy. When used as a diagnostic tool, laparoscopy had a 41% to 77% missed injury rate per patient. Overall, laparoscopy carried a 1% procedure-related complication rate. Cost-effectiveness has not been uniformly proved in studies comparing laparoscopy and laparotomy. Laparoscopy has been applied safely and effectively as a screening tool in stable patients with acute trauma. Because of the large number of missed injuries when used as a diagnostic tool, its value in this context is limited. Laparoscopy has been reported infrequently as a therapeutic tool in selected patients, and its use in this context requires further study.
Simulation-based comprehensive benchmarking of RNA-seq aligners
Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R
2018-01-01
Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
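Read-level accuracy in a simulation benchmark like this reduces to comparing each aligner's reported position against the simulated truth. A simplified sketch with hypothetical read IDs and positions; real benchmarks also score strand, splice junctions, and multi-mappings:

```python
def read_level_stats(true_pos, aligned_pos, tolerance=5):
    # a read counts as correct if reported within `tolerance` bases of truth
    tp = fp = fn = 0
    for rid, truth in true_pos.items():
        reported = aligned_pos.get(rid)
        if reported is None:
            fn += 1          # read left unaligned
        elif abs(reported - truth) <= tolerance:
            tp += 1          # aligned to (near) the simulated origin
        else:
            fp += 1          # aligned, but to the wrong locus
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / len(true_pos)
    return precision, recall

# hypothetical simulated truth vs. one aligner's output
truth = {"r1": 100, "r2": 500, "r3": 900, "r4": 1200}
reported = {"r1": 102, "r2": 610, "r3": 900}  # r4 unaligned
precision, recall = read_level_stats(truth, reported)
```

Scoring the same simulated reads through each aligner under both default and tuned parameters is exactly the kind of comparison that exposed the default-settings weaknesses the abstract mentions.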
NASA Astrophysics Data System (ADS)
Dubroca, Guilhem; Richert, Michaël.; Loiseaux, Didier; Caron, Jérôme; Bézy, Jean-Loup
2015-09-01
To increase the accuracy of Earth-observation spectro-imagers, it is necessary to achieve high levels of depolarization of the incoming beam. The preferred device in space instruments is the so-called polarization scrambler, made of birefringent crystal wedges arranged in a single or dual Babinet. Today, with required radiometric accuracies on the order of 0.1%, it is necessary to develop tools that quickly find optimal, low-sensitivity solutions and measure their performance with a high level of accuracy.
Overview of the Development for a Suite of Low-Thrust Trajectory Analysis Tools
NASA Technical Reports Server (NTRS)
Kos, Larry D.; Polsgrove, Tara; Hopkins, Randall; Thomas, Dan; Sims, Jon A.
2006-01-01
A NASA intercenter team has developed a suite of low-thrust trajectory analysis tools to make a significant improvement in three major facets of low-thrust trajectory and mission analysis. These are: 1) ease of use, 2) ability to more robustly converge to solutions, and 3) higher fidelity modeling and accuracy of results. Due mostly to the short duration of the development, the team concluded that a suite of tools was preferred over having one integrated tool. This tool-suite, their characteristics, and their applicability will be described. Trajectory analysts can read this paper and determine which tool is most appropriate for their problem.
NASA Astrophysics Data System (ADS)
Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane
2018-04-01
The paper presents a new approach to assessing the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. The approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions were considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The influence of the parameter ranges on meta-model accuracy was also investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a formidable tool to support the design of these structures. Furthermore, by accommodating the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.
Achieving optimum diffraction based overlay performance
NASA Astrophysics Data System (ADS)
Leray, Philippe; Laidler, David; Cheng, Shaunee; Coogans, Martyn; Fuchs, Andreas; Ponomarenko, Mariya; van der Schaar, Maurits; Vanoppen, Peter
2010-03-01
Diffraction Based Overlay (DBO) metrology has been shown to have significantly reduced Total Measurement Uncertainty (TMU) compared to Image Based Overlay (IBO), primarily due to having no measurable Tool Induced Shift (TIS). However, the advantages of having no measurable TIS can be outweighed by increased susceptibility to WIS (Wafer Induced Shift) caused by target damage, process non-uniformities and variations. The path to optimum DBO performance lies in having well characterized metrology targets, which are insensitive to process non-uniformities and variations, in combination with optimized recipes which take advantage of advanced DBO designs. In this work we examine the impact of different degrees of process non-uniformity and target damage on DBO measurement gratings and study their impact on overlay measurement accuracy and precision. Multiple wavelength and dual polarization scatterometry are used to characterize the DBO design performance over the range of process variation. In conclusion, we describe the robustness of DBO metrology to target damage and show how to exploit the measurement capability of a multiple wavelength, dual polarization scatterometry tool to ensure the required measurement accuracy for current and future technology nodes.
Fitting Flux Ropes to a Global MHD Solution: A Comparison of Techniques. Appendix 1
NASA Technical Reports Server (NTRS)
Riley, Pete; Linker, J. A.; Lionello, R.; Mikic, Z.; Odstrcil, D.; Hidalgo, M. A.; Cid, C.; Hu, Q.; Lepping, R. P.; Lynch, B. J.
2004-01-01
Flux rope fitting (FRF) techniques are an invaluable tool for extracting information about the properties of a subclass of CMEs in the solar wind. However, it has proven difficult to assess their accuracy, since the underlying global structure of the CME cannot be independently determined from the data. In contrast, large-scale MHD simulations of CME evolution can provide both a global view and localized time series at specific points in space. In this study we apply 5 different fitting techniques to 2 hypothetical time series derived from MHD simulation results. Independent teams performed the analysis of the events in "blind tests", for which no information other than the time series was provided. From the results, we infer the following: (1) accuracy decreases markedly with increasingly glancing encounters; (2) correct identification of the boundaries of the flux rope can be a significant limiter; and (3) results from techniques that infer global morphology must be viewed with caution. In spite of these limitations, FRF techniques remain a useful tool for describing in situ observations of flux rope CMEs.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. By these three criteria, NASA's HZETRN/QMSFRG shows a very high degree of accuracy.
Comparison of quality control software tools for diffusion tensor imaging.
Liu, Bilan; Zhu, Tong; Zhong, Jianhui
2015-04-01
Image quality in diffusion tensor imaging (DTI) is critical for image interpretation, diagnostic accuracy and efficiency. However, DTI is susceptible to numerous detrimental artifacts that may impair the reliability and validity of the obtained data. Although many quality control (QC) software tools have been developed and are widely used, each with its own tradeoffs, there is still no general agreement on an image quality control routine for DTI, and the practical impact of these tradeoffs is not well studied. An objective comparison that identifies the pros and cons of each QC tool will help users make the best choice among tools for specific DTI applications. This study aims to quantitatively compare the effectiveness of three popular QC tools: DTIStudio (Johns Hopkins University), DTIPrep (University of North Carolina at Chapel Hill, University of Iowa and University of Utah) and TORTOISE (National Institutes of Health). Both synthetic and in vivo human brain data were used to quantify the adverse effects of major DTI artifacts on tensor calculation, as well as the effectiveness of the different QC tools in identifying and correcting these artifacts. The technical basis of each tool is discussed, and the ways in which particular techniques affect the output of each tool are analyzed. The different functions and I/O formats that the three QC tools provide for building a general DTI processing pipeline and integrating with other popular image processing tools are also discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.
2009-01-01
Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.
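Assigning feathers to molting sites from multi-isotope signatures is, at its simplest, a supervised classification problem scored by cross-validated accuracy. The study's actual assignment model is not specified here, so the sketch below uses a nearest-centroid stand-in with entirely hypothetical isotope values and site names:

```python
import math

def centroids(samples):
    # mean isotope signature per molting site
    by_site = {}
    for site, vec in samples:
        by_site.setdefault(site, []).append(vec)
    return {site: [sum(col) / len(vecs) for col in zip(*vecs)]
            for site, vecs in by_site.items()}

def assign(vec, cents):
    # assign a feather to the site with the nearest centroid
    return min(cents, key=lambda site: math.dist(vec, cents[site]))

def loo_accuracy(samples):
    # leave-one-out: refit centroids without the held-out feather each time
    hits = 0
    for i, (site, vec) in enumerate(samples):
        hits += assign(vec, centroids(samples[:i] + samples[i + 1:])) == site
    return hits / len(samples)

# hypothetical (d13C, dD) values for feathers of known molting origin
samples = [
    ("siteA", (-20.0, -60.0)), ("siteA", (-21.0, -58.0)), ("siteA", (-19.0, -61.0)),
    ("siteB", (-10.0, -30.0)), ("siteB", (-11.0, -29.0)), ("siteB", (-9.0, -31.0)),
]
accuracy = loo_accuracy(samples)
```

High intra-individual variability, as reported for Collared Plover, inflates the within-site scatter relative to the between-site centroid separation, which is exactly the regime in which this kind of classifier's accuracy collapses; adding informative isotopes widens the separation, matching the gains the study observed.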
ERIC Educational Resources Information Center
Ford, Jeremy W.; Missall, Kristen N.; Hosp, John L.; Kuhle, Jennifer L.
2016-01-01
Advances in maze selection curriculum-based measurement have led to several published tools with technical information for interpretation (e.g., norms, benchmarks, cut-scores, classification accuracy) that have increased their usefulness for universal screening. A range of scoring practices have emerged for evaluating student performance on maze…
Efficient prediction of human protein-protein interactions at a global scale.
Schoenrock, Andrew; Samanfar, Bahram; Pitre, Sylvain; Hooshyar, Mohsen; Jin, Ke; Phillips, Charles A; Wang, Hui; Phanse, Sadhna; Omidi, Katayoun; Gui, Yuan; Alamgir, Md; Wong, Alex; Barrenäs, Fredrik; Babu, Mohan; Benson, Mikael; Langston, Michael A; Green, James R; Dehne, Frank; Golshani, Ashkan
2014-12-10
Our knowledge of global protein-protein interaction (PPI) networks in complex organisms such as humans is hindered by technical limitations of current methods. On the basis of short co-occurring polypeptide regions, we developed a tool called MP-PIPE capable of predicting a global human PPI network within 3 months. With a recall of 23% at a precision of 82.1%, we predicted 172,132 putative PPIs. We demonstrate the usefulness of these predictions through a range of experiments. The speed and accuracy associated with MP-PIPE can make this a potential tool to study individual human PPI networks (from genomic sequences alone) for personalized medicine.
PROACT user's guide: how to use the pallet recovery opportunity analysis computer tool
E. Bradley Hager; A.L. Hammett; Philip A. Araman
2003-01-01
Pallet recovery projects are environmentally responsible and offer promising business opportunities. The Pallet Recovery Opportunity Analysis Computer Tool (PROACT) assesses the operational and financial feasibility of potential pallet recovery projects. The use of project specific information supplied by the user increases the accuracy and the validity of the...
Development of a patient-specific surgical simulator for pediatric laparoscopic procedures.
Saber, Nikoo R; Menon, Vinay; St-Pierre, Jean C; Looi, Thomas; Drake, James M; Cyril, Xavier
2014-01-01
The purpose of this study is to develop and evaluate a pediatric patient-specific surgical simulator for the planning, practice, and validation of laparoscopic surgical procedures prior to intervention, focusing initially on the choledochal cyst resection and reconstruction scenario. The simulator comprises software elements including a deformable-body physics engine, virtual surgical tools, and abdominal organs. Hardware components such as haptics-enabled hand controllers and a representative endoscopic tool have also been integrated. The prototype is able to perform a number of surgical tasks, and further development is under way to simulate the complete procedure with acceptable fidelity and accuracy.
NASA Astrophysics Data System (ADS)
Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil
2018-03-01
The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.
SU-F-J-30: Application of Intra-Fractional Imaging for Pretreatment CBCT of Breath-Hold Lung SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, D; Jermoumi, M; Mehta, V
2016-06-15
Purpose: Clinical implementation of gated lung SBRT requires tools to verify the accuracy of target positioning on a daily basis. This is a particular challenge on Elekta linacs, where the XVI imaging system does not interface directly with any commercial gating solution. In this study, we used Elekta's intra-fractional imaging functionality to perform pretreatment CBCT verifications and evaluated both the image quality and the gating accuracy. Methods: To use intra-fraction imaging tools for pretreatment verifications, we planned a 360-degree arc with a 1mm x 5mm MLC opening. This beam was designed to drive the gantry during the gated CBCT data collection. A Catphan phantom was used to evaluate the image quality of the intra-fractional CBCT. A CIRS lung phantom with a 3cm sphere insert and a moving chest plate, programmed with a simulated breath-hold breathing pattern, was used to check the gating accuracy. A C-Rad CatalystHD surface mapping system was used to provide the gating signal. Results: The total delivery time of the arc was 90 seconds. The uniformity and low-contrast resolution for the intra-fractional CBCT were 1.5% and 3.6%, respectively; the values for the regular CBCT were 1.7% and 2.5%. The spatial resolution was 7 line-pairs/cm and the 3D spatial integrity was within 1mm for the intra-fractional CBCT. The gated CBCT clearly demonstrated the accuracy of the gating image acquisition. Conclusion: The intra-fraction CBCT capabilities on an Elekta linac can be used to acquire pretreatment gated images to verify the accuracy of patient positioning. This imaging capability should provide for accurate patient alignments for the delivery of lung SBRT. This research was partially supported by Elekta.
Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min
2012-01-01
A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol(-1)) is achieved for all 92 calculated organic Y-NO homolysis BDE calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol(-1) to 0.15 and 0.18 kcal·mol(-1), respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol(-1). This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.
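Once the RBF centers and widths are fixed, training the output layer of an RBFNN of the kind described reduces to a linear least-squares problem. Below is a minimal numpy sketch on toy data; the actual descriptors, the SOFM-based center selection, and the BDE targets from the study are not reproduced here:

```python
import numpy as np

def rbf_design(X, centers, gamma):
    # Gaussian RBF features: exp(-gamma * ||x - c||^2) for every sample/center pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbfnn(X, y, centers, gamma, reg=1e-8):
    # Output-layer weights by regularized linear least squares
    Phi = rbf_design(X, centers, gamma)
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict_rbfnn(X, centers, gamma, w):
    return rbf_design(X, centers, gamma) @ w

# Toy stand-in for "descriptors -> correction target" regression
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]
centers = X[::6]                      # naive center selection (the paper uses SOFM clustering)
w = fit_rbfnn(X, y, centers, gamma=4.0)
mad = np.abs(predict_rbfnn(X, centers, 4.0, w) - y).mean()
```

The design choice mirrors the paper's structure: a nonlinear feature map (the RBF layer) followed by a linear readout, so only the readout needs solving.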
Balancing geo-privacy and spatial patterns in epidemiological studies.
Chen, Chien-Chou; Chuang, Jen-Hsiang; Wang, Da-Wei; Wang, Chien-Min; Lin, Bo-Cheng; Chan, Ta-Chien
2017-11-08
To balance the protection of geo-privacy and the accuracy of spatial patterns, we developed a geo-spatial tool (GeoMasker) intended to mask the residential locations of patients or cases in a geographic information system (GIS). To elucidate the effects of the geo-masking parameters, we applied the tool to 2010 dengue epidemic data from Taiwan, testing its performance in an empirical situation. The similarity of pre- and post-masking spatial patterns was measured by D statistics under a 95% confidence interval. In the empirical study, different magnitudes of anonymisation (estimated K-anonymity ≥10 and ≥100) were achieved, and different degrees of agreement between the pre- and post-masking patterns were evaluated. The application is beneficial for public health workers and researchers when processing data containing individuals' spatial information.
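Random-displacement ("donut") masking is one common way such geo-masking tools perturb residential coordinates while preserving broad spatial patterns. The sketch below is a generic illustration, not GeoMasker's actual algorithm, and the radius bounds are placeholder values:

```python
import math
import random

def mask_point(lat, lon, r_min_m, r_max_m, rng):
    """Displace a point by a random bearing and a random distance in
    [r_min_m, r_max_m] metres. Larger radii raise the achievable
    K-anonymity at the cost of spatial-pattern fidelity."""
    r = rng.uniform(r_min_m, r_max_m)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320.0                       # metres -> degrees latitude
    dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

rng = random.Random(42)
# Illustrative coordinates near Taipei; 100-500 m displacement band
masked = [mask_point(25.05, 121.55, 100, 500, rng) for _ in range(5)]
```

The lower bound enforces a minimum displacement (so the true address is never published), while the upper bound caps the distortion of cluster statistics such as the D statistic used in the study.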
Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules
Desai, Aarti; Singh, Vivek K.; Jere, Abhay
2016-01-01
Introduction Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. Recent ban on animal testing for cosmetics demands for alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools i.e. high false positive rate and/or limited coverage. Results The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools including VEGA (accuracy = 45.00% and CCR = 54.17% with ‘High’ reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although, TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), the coverage was very low (only 10 out of 77 molecules were predicted reliably). Conclusions Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to cosmetic/ dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing. PMID:27271321
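The reported metrics are mutually consistent with a 77-molecule confusion matrix of TP=21, FN=9, TN=37, FP=10 (counts back-calculated here from the percentages; they are not stated in the abstract). Treating CCR as balanced accuracy reproduces all four figures:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics; CCR is taken as the mean of
    sensitivity and specificity (balanced accuracy)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    ccr = (sens + spec) / 2.0
    return acc, ccr, sens, spec

# Counts back-calculated from the reported SkinSense percentages (77 molecules)
acc, ccr, sens, spec = diagnostic_metrics(tp=21, fn=9, tn=37, fp=10)
```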
SMART micro-scissors with dual motors and OCT sensors (Conference Presentation)
NASA Astrophysics Data System (ADS)
Yeo, Chaebeom; Jang, Seonjin; Park, Hyun-cheol; Gehlbach, Peter L.; Song, Cheol
2017-02-01
Various end-effectors for microsurgical instruments have been developed and studied, and many robotic approaches to stabilizing the tool tip have been investigated, such as the steady-hand robot system, Micron, and the SMART system. In our previous study, a horizontal SMART micro-scissors with a common-path swept-source OCT distance sensor and one linear piezoelectric (PZT) motor was demonstrated as a microsurgical system. Because the outer needle is connected to a mechanical handle and moved manually to engage the tool tip, the tool-tip position changes instantaneously during engagement. This undesirable motion can cause unexpected tissue damage and reduce surgical accuracy. In this study, we propose a prototype horizontal SMART micro-scissors with dual OCT sensors and two motors to improve tremor cancellation. The dual OCT sensors provide two distance measurements: the front OCT sensor detects the distance from the sample surface to the tool tip, while the rear OCT sensor tracks the current PZT motor movement, acting like a motor encoder. The PZT motor compensates for hand tremor through a feedback control loop. The manual tool-tip engagement of the previous SMART system is replaced by electrical engagement using a squiggle motor. Compared with the previous study, this system showed better performance in hand-tremor reduction. Based on these results, the SMART system with automatic engagement may become increasingly valuable in microsurgical instrumentation.
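The front/rear sensor arrangement amounts to a feedback loop: the error comes from the front OCT reading (tip-to-surface distance), and actuation goes through the PZT motor whose position the rear reading reports. One proportional-control step, with illustrative gains and distances (not the SMART system's actual control law):

```python
def tremor_compensation_step(target_mm, front_dist_mm, rear_pos_mm, gain=0.8):
    """One iteration of a proportional feedback loop. front_dist_mm is the
    tip-to-surface distance from the front OCT sensor; rear_pos_mm is the
    motor position reported by the rear OCT sensor (the 'encoder').
    Gain and geometry are illustrative placeholder values."""
    error = front_dist_mm - target_mm        # positive: tip too far from tissue
    command = rear_pos_mm + gain * error     # extend the motor to close the gap
    return command

cmd = tremor_compensation_step(target_mm=0.5, front_dist_mm=0.8, rear_pos_mm=2.0)
```

In a real controller this step would run at the OCT A-scan rate, so hand tremor (a few Hz) is rejected by many small corrections per tremor cycle.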
ERIC Educational Resources Information Center
Lofthouse, Rachael E.; Lindsay, William R.; Totsika, Vasiliki; Hastings, Richard P.; Boer, Douglas P.; Haaven, James L.
2013-01-01
Background: The purpose of the present study was to add to the literature on the predictive accuracy of a dynamic intellectual disability specific risk assessment tool. Method: A dynamic risk assessment for sexual reoffending (ARMIDILO-S), a static risk assessment for sexual offending (STATIC-99), and a static risk assessment for violence…
ERIC Educational Resources Information Center
Ranalli, Jim; Link, Stephanie; Chukharev-Hudilainen, Evgeny
2017-01-01
An increasing number of studies on the use of tools for automated writing evaluation (AWE) in writing classrooms suggest growing interest in their potential for formative assessment. As with all assessments, these applications should be validated in terms of their intended interpretations and uses. A recent argument-based validation framework…
Diagnosing Chronic Pancreatitis: Comparison and Evaluation of Different Diagnostic Tools.
Issa, Yama; van Santvoort, Hjalmar C; van Dieren, Susan; Besselink, Marc G; Boermeester, Marja A; Ahmed Ali, Usama
2017-10-01
This study aims to compare the M-ANNHEIM, Büchler, and Lüneburg diagnostic tools for chronic pancreatitis (CP). A cross-sectional analysis of the development of CP was performed in a prospectively collected multicenter cohort including 669 patients after a first episode of acute pancreatitis. We compared the individual components of the M-ANNHEIM, Büchler, and Lüneburg tools and the agreement between tools, and estimated diagnostic accuracy using Bayesian latent-class analysis. A total of 669 patients with acute pancreatitis followed up for a median period of 57 (interquartile range, 42-70) months were included. Chronic pancreatitis was diagnosed in 50 patients (7%), 59 patients (9%), and 61 patients (9%) by the M-ANNHEIM, Lüneburg, and Büchler tools, respectively. The overall agreement between these tools was substantial (κ = 0.75). Differences between the tools regarding the following criteria led to significant changes in the total number of diagnoses of CP: abdominal pain, recurrent pancreatitis, moderate to marked ductal lesions, endocrine and exocrine insufficiency, pancreatic calcifications, and pancreatic pseudocysts. The Büchler tool had the highest sensitivity (94%), followed by the M-ANNHEIM (87%) and the Lüneburg tool (81%). Differences between diagnostic tools for CP are mainly attributed to the presence of clinical symptoms, endocrine insufficiency, and certain morphological complications.
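The between-tool agreement (κ = 0.75) is a Cohen's kappa, which corrects the raw agreement for the agreement expected by chance. A sketch of the computation on an illustrative 2x2 agreement table; the per-pair tables behind the study's κ are not given in the abstract:

```python
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for two raters/tools on a binary diagnosis."""
    n = both_pos + a_only + b_only + both_neg
    po = (both_pos + both_neg) / n            # observed agreement
    pa = (both_pos + a_only) / n              # tool A positive rate
    pb = (both_pos + b_only) / n              # tool B positive rate
    pe = pa * pb + (1 - pa) * (1 - pb)        # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative counts only: 669 patients, tool A flags 50, tool B flags 59
kappa = cohens_kappa(both_pos=45, a_only=5, b_only=14, both_neg=605)
```

Because CP prevalence is low here, raw agreement is high by construction; kappa is the appropriate scale on which "substantial" (roughly 0.61-0.80) is judged.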
Storey, Helen L; van Pelt, Maurits H; Bun, Socheath; Daily, Frances; Neogi, Tina; Thompson, Matthew; McGuire, Helen; Weigl, Bernhard H
2018-03-22
Screening for diabetes in low-resource countries is a growing challenge, necessitating tests that are resource and context appropriate. The aim of this study was to determine the diagnostic accuracy of a self-administered urine glucose test strip compared with alternative diabetes screening tools in a low-resource setting in Cambodia. Prospective cross-sectional study. Members of the Borey Santepheap Community in Cambodia (Phnom Penh Municipality, District Dangkao, Commune Chom Chao). All households on randomly selected streets were invited to participate, and adults at least 18 years of age living in the study area were eligible for inclusion. The accuracy of self-administered urine glucose test strip positivity, Hemoglobin A1c (HbA1c) >6.5% and capillary fasting blood glucose (cFBG) measurement ≥126 mg/dL was assessed against a composite reference standard of cFBG measurement ≥200 mg/dL or venous blood glucose 2 hours after oral glucose tolerance test (OGTT) ≥200 mg/dL. Of the 1289 participants, 234 (18%) had diabetes based on either the cFBG measurement (74, 32%) or the OGTT (160, 68%). The urine glucose test strip was 14% sensitive and 99% specific, and failed to identify 201 individuals with diabetes while falsely identifying 7 without diabetes. Those missed by the urine glucose test strip had lower venous fasting blood glucose, lower venous blood glucose 2 hours after OGTT and lower HbA1c compared with those correctly diagnosed. Low-cost, easy-to-use diabetes tools are essential for low-resource communities with minimal infrastructure. While the urine glucose test strip may identify persons with diabetes who might otherwise go undiagnosed in these settings, its poor sensitivity cannot be ignored. The massive burden of diabetes in low-resource settings demands improvements in test technologies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
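The reported 14% sensitivity and 99% specificity follow directly from the counts in the abstract (234 with diabetes, 201 missed, 7 false positives among 1289 participants):

```python
def sens_spec(total, positives, false_negatives, false_positives):
    """Sensitivity and specificity reconstructed from aggregate counts."""
    tp = positives - false_negatives               # detected cases
    tn = (total - positives) - false_positives     # correctly cleared non-cases
    return tp / positives, tn / (total - positives)

# Counts taken from the abstract
sens, spec = sens_spec(total=1289, positives=234,
                       false_negatives=201, false_positives=7)
```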
Foo, Wen Chin; Widjaja, Effendi; Khong, Yuet Mei; Gokhale, Rajeev; Chan, Sui Yung
2018-02-20
Extemporaneous oral preparations are routinely compounded in the pharmacy due to a lack of suitable formulations for special populations. Such small-scale pharmacy preparations also present an avenue for individualized pharmacotherapy. Orodispersible films (ODF) have increasingly been evaluated as a suitable dosage form for extemporaneous oral preparations. Nevertheless, as with all other extemporaneous preparations, safety and quality remain a concern. Although the United States Pharmacopeia (USP) recommends analytical testing of compounded preparations for quality assurance, pharmaceutical assays are typically not performed routinely for such non-sterile pharmacy preparations because of the complexity and high cost of conventional assay methods such as high-performance liquid chromatography (HPLC). Spectroscopic methods including Raman, infrared and near-infrared spectroscopy have been successfully applied as quality control tools in industry. The state-of-the-art benchtop spectrometers used in those studies have the advantage of superior resolution and performance, but are not suitable for use in a small-scale pharmacy setting. In this study, we investigated the application of a miniaturized near-infrared (NIR) spectrometer as a quality control tool for identification and quantification of drug content in extemporaneous ODFs. Miniaturized NIR spectroscopy is suitable for small-scale pharmacy applications in view of its small size, portability, simple user interface, rapid measurement and real-time prediction results. Its challenge, however, is lower resolution compared with state-of-the-art benchtop equipment. We successfully developed NIR spectroscopy calibration models for identification of ODFs containing five different drugs, and for quantification of drug content in ODFs containing 2-10 mg ondansetron (OND). The qualitative model for drug identification produced 100% prediction accuracy.
The quantitative model to predict OND drug content in ODFs was divided into two calibrations for improved accuracy: Calibration I covered the 2-4 mg range and Calibration II the 4-10 mg range. Validation was performed for method accuracy, linearity and precision. In conclusion, this study demonstrates the feasibility of miniaturized NIR spectroscopy as a quality control tool for small-scale pharmacy preparations. Owing to its non-destructive nature, every dosage unit can be tested, affording a positive impact on patient safety. Copyright © 2017 Elsevier B.V. All rights reserved.
Transportable Manned and Robotic Digital Geophysical Mapping Tow Vehicle, Phase 1
2007-08-01
by using the UX PROCESS QC/QA tools to evaluate quality. Areas evaluated included induced noise, position and track accuracy, synchronization/latency ... tools. To gain additional data on productivity and the effect of alternate direction of travel we mapped an unobstructed subset of the Grid 1-4 area ...
MODSNOW-Tool: an operational tool for daily snow cover monitoring using MODIS data
NASA Astrophysics Data System (ADS)
Gafurov, Abror; Lüdtke, Stefan; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Schöne, Tilo; Schmidt, Sebastian; Kalashnikova, Olga; Merz, Bruno
2017-04-01
Spatially distributed snow cover information in mountain areas is extremely important for water storage estimations, seasonal water availability forecasting, and the assessment of snow-related hazards (e.g. enhanced snow-melt following intensive rains, or avalanche events). Moreover, spatially distributed snow cover information can be used to calibrate and/or validate hydrological models. We present the MODSNOW-Tool, an operational monitoring tool that offers a user-friendly application for catchment-based operational snow cover monitoring. The application automatically downloads and processes freely available daily Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data. The MODSNOW-Tool uses a step-wise approach for cloud removal and delivers cloud-free snow cover maps for the selected river basins, including basin-specific snow cover extent statistics. The accuracy of cloud-eliminated MODSNOW snow cover maps was validated for 84 almost cloud-free days in the Karadarya river basin in Central Asia, and an average accuracy of 94% was achieved. The MODSNOW-Tool can be used in operational and non-operational mode. In the operational mode, the tool is set up as a scheduled task on a local computer, executing automatically without user interaction and delivering snow cover maps on a daily basis. In the non-operational mode, the tool can be used to process historical time series of snow cover maps. The MODSNOW-Tool is currently implemented and in use at the national hydrometeorological services of four Central Asian states - Kazakhstan, Kyrgyzstan, Uzbekistan and Turkmenistan - where it supports seasonal water availability forecasts.
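One typical step in such step-wise cloud removal is temporal gap filling, where a cloudy pixel inherits the most recent cloud-free observation. A minimal sketch below; the class codes and the single-step logic are illustrative, not the MODSNOW-Tool's full procedure, which combines several spatial and temporal filters:

```python
import numpy as np

CLOUD, LAND, SNOW = 0, 1, 2   # illustrative class codes

def fill_clouds(stack):
    """Temporal cloud filling over a (days, rows, cols) classification stack:
    each cloudy pixel takes the most recent non-cloud value from earlier days."""
    out = stack.copy()
    for t in range(1, out.shape[0]):
        cloudy = out[t] == CLOUD
        out[t][cloudy] = out[t - 1][cloudy]   # inherit yesterday's (already filled) value
    return out

# Three days over a 1x2 pixel scene: day 1 is fully cloudy, day 2 partially
days = np.array([[[SNOW, LAND]],
                 [[CLOUD, CLOUD]],
                 [[CLOUD, SNOW]]])
filled = fill_clouds(days)
```

Because each day is filled before the next is processed, a gap of several cloudy days is bridged by the last clear observation, which is why validation is done against almost cloud-free days.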
Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.
Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D
2018-04-12
Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence, and thereby reduce the triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated, less than 10% total body surface area burns, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles, and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. 
Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13 overtriaged) following implementation of the web tool (p = 0.017 for combined errors). Overtriage dropped further from baseline to 10/150 after the second test of change (p = 0.005). The total number of triaged patients dropped from 92.3/week to 75.5/week after the second test of change. There was no statistically significant change in the undertriage rate. The combination of web tool implementation and protocol refinement decreased the combined triage error rate by over 50% (from 16.7% to 7.9%). We developed and tested a web tool that improved triage accuracy and provided a sustainable method to enact future quality improvement. This web tool and QI framework would be easily expandable to other hospitals. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
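Automating level assignment means encoding the activation protocol as a pure function of the EMS-provided fields, so the same inputs always yield the same level. A sketch with placeholder criteria; the actual Vanderbilt protocol is not reproduced in the abstract, and the burn rule below only echoes the <10% TBSA error point the QI team targeted:

```python
def assign_trauma_level(sbp, gcs, penetrating_torso, tbsa_burn_pct):
    """Map EMS-reported fields to an activation level (1 = highest).
    All thresholds here are illustrative placeholders, not the
    institution's actual activation protocol."""
    if sbp < 90 or gcs <= 8 or penetrating_torso:
        return 1
    if gcs <= 13 or tbsa_burn_pct >= 10:   # isolated small burns stay below level 2
        return 2
    return 3

level = assign_trauma_level(sbp=85, gcs=14, penetrating_torso=False, tbsa_burn_pct=0)
```

The QI benefit of this structure is auditability: every triage decision can be replayed against the recorded inputs, which is exactly what makes a run chart of triage accuracy possible.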
Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart
2017-06-01
This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones, in two phases. In the first phase, the effect of guide design and of oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and the average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that, for one model osteotomy, the use of guides resulted in a more accurate cut than the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed, qualitative pre-operative plan and (2) accurate transfer of the planning to the operating room with patient-specific guides, through accurate guidance of the surgical tools to perform the desired cuts.
Abdalla, G; Fawzi Matuk, R; Venugopal, V; Verde, F; Magnuson, T H; Schweitzer, M A; Steele, K E
2015-08-01
To search the literature for further evidence for the use of magnetic resonance venography (MRV) in the detection of suspected deep vein thrombosis (DVT), and to re-evaluate the accuracy of MRV in this setting. PubMed, EMBASE, Scopus, Cochrane, and Web of Science were searched. Study quality and the risk of bias were evaluated using QUADAS-2. A random-effects meta-analysis, including subgroup and sensitivity analyses, was performed. The search identified 23 observational studies, all from academic centres; sixteen articles were included in the meta-analysis. The summary estimates for MRV as a non-invasive diagnostic tool revealed a sensitivity of 93% (95% confidence interval [CI]: 89% to 95%) and a specificity of 96% (95% CI: 94% to 97%). The heterogeneity of the studies was high: inconsistency (I2) for sensitivity and specificity was 80.7% and 77.9%, respectively. The studies investigating the use of MRV in the detection of suspected DVT did not offer further evidence to support replacing ultrasound with MRV as the first-line investigation. However, MRV may offer an alternative tool for the detection/diagnosis of DVT in patients for whom ultrasound is inadequate or not feasible (such as obese patients). Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
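Summary estimates of sensitivity and specificity in such meta-analyses are pooled on the logit scale, where study estimates are approximately normal. A simplified fixed-effect inverse-variance sketch with made-up study data; the review itself used a random-effects model, and the study-level counts are not in the abstract:

```python
import math

def pool_logit(props, ns):
    """Inverse-variance pooling of proportions on the logit scale,
    back-transformed to a proportion (fixed-effect simplification)."""
    num = den = 0.0
    for p, n in zip(props, ns):
        x = min(max(p * n, 0.5), n - 0.5)      # continuity-safe event count
        logit = math.log(x / (n - x))
        var = 1.0 / x + 1.0 / (n - x)          # approximate variance of the logit
        num += logit / var
        den += 1.0 / var
    return 1.0 / (1.0 + math.exp(-num / den))

# Hypothetical per-study sensitivities and sample sizes
pooled_sens = pool_logit([0.95, 0.90, 0.93], [100, 80, 120])
```

A random-effects version would add a between-study variance term to each weight, which is what absorbs the high I2 reported here.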
Sikder, Shameema; Luo, Jia; Banerjee, P Pat; Luciano, Cristian; Kania, Patrick; Song, Jonathan C; Kahtani, Eman S; Edward, Deepak P; Towerki, Abdul-Elah Al
2015-01-01
To evaluate a haptic-based simulator, MicroVisTouch™, as an assessment tool for capsulorhexis performance in cataract surgery. This prospective, unmasked, nonrandomized study was conducted at two academic institutions: the Wilmer Eye Institute at Johns Hopkins Medical Center (Baltimore, MD, USA) and King Khaled Eye Specialist Hospital (Riyadh, Saudi Arabia). It evaluated capsulorhexis simulator performance in 78 ophthalmology residents in the US and Saudi Arabia in a first round of testing and in 40 residents in a second, follow-up round. Four variables (circularity, accuracy, fluency, and overall) were tested by the simulator and graded on a 0-100 scale; circularity (42%), accuracy (55%), and fluency (3%) were weighted to give the overall score. Capsulorhexis performance was retested in the original cohort 6 months after baseline assessment. Average scores in all measured metrics demonstrated statistically significant improvement after baseline assessment (except for circularity, which trended toward improvement). A reduction in standard deviation and an improvement in process capability indices over the 6-month period were also observed. An interval objective improvement in capsulorhexis skill on a haptic-enabled cataract surgery simulator was associated with intervening operating room experience. Further work investigating formalized simulator training programs requiring independent simulator use is needed to determine the simulator's usefulness as an evaluation tool.
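Read directly, the reported weighting implies an overall score of the following form. This is an inference from the stated percentages; the simulator's exact scoring formula is not given in the abstract, and the sub-scores below are illustrative:

```python
def overall_score(circularity, accuracy, fluency):
    """Weighted composite on a 0-100 scale, using the reported
    composition: circularity 42%, accuracy 55%, fluency 3%."""
    return 0.42 * circularity + 0.55 * accuracy + 0.03 * fluency

# Illustrative sub-scores for one simulated capsulorhexis attempt
score = overall_score(circularity=80.0, accuracy=90.0, fluency=70.0)
```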
Northern Hemisphere observations of ICRF sources on the USNO stellar catalogue frame
NASA Astrophysics Data System (ADS)
Fienga, A.; Andrei, A. H.
2004-06-01
The most recent USNO stellar catalogue, the USNO B1.0 (Monet et al. 2003), provides positions for 1,042,618,261 objects, with a published astrometric accuracy of 200 mas and five-band magnitudes with 0.3 mag accuracy. Its completeness is believed to extend to 21st magnitude in the V band. Such a catalogue is a very good tool for astrometric reduction. This work investigates the accuracy of the USNO B1.0 link to the ICRF and gives an estimate of its internal and external accuracies by comparison with different catalogues and by computation of ICRF source positions using USNO B1.0 star positions.
NASA Astrophysics Data System (ADS)
Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés
2014-12-01
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dosage simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. 
The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
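The core of any such dose engine is transporting particles stochastically and tallying deposited energy. The following is a deliberately toy 1D illustration of the Monte Carlo principle (exponential path lengths, half of the remaining energy deposited per interaction); nothing in it models the actual IOERT electron physics or the customized simulator described above:

```python
import random

def mc_depth_dose(n_particles=20000, mfp=1.0, depth_bins=20, bin_w=0.25, seed=1):
    """Toy 1D Monte Carlo depth-dose tally. Each particle takes steps drawn
    from an exponential distribution (mean free path mfp) and deposits half
    its remaining energy at each interaction until it exits or is exhausted."""
    rng = random.Random(seed)
    dose = [0.0] * depth_bins
    for _ in range(n_particles):
        z, e = 0.0, 1.0
        while e > 1e-3:
            z += rng.expovariate(1.0 / mfp)
            b = int(z / bin_w)
            if b >= depth_bins:
                break                      # particle leaves the scoring region
            dep = 0.5 * e
            dose[b] += dep
            e -= dep
    return dose

dose = mc_depth_dose()
```

The statistical character is the relevant point: dose uncertainty falls only as the square root of the particle count, which is why a fast, customized simulator is needed to stay under an interactive time budget.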
Nasiri, Jaber; Naghavi, Mohammad Reza; Kayvanjoo, Amir Hossein; Nasiri, Mojtaba; Ebrahimi, Mansour
2015-03-07
For the first time, the prediction accuracies of several supervised and unsupervised algorithms were evaluated in an SSR-based DNA fingerprinting study of a pea collection containing 20 cultivars and 57 wild samples. In general, according to the 10 attribute weighting models, the SSR alleles PEAPHTAP-2 and PSBLOX13.2-1 were the two most important attributes for discriminating among eight different species and subspecies of the genus Pisum. In addition, K-Medoids unsupervised clustering run on the Chi-squared dataset exhibited the best prediction accuracy (83.12%), while the lowest accuracy (25.97%) was obtained when the K-Means model was run on the FCdb database. Despite some fluctuations, the overall accuracies of the tree induction models were high for many algorithms, and the attributes PSBLOX13.2-3 and PEAPHTAP could successfully separate Pisum fulvum accessions and cultivars from the others when two selected decision trees were taken into account. The other supervised algorithms likewise exhibited reliable overall accuracies, although in some rare cases they yielded low accuracies. Altogether, our results demonstrate promising applications of both supervised and unsupervised algorithms as data mining tools for accurate fingerprinting of different species and subspecies of the genus Pisum, a fundamental priority in breeding programs of the crop. Copyright © 2015 Elsevier Ltd. All rights reserved.
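K-Means, one of the unsupervised models evaluated, is Lloyd's algorithm applied to the allele-profile vectors. A generic numpy sketch on separable toy data; this is not the study's actual datasets, distance weighting, or software:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm): assign points to the nearest
    center, then move each center to the mean of its points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated toy "allele profile" groups
X = np.vstack([np.zeros((10, 3)), np.ones((10, 3)) * 5.0])
labels, centers = kmeans(X, k=2)
```

K-Medoids differs only in constraining each center to be an actual sample, which is more robust for categorical allele data and may explain its better accuracy in the study.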
Tehan, Peta Ellen; Santos, Derek; Chuter, Vivienne Helaine
2016-08-01
The toe-brachial index (TBI) is used as an adjunct to the ankle-brachial index (ABI) for non-invasive lower limb vascular screening. With increasing evidence suggesting limitations of the ABI for diagnosis of vascular complications, particularly in specific populations including diabetes cohorts, the TBI is being used more widely. The aim of this review was to determine the sensitivity and specificity of the TBI for detecting peripheral artery disease (PAD) in populations at risk of this disease. A database search was conducted to identify work relating to the sensitivity and specificity of toe-brachial indices published up to July 2015. Only studies using valid diagnostic imaging as a reference standard were included, and the QUADAS-2 tool was used to critically appraise them. Seven studies met the inclusion criteria. Sensitivity of the TBI for PAD was reported in all seven studies and ranged from 45% to 100%; specificity was reported in five studies and ranged from 16% to 100%. In conclusion, this review suggests that the TBI has variable diagnostic accuracy for the presence of PAD in specific at-risk populations. There was a notable lack of large-scale studies determining the diagnostic accuracy of the TBI in different at-risk cohorts, and standardised normal values need to be established before the diagnostic accuracy of this test can be conclusively determined. © The Author(s) 2016.
Mandibular canine: A tool for sex identification in forensic odontology.
Kumawat, Ramniwas M; Dindgire, Sarika L; Gadhari, Mangesh; Khobragade, Pratima G; Kadoo, Priyanka S; Yadav, Pradeep
2017-01-01
The aim of this study was to investigate the accuracy of the mandibular canine index (MCI) and mandibular mesiodistal odontometrics for sex identification in the 17-25 years age group of a central Indian population. The study sample comprised a total of 300 individuals (150 males and 150 females) aged 17 to 25 years from the central Indian population. The maximum mesiodistal diameter of the mandibular canines and the linear distance between the tips of the mandibular canines were measured using a digital vernier caliper on the study models. Overall, sex could be predicted accurately in 79.66% (81.33% of males and 78% of females) of the population by MCI. When mandibular canine width alone was considered for sex identification, the overall accuracy observed was 75% for the right mandibular canine and 73% for the left. Sexual dimorphism of the canine is population specific, and in the Indian population the MCI and the mesiodistal dimension of the mandibular canine can aid in sex determination.
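The MCI referenced above is a simple ratio, typically the mesiodistal crown width of the mandibular canine divided by the inter-canine distance, compared against a population-specific "standard MCI" cut-off. A minimal sketch; the cut-off value below is hypothetical, since the abstract does not report it:

```python
def mandibular_canine_index(canine_md_width_mm: float, intercanine_distance_mm: float) -> float:
    """MCI = mesiodistal crown width of the mandibular canine / inter-canine distance."""
    return canine_md_width_mm / intercanine_distance_mm

def predict_sex(mci: float, standard_mci: float) -> str:
    # In the classic method an observed MCI above the population's
    # 'standard MCI' cut-off suggests male; this cut-off is hypothetical.
    return "male" if mci > standard_mci else "female"

mci = mandibular_canine_index(7.2, 26.0)  # ~0.277
print(predict_sex(mci, standard_mci=0.268))  # male
```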
The verification of LANDSAT data in the geographical analysis of wetlands in west Tennessee
NASA Technical Reports Server (NTRS)
Rehder, J.; Quattrochi, D. A.
1978-01-01
The reliability of LANDSAT imagery as a medium for identifying, delimiting, monitoring, measuring, and mapping wetlands in west Tennessee was assessed to verify LANDSAT as an accurate, efficient cartographic tool that could be employed by a wide range of users to study wetland dynamics. The verification procedure was based on the visual interpretation and measurement of multispectral imagery. The accuracy testing procedure was predicated on surrogate ground truth data gleaned from medium altitude imagery of the wetlands. Fourteen sites or case study areas were selected from individual 9 x 9 inch photo frames on the aerial photography. These sites were then used as data control calibration parameters for assessing the cartography accuracy of the LANDSAT imagery. An analysis of results obtained from the verification tests indicated that 1:250,000 scale LANDSAT data were the most reliable scale of imagery for visually mapping and measuring wetlands using the area grid technique. The mean areal percentage of accuracy was 93.54 percent (real) and 96.93 percent (absolute). As a test of accuracy, the LANDSAT 1:250,000 scale overall wetland measurements were compared with an area cell mensuration of the swamplands from 1:130,000 scale color infrared U-2 aircraft imagery. The comparative totals substantiated the results from the LANDSAT verification procedure.
Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.
Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan
2016-09-01
Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for the diagnosis of FAI. A search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. The clinical tests included were the impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSRL (Resisted Straight Leg Raise) test, Scour test, maximal squat test, and the anterior impingement test. The IROP test, impingement sign, and FABER test showed the highest sensitivity for identifying FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited owing to the heterogeneity of the available studies. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cargnin, Sarah; Jommi, Claudio; Canonico, Pier Luigi; Genazzani, Armando A; Terrazzino, Salvatore
2014-05-01
To determine diagnostic accuracy of HLA-B*57:01 testing for prediction of abacavir-induced hypersensitivity and to quantify the clinical benefit of pretreatment screening through a meta-analytic review of published studies. A comprehensive search was performed up to June 2013. The methodological quality of relevant studies was assessed by the QUADAS-2 tool. The pooled diagnostic estimates were calculated using a random effect model. Despite the presence of heterogeneity in sensitivity or specificity estimates, the pooled diagnostic odds ratio to detect abacavir-induced hypersensitivity on the basis of clinical criteria was 33.07 (95% CI: 22.33-48.97, I(2): 13.9%), while diagnostic odds ratio for detection of immunologically confirmed abacavir hypersensitivity was 1141 (95% CI: 409-3181, I(2): 0%). Pooled analysis of risk ratio showed that prospective HLA-B*57:01 testing significantly reduced the incidence of abacavir-induced hypersensitivity. This meta-analysis demonstrates an excellent diagnostic accuracy of HLA-B*57:01 testing to detect immunologically confirmed abacavir hypersensitivity and corroborates existing recommendations.
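The diagnostic odds ratio pooled above can be computed directly from a 2×2 table as (TP×TN)/(FP×FN), which is algebraically equivalent to (sens/(1−sens))/((1−spec)/spec). A minimal sketch with hypothetical counts, not data from the meta-analysis:

```python
def diagnostic_odds_ratio(tp: int, fp: int, fn: int, tn: int) -> float:
    """DOR = odds of a positive test among the diseased / odds among the non-diseased."""
    return (tp * tn) / (fp * fn)

# Hypothetical counts: sens = 45/50 = 0.9, spec = 90/100 = 0.9,
# so the equivalent form (0.9/0.1) / (0.1/0.9) also gives 81.
print(diagnostic_odds_ratio(tp=45, fp=10, fn=5, tn=90))  # 81.0
```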
Schrangl, Patrick; Reiterer, Florian; Heinemann, Lutz; Freckmann, Guido; Del Re, Luigi
2018-05-18
Systems for continuous glucose monitoring (CGM) are evolving quickly, and the data obtained are expected to become the basis for clinical decisions for many patients with diabetes in the near future. However, this requires that their analytical accuracy is sufficient. This accuracy is usually determined in clinical studies by comparing the data obtained by the given CGM system with blood glucose (BG) point measurements made with a so-called reference method, which is assumed to indicate the correct value of the target quantity. Unfortunately, due to the nature of the clinical trials and the approach used, such a comparison is subject to several effects which may lead to misleading results. While some reasons for the differences between the values obtained with CGM and BG point measurements are relatively well known (e.g., measurement in different body compartments), others related to the clinical study protocols are less visible but also quite important. In this review, we present a general picture of the topic as well as tools that allow one to correct, or at least estimate, the uncertainty of measures of CGM system performance.
Farran, Bassam; Channanath, Arshad Mohamed; Behbehani, Kazem; Thanaraj, Thangavel Alphonse
2013-05-14
We build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness of diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (and of natives vs expatriate migrants) and of using regional data in risk assessment. Retrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses fivefold cross-validation to obtain generalisation accuracies and errors. Kuwait Health Network (KHN), which integrates data from primary health centres and hospitals in Kuwait. 270 172 hospital visitors (of whom 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives, Asian and Arab expatriates. Incident type 2 diabetes, hypertension and comorbidity. Classification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign 'high' risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are assigned 'low' risk. Asian-specific models and assessments perform even better. Diabetes was modelled both in the general population and in the hypertensive population, and hypertension likewise. Two-stage aggregate classification models and risk assessment tools, built by combining the component models on diabetes (or on hypertension), perform better than individual models. Data on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case-control models to assess risks. These tools aid in the preliminary non-intrusive assessment of the population.
Ethnicity was found to be significant in the predictive models. Risk assessments need to be developed using regional data, as we demonstrate by examining the applicability of the American Diabetes Association online calculator on data from Kuwait.
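As an illustration of the k-NN-with-fivefold-cross-validation pipeline described above, here is a minimal self-contained sketch on synthetic two-class data; it is not the study's model or data:

```python
import random
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training points (squared Euclidean distance)."""
    neighbours = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

def fivefold_accuracy(data, k=3):
    """Mean accuracy over five folds, assigning every fifth point to each fold."""
    random.shuffle(data)
    folds = [data[i::5] for i in range(5)]
    accs = []
    for i in range(5):
        test = folds[i]
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        correct = sum(knn_predict(train, x, k) == y for x, y in test)
        accs.append(correct / len(test))
    return sum(accs) / 5

random.seed(0)
# Synthetic two-class data standing in for the simple clinical parameters.
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(60)] + \
       [((random.gauss(3, 1), random.gauss(3, 1)), 1) for _ in range(60)]
acc = fivefold_accuracy(data, k=5)
print(acc > 0.85)  # well-separated classes classify accurately
```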
Image-enhanced endoscopy with I-scan technology for the evaluation of duodenal villous patterns.
Cammarota, Giovanni; Ianiro, Gianluca; Sparano, Lucia; La Mura, Rossella; Ricci, Riccardo; Larocca, Luigi M; Landolfi, Raffaele; Gasbarrini, Antonio
2013-05-01
I-scan technology is a newly developed endoscopic tool that works in real time and uses a digital contrast method to enhance the endoscopic image. We performed a feasibility study aimed at determining the diagnostic accuracy of i-scan technology for the evaluation of duodenal villous patterns, with histology as the reference standard. In this prospective, single-center, open study, patients undergoing upper endoscopy for a histological evaluation of the duodenal mucosa were enrolled. All patients underwent upper endoscopy using high-resolution view in association with i-scan technology. During endoscopy, duodenal villous patterns were evaluated and classified as normal, partial villous atrophy, or marked villous atrophy. Results were then compared with histology. One hundred fifteen subjects were recruited in this study. The endoscopist was able to find marked villous atrophy of the duodenum in 12 subjects, partial villous atrophy in 25, and normal villi in the remaining 78 individuals. The i-scan system demonstrated excellent accuracy (100%) in the detection of marked villous atrophy patterns, and somewhat lower accuracy in determining partial villous atrophy or normal villous patterns (90% for both). Image-enhancing endoscopic technology allows a clear visualization of villous patterns in the duodenum. By switching from the standard to the i-scan view, it is possible to optimize the accuracy of endoscopy in recognizing villous alteration in subjects undergoing endoscopic evaluation.
Accuracy of Pediatric Trauma Field Triage: A Systematic Review.
van der Sluijs, Rogier; van Rein, Eveline A J; Wijnand, Joep G J; Leenen, Luke P H; van Heijl, Mark
2018-05-16
Field triage of pediatric patients with trauma is critical for transporting the right patient to the right hospital. Mortality and lifelong disabilities are potentially attributable to erroneously transporting a patient in need of specialized care to a lower-level trauma center. To quantify the accuracy of field triage and associated diagnostic protocols used to identify children in need of specialized trauma care. MEDLINE, Embase, PsycINFO, and Cochrane Register of Controlled Trials were searched from database inception to November 6, 2017, for studies describing the accuracy of diagnostic tests to identify children in need of specialized trauma care in a prehospital setting. Identified articles with a study population including patients not transported by emergency medical services were excluded. Quality assessment was performed using a modified version of the Quality Assessment of Diagnostic Accuracy Studies-2. After deduplication, 1430 relevant articles were assessed, a full-text review of 38 articles was conducted, and 5 of those articles were included. All studies were observational, published between 1996 and 2017, and conducted in the United States, and data collection was prospective in 1 study. Three different protocols were studied that analyzed a combined total of 1222 children in need of specialized trauma care. One protocol was specifically developed for a pediatric out-of-hospital cohort. The percentage of pediatric patients requiring specialized trauma care in each study varied between 2.6% (110 of 4197) and 54.7% (58 of 106). The sensitivity of the prehospital triage tools ranged from 49.1% to 87.3%, and the specificity ranged from 41.7% to 84.8%. No prehospital triage protocol alone complied with the international standard of 95% or greater sensitivity. 
Undertriage and overtriage rates, representative of the quality of the full diagnostic strategy to transport a patient to the right hospital, were not reported for inclusive trauma systems or emergency medical services regions. It is crucial to transport the right patient to the right hospital. Yet the quality of the full diagnostic strategy to determine the optimal receiving hospital is unknown. None of the investigated field triage protocols complied with current sensitivity targets. Improved efforts are needed to develop accurate child-specific tools to prevent undertriage and its potential life-threatening consequences.
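Undertriage and overtriage, discussed above, relate directly to sensitivity and specificity. A minimal sketch under one common convention (definitions vary between trauma systems, and the counts below are hypothetical, not from the included studies):

```python
def triage_rates(tp: int, fp: int, fn: int, tn: int):
    """Under one common convention:
    undertriage = FN / (TP + FN) = 1 - sensitivity  (patients needing specialized care sent elsewhere)
    overtriage  = FP / (FP + TN) = 1 - specificity  (patients not needing it sent to a trauma center)
    """
    undertriage = fn / (tp + fn)
    overtriage = fp / (fp + tn)
    return undertriage, overtriage

u, o = triage_rates(tp=80, fp=150, fn=20, tn=750)
print(round(u, 2), round(o, 2))  # 0.2 0.17
```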
Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein
2015-01-01
Background High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN)-based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least squares models (LSM) method. Methods We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. Accuracy comparison between ANN and regression analysis was performed using the bootstrap technique with 200 iterations. Results Starting from a 69-item questionnaire, a reduced model was developed that included the eight knowledge items found to result in the highest accuracy of 62% CI (58-67%). The best prediction accuracy in the full and reduced models was attained by ANN, at 66% and 62%, respectively, compared with full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy overall in the full and reduced models was 82% and 102%, respectively. Conclusions Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate a patient's behavior. This will support future research to further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
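The bootstrap comparison with 200 iterations mentioned in the Methods can be sketched as a percentile bootstrap over per-subject correctness flags; the flags below are hypothetical stand-ins for the study's data:

```python
import random

def bootstrap_accuracy_ci(correct_flags, iterations=200, alpha=0.05, seed=1):
    """Percentile bootstrap CI for classification accuracy, resampling
    per-subject correct/incorrect flags with replacement."""
    random.seed(seed)
    n = len(correct_flags)
    stats = sorted(sum(random.choices(correct_flags, k=n)) / n for _ in range(iterations))
    lo = stats[int((alpha / 2) * iterations)]
    hi = stats[int((1 - alpha / 2) * iterations) - 1]
    return lo, hi

# Hypothetical flags for 115 subjects, ~66% classified correctly:
flags = [1] * 76 + [0] * 39
lo, hi = bootstrap_accuracy_ci(flags)
print(lo <= 76 / 115 <= hi)  # point estimate lies inside its bootstrap CI
```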
Rabito, Estela Iraci; Marcadenti, Aline; da Silva Fink, Jaqueline; Figueira, Luciane; Silva, Flávia Moraes
2017-08-01
There is an international consensus that nutrition screening be performed at the hospital; however, there is no "best tool" for screening of malnutrition risk in hospitalized patients. To evaluate (1) the accuracy of the MUST (Malnutrition Universal Screening Tool), MST (Malnutrition Screening Tool), and SNAQ (Short Nutritional Assessment Questionnaire) in comparison with the NRS-2002 (Nutritional Risk Screening 2002) in identifying patients at risk of malnutrition and (2) the ability of these nutrition screening tools to predict morbidity and mortality. A specific questionnaire was administered to complete the 4 screening tools. Outcome measures included length of hospital stay, transfer to the intensive care unit, presence of infection, and incidence of death. A total of 752 patients were included. The nutrition risk was 29.3%, 37.1%, 33.6%, and 31.3% according to the NRS-2002, MUST, MST, and SNAQ, respectively. All screening tools showed satisfactory performance in identifying patients at nutrition risk (area under the receiver operating characteristic curve between 0.765 and 0.808). Patients at nutrition risk showed a higher risk of very long hospital stay compared with those not at risk, independent of the tool applied (relative risk, 1.35-1.78). An increased risk of mortality (2.34 times) was detected by the MUST. The MUST, MST, and SNAQ share similar accuracy with the NRS-2002 in identifying risk of malnutrition, and all instruments were positively associated with very long hospital stay. In clinical practice, the 4 tools could be applied, and the choice among them should be made according to the particularities of the service.
Harrison, Jennifer K; Fearon, Patricia; Noel-Storr, Anna H; McShane, Rupert; Stott, David J; Quinn, Terry J
2015-03-10
The diagnosis of dementia relies on the presence of new-onset cognitive impairment affecting an individual's functioning and activities of daily living. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) is a questionnaire instrument, completed by a suitable 'informant' who knows the patient well, designed to assess change in functional performance secondary to cognitive change; it is used as a tool to identify those who may have dementia. In secondary care there are two specific instances where patients may be assessed for the presence of dementia. These are in the general acute hospital setting, where opportunistic screening may be undertaken, or in specialist memory services where individuals have been referred due to perceived cognitive problems. To ensure an instrument is suitable for diagnostic use in these settings, its test accuracy must be established. To determine the diagnostic accuracy of the informant-based questionnaire IQCODE for detection of all-cause (undifferentiated) dementia in adults presenting to secondary-care services. We searched the following sources on 28 January 2013: ALOIS (Cochrane Dementia and Cognitive Improvement Group), MEDLINE (Ovid SP), EMBASE (Ovid SP), PsycINFO (Ovid SP), BIOSIS Previews (Thomson Reuters Web of Science), Web of Science Core Collection (includes Conference Proceedings Citation Index) (Thomson Reuters Web of Science), CINAHL (EBSCOhost) and LILACS (BIREME). We also searched sources specific to diagnostic test accuracy: MEDION (Universities of Maastricht and Leuven); DARE (Database of Abstracts of Reviews of Effects, via the Cochrane Library); the HTA Database (Health Technology Assessment Database, via the Cochrane Library) and ARIF (Birmingham University).
We also checked reference lists of relevant studies and reviews, used searches of known relevant studies in PubMed to track related articles, and contacted research groups conducting work on IQCODE for dementia diagnosis to try to find additional studies. We developed a sensitive search strategy; search terms were designed to cover key concepts using several different approaches run in parallel and included terms relating to cognitive tests, cognitive screening and dementia. We used standardised database subject headings such as MeSH terms (in MEDLINE) and other standardised headings (controlled vocabulary) in other databases, as appropriate. We selected those studies performed in secondary-care settings, which included (not necessarily exclusively) IQCODE to assess for the presence of dementia and where dementia diagnosis was confirmed with clinical assessment. For the 'secondary care' setting we included all studies which assessed patients in hospital (e.g. acute unscheduled admissions, referrals to specialist geriatric assessment services etc.) and those referred for specialist 'memory' assessment, typically in psychogeriatric services. We screened all titles generated by electronic database searches, and reviewed abstracts of all potentially relevant studies. Two independent assessors checked full papers for eligibility and extracted data. We determined quality assessment (risk of bias and applicability) using the QUADAS-2 tool, and reporting quality using the STARD tool. From 72 papers describing IQCODE test accuracy, we included 13 papers, representing data from 2745 individuals (n = 1413 (51%) with dementia). 
Pooled analysis of all studies using data presented closest to a cut-off of 3.3 indicated that sensitivity was 0.91 (95% CI 0.86 to 0.94); specificity 0.66 (95% CI 0.56 to 0.75); the positive likelihood ratio was 2.7 (95% CI 2.0 to 3.6) and the negative likelihood ratio was 0.14 (95% CI 0.09 to 0.22). There was a statistically significant difference in test accuracy between the general hospital setting and the specialist memory setting (P = 0.019), suggesting that IQCODE performs better in a 'general' setting. We found no significant differences in the test accuracy of the short (16-item) versus the 26-item IQCODE, or in the language of administration. There was significant heterogeneity in the included studies, including a highly varied prevalence of dementia (10.5% to 87.4%). Across the included papers there was substantial potential for bias, particularly around sampling of included participants and selection criteria, which may limit generalisability. There was also evidence of suboptimal reporting, particularly around disease severity and handling indeterminate results, which are important if considering use in clinical practice. The IQCODE can be used to identify older adults in the general hospital setting who are at risk of dementia and require specialist assessment; it is useful specifically for ruling out those without evidence of cognitive decline. The language of administration did not affect test accuracy, which supports the cross-cultural use of the tool. These findings are qualified by the significant heterogeneity, the potential for bias and suboptimal reporting found in the included studies.
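The pooled likelihood ratios above follow directly from the pooled sensitivity and specificity (LR+ = sens/(1−spec), LR− = (1−sens)/spec); a minimal sketch reproducing the reported values:

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Pooled estimates from the review: sensitivity 0.91, specificity 0.66.
lr_pos, lr_neg = likelihood_ratios(0.91, 0.66)
print(round(lr_pos, 1), round(lr_neg, 2))  # 2.7 0.14
```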
NASA Astrophysics Data System (ADS)
Kitsakis, K.; Alabey, P.; Kechagias, J.; Vaxevanidis, N.
2016-11-01
'Low-cost 3D printing' is a term that refers to the fused filament fabrication (FFF) technique, which constructs physical prototypes by depositing material layer by layer using a thermal nozzle head. Nowadays, 3D printing is widely used in medical applications such as tissue engineering, as well as a supporting tool for diagnosis and treatment in neurosurgery, orthopedic and dental-cranio-maxillo-facial surgery. 3D CAD medical models are usually obtained from MRI or CT scans and then sent to a 3D printer for physical model creation. The present paper is focused on a brief overview of the benefits and limitations of 3D printing applications in the field of medicine, as well as on a dimensional accuracy study of the low-cost 3D printing technique.
Performance evaluation of Titanium nitride coated tool in turning of mild steel
NASA Astrophysics Data System (ADS)
Srinivas, B.; Pramod Kumar, G.; Cheepu, Muralimohan; Jagadeesh, N.; kumar, K. Ravi; Haribabu, S.
2018-03-01
The growth in demand for biodegradable materials has opened a venue for using vegetable oils, coconut oil, etc., as alternatives to conventional coolants in machining operations. At present, in manufacturing industries the demand for surface quality is increasing rapidly, along with dimensional accuracy and geometric tolerances. The present study examines the influence of cutting parameters on surface roughness during the turning of mild steel with a TiN-coated carbide tool, using groundnut oil and soluble oil as coolants. The results showed that the vegetable oil gave a better surface finish compared with soluble oil. The cutting parameters were optimized with the Taguchi technique. The main objective is to optimize the cutting parameters and reduce surface roughness, and analogously to increase tool life by applying a coating to the carbide inserts. The coating adds cost but is more economical than changing tools frequently. Plots were generated and analysed to find the relationships between the parameters, which were confirmed by a comparison study between the predicted and theoretical results.
Laudato, Pietro Aniello; Pierzchala, Katarzyna; Schizas, Constantin
2018-03-15
A retrospective radiological study. The aim of this study was to evaluate the accuracy of pedicle screw insertion using O-arm navigation, robotic assistance, or a freehand fluoroscopic technique. Pedicle screw insertion using either O-arm navigation or robotic devices is gaining popularity. Although several studies are available evaluating each of these techniques separately, no direct comparison has been attempted. Eighty-four patients undergoing implantation of 569 lumbar and thoracic screws were divided into three groups. Eleven patients (64 screws) had screws inserted using robotic assistance and 25 patients (191 screws) using the O-arm, while 48 patients (314 screws) had screws inserted using lateral fluoroscopy in a freehand technique. A single experienced spine surgeon assisted by a spinal fellow performed all procedures. Screw placement accuracy was assessed by two independent observers on postoperative computed tomography (CT) scans according to the A-to-D Rampersaud criteria. No statistically significant difference was noted between the three groups. Overall, 70.4% of screws in the freehand group, 69.6% in the O-arm group, and 78.8% in the robotic group were placed completely within the pedicle margins (grade A) (P > 0.05). Screws were considered misplaced (grades C and D) in 6.4% of cases in the freehand group, 4.2% in the O-arm group, and 4.7% in the robotic group (P > 0.05). The spinal fellow inserted screws with the same accuracy as the senior surgeon (P > 0.05). The advent of new technologies does not appear to alter the accuracy of screw placement in our setting. Under supervision, spinal fellows may perform as well as experienced surgeons using new tools. The lack of difference in accuracy does not imply that the above-mentioned techniques have no added advantages. Other issues, such as surgeon/patient radiation, fiddle factor and teaching suitability, outside the scope of the present study, need further assessment. Level of Evidence: 3.
Accuracy of Urine Color to Detect Equal to or Greater Than 2% Body Mass Loss in Men.
McKenzie, Amy L; Muñoz, Colleen X; Armstrong, Lawrence E
2015-12-01
Clinicians and athletes can benefit from field-expedient measurement tools, such as urine color, to assess hydration state; however, the diagnostic efficacy of this tool has not been established. To determine the diagnostic accuracy of urine color assessment to distinguish a hypohydrated state (≥2% body mass loss [BML]) from a euhydrated state (<2% BML) after exercise in a hot environment. Controlled laboratory study. Environmental chamber in a laboratory. Twenty-two healthy men (age = 22 ± 3 years, height = 180.4 ± 8.7 cm, mass = 77.9 ± 12.8 kg, body fat = 10.6% ± 4.6%). Participants cycled at 68% ± 6% of their maximal heart rates in a hot environment (36°C ± 1°C) for 5 hours or until 5% BML was achieved. At the point of each 1% BML, we assessed urine color. Diagnostic efficacy of urine color was assessed using receiver operating characteristic curve analysis, sensitivity, specificity, and likelihood ratios. Urine color was useful as a diagnostic tool to identify hypohydration after exercise in the heat (area under the curve = 0.951, standard error = 0.022; P < .001). A urine color of 5 or greater identified BML ≥2% with 88.9% sensitivity and 84.8% specificity (positive likelihood ratio = 5.87, negative likelihood ratio = 0.13). Under the conditions of acute dehydration due to exercise in a hot environment, urine color assessment can be a valid, practical, inexpensive tool for assessing hydration status. Researchers should examine the utility of urine color to identify a hypohydrated state under different BML conditions.
Rossi, Esther Diana; Fadda, Guido; Schmitt, Fernando
2014-01-01
Thyroid nodules are a common finding in the general population, including both nonneoplastic and neoplastic entities. Fine-needle aspiration cytology (FNAC) is the first tool for evaluating thyroid nodules. In spite of its high diagnostic accuracy, 25% of nodules result in the category of follicular neoplasms (FN), with varying risk of malignancy and different management strategies. The use of ancillary techniques is reshaping the practice of FNAC. These tools can significantly empower the morphological diagnosis and prognosis of thyroid nodules, allowing a more accurate prediction of the nature of the lesion. Several studies have underlined the role of single or multiple testing for the category of FN as strong indicators of cancer. Every cytological preparation can be used for the application of ancillary techniques but the introduction of liquid-based cytology (LBC) might facilitate the application. Our experience involving an immunocytochemical panel made up of HBME-1 and galectin-3 pointed to an 81% overall diagnostic accuracy in discriminating between low and high risk of malignancy in FN. The application of these techniques on LBC represents an adjunct to the morphological evaluation of FN. They represent a critical and challenging, but also a feasible, tool in the preoperative diagnoses, allowing specific prognostic and predictive details regardless of the cytological preparation. © 2014 S. Karger AG, Basel.
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
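For context on the SDT measures being compared with the sequential sampling parameters above, a minimal sketch of the standard equal-variance computations (the hit and false-alarm rates are hypothetical, not from the experiment):

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float):
    """Equal-variance SDT: d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

d_prime, criterion = sdt_measures(0.9, 0.3)
print(round(d_prime, 2), round(criterion, 2))  # 1.81 -0.38
```

A negative criterion here indicates a liberal bias toward responding "signal", which is the kind of response-bias shift the Criterion parameter of the sampling model tracks.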
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e., the number of hidden layers, learning rate, momentum, and RMS. This study tested the capability of artificial neural networks in the prediction of erosion risk with several input parameters through multiple simulations to obtain good classification results. The model was implemented in Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on accuracy compared with the other parameters. A small number of iterations can produce good accuracy if the combination of other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a parameter combination of 1 hidden layer (HL), learning rate (LR) 0.01, momentum (M) 0.5, RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels (the input dataset of erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce preparation time. A practical method was also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. With the one-time registration method, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable. Numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of robot hand-eye calibration compared with existing methods. The parameter optimization method could significantly improve the absolute positioning accuracy of the one-time registration method, so that it meets the requirements of clinical surgery.
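The coordinate-frame chain underlying such a one-time registration can be sketched with 4x4 homogeneous transforms: the camera observes the marker, the joint parameters give the base-to-end transform, and composing these yields the camera-to-base relationship. The frame names and numeric values below are illustrative assumptions, not the paper's data.

```python
import math

def rot_z(theta, tx=0.0, ty=0.0, tz=0.0):
    """Homogeneous rigid transform: rotation about z plus a translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_se3(t):
    """Invert a rigid transform: R -> R^T, p -> -R^T p."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]   # R transposed
    p = [t[i][3] for i in range(3)]
    out = [r[i] + [-sum(r[i][k] * p[k] for k in range(3))] for i in range(3)]
    out.append([0.0, 0.0, 0.0, 1.0])
    return out

# Synthetic ground truth: where the camera sits relative to the robot base.
T_cam_base = rot_z(0.3, 0.5, -0.2, 1.0)
# Known from joint parameters and the (error-prone) end-tool model:
T_base_end = rot_z(-0.7, 0.4, 0.1, 0.6)
T_end_marker = rot_z(0.1, 0.0, 0.0, 0.05)   # marker mounted on the end-tool
# What the fixed binocular camera would measure:
T_cam_marker = matmul(T_cam_base, matmul(T_base_end, T_end_marker))
# One-time registration: recover camera-to-base from a single observation.
T_recovered = matmul(T_cam_marker, inv_se3(matmul(T_base_end, T_end_marker)))
```

Errors in T_end_marker propagate directly into the recovered camera-to-base transform, which is why the paper treats the end-to-end-tool matrix as the variable to optimize.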
A visual training tool for the Photoload sampling technique
Violet J. Holley; Robert E. Keane
2010-01-01
This visual training aid is designed to provide Photoload users with a tool to increase the accuracy of fuel loading estimates when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers with a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...
USDA-ARS?s Scientific Manuscript database
Using multiple historical satellite surface soil moisture products, the Kalman Filtering-based Soil Moisture Analysis Rainfall Tool (SMART) is applied to improve the accuracy of a multi-decadal global daily rainfall product that has been bias-corrected to match the monthly totals of available rain g...
ERIC Educational Resources Information Center
Klein, P.; Hirth, M.; Gröber, S.; Kuhn, J.; Müller, A.
2014-01-01
Smartphones and tablets are used as experimental tools and for quantitative measurements in two traditional laboratory experiments for undergraduate physics courses. The Doppler effect is analyzed and the speed of sound is determined with an accuracy of about 5% using ultrasonic frequency and two smartphones, which serve as rotating sound emitter…
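For a rotating emitter, the maximum and minimum received frequencies follow the classical Doppler formulas f_max = f0*c/(c - v) and f_min = f0*c/(c + v), which combine to c = v*(f_max + f_min)/(f_max - f_min), independent of the rest frequency f0. The numbers below are illustrative, not the article's measurements.

```python
def speed_of_sound(f_max, f_min, v):
    """Recover c from the Doppler-shifted extremes of a source rotating
    with tangential speed v (m/s); the rest frequency f0 cancels out."""
    return v * (f_max + f_min) / (f_max - f_min)

# Illustrative values: a 40 kHz ultrasonic emitter on a 5 m/s circular path.
c_true, v, f0 = 343.0, 5.0, 40_000.0
f_max = f0 * c_true / (c_true - v)   # source approaching the receiver
f_min = f0 * c_true / (c_true + v)   # source receding from the receiver
c_est = speed_of_sound(f_max, f_min, v)
```

Because only the frequency extremes and the tangential speed enter the formula, uncertainty in the measured peak frequencies maps directly onto the ~5% accuracy quoted in the abstract.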
A Constitutive Relationship between Crack Propagation and Specific Damping Capacity in Steel
1990-10-01
diagnostic tool for detecting crack growth in structures. The model must be simple to act as a tool, but it must be comprehensive to provide accuracy. [Garbled nomenclature fragment; recoverable symbols: ε_u, critical strain for static fracture; ε_p, critical strain above which plastic strain occurs; Δε_p, average value of the cyclic plastic-strain range; ε_t = ln(A_0/A_i), true strain.]
Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions
NASA Technical Reports Server (NTRS)
Hartung, Lin C.
1991-01-01
A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.
Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to electromagnetically sensitive spacecraft. This study employs the multilevel fast multipole method (MLFMM) from a commercial electromagnetic tool, FEKO, to model the fairing electromagnetic environment in the presence of an internal transmitter with improved accuracy over industry applied techniques. This fairing model includes material properties representative of acoustic blanketing commonly used in vehicles. Equivalent surface material models within FEKO were successfully applied to simulate the test case. Finally, a simplified model is presented using Nicholson Ross Weir derived blanket material properties. These properties are implemented with the coated metal option to reduce the model to one layer within the accuracy of the original three layer simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Eric J
The ResStock analysis tool is helping states, municipalities, utilities, and manufacturers identify which home upgrades save the most energy and money. Across the country there is vast diversity in the age, size, construction practices, installed equipment, appliances, and resident behavior of the housing stock, not to mention the range of climates. These variations have hindered the accuracy of predicting savings for existing homes. Researchers at the National Renewable Energy Laboratory (NREL) developed ResStock, a versatile tool that takes a new approach to large-scale residential energy analysis by combining large public and private data sources, statistical sampling, detailed subhourly building simulations, and high-performance computing. This combination achieves unprecedented granularity and, most importantly, accuracy in modeling the diversity of the single-family housing stock.
A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Horst; Laurischkat, Roman; Zhu Junhong
One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.
Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura
2016-01-01
Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been shown to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, enhancing it into a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. 
Moreover, the criteria for dosimetric endpoints that were unsatisfied in the 5 poor-quality plans could typically be satisfied when the TPS-QC tool generated re-optimized plans, without sacrificing other dosimetric endpoints. In addition to its feasibility and accuracy, the proposed TPS-QC tool is user-friendly and easy to operate, both necessary characteristics for clinical use. PMID:26930204
Screening for bipolar spectrum disorders: A comprehensive meta-analysis of accuracy studies.
Carvalho, André F; Takwoingi, Yemisi; Sales, Paulo Marcelo G; Soczynska, Joanna K; Köhler, Cristiano A; Freitas, Thiago H; Quevedo, João; Hyphantis, Thomas N; McIntyre, Roger S; Vieta, Eduard
2015-02-01
Bipolar spectrum disorders are frequently under-recognized and/or misdiagnosed in various settings. Several influential publications recommend the routine screening of bipolar disorder. A systematic review and meta-analysis of accuracy studies for the bipolar spectrum diagnostic scale (BSDS), the hypomania checklist (HCL-32) and the mood disorder questionnaire (MDQ) were performed. The Pubmed, EMBASE, Cochrane, PsycINFO and SCOPUS databases were searched. Studies were included if the accuracy properties of the screening measures were determined against a DSM or ICD-10 structured diagnostic interview. The QUADAS-2 tool was used to rate bias. Fifty-three original studies met inclusion criteria (N=21,542). At recommended cutoffs, summary sensitivities were 81%, 66% and 69%, while specificities were 67%, 79% and 86% for the HCL-32, MDQ, and BSDS in psychiatric services, respectively. The HCL-32 was more accurate than the MDQ for the detection of type II bipolar disorder in mental health care centers (P=0.018). At a cutoff of 7, the MDQ had a summary sensitivity of 43% and a summary specificity of 95% for detection of bipolar disorder in primary care or general population settings. Most studies were performed in mental health care settings. Several included studies had a high risk of bias. Although accuracy properties of the three screening instruments did not consistently differ in mental health care services, the HCL-32 was more accurate than the MDQ for the detection of type II BD. More studies in other settings (for example, in primary care) are necessary. Copyright © 2014 Elsevier B.V. All rights reserved.
Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations
Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.
2018-01-01
Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology coupled with the availability of spatial data and GIS software have facilitated advancements in species distribution modeling. These advancements also bring challenges, including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs and to discuss the relationship between mapping approaches and accuracy metrics; we also performed a case study on the implications of employing different mapping methods and provide recommendations on appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of RSF results, and there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, focused on carefully selecting and describing the method used to map RSF results, and on relating mapping approaches to accuracy metrics. PMID:29887652
Mitra, Sumit; Kumar, Mohan; Sharma, Vivek; Mukhopadhyay, Debasis
2010-01-01
Background: Intraoperative cytology is an important diagnostic modality improving on the accuracy of the frozen sections. It has shown to play an important role especially in the intraoperative diagnosis of central nervous system tumors. Aim: To study the diagnostic accuracy of squash preparation and frozen section (FS) in the intraoperative diagnosis of central nervous system (CNS) tumors. Materials and Methods: This prospective study of 114 patients with CNS tumors was conducted over a period of 18 months (September 2004 to February 2006). The cytological preparations were stained by the quick Papanicolaou method. The squash interpretation and FS diagnosis were later compared with the paraffin section diagnosis. Results: Of the 114 patients, cytological diagnosis was offered in 96 cases. Eighteen nonneoplastic or noncontributory cases were excluded. Using hematoxylin and eosin-stained histopathology sections as the gold standard, the diagnostic accuracy of cytology was 88.5% (85/96) and the accuracy on FS diagnosis was 90.6% (87/96). Among these cases, gliomas formed the largest category of tumors (55.2%). The cytological accuracy in this group was 84.9% (45/53) and the comparative FS figure was 86.8% (46/53). In cases where the smear and the FS diagnosis did not match, the latter opinion was offered. Conclusions: Squash preparation is a reliable, rapid and easy method and can be used as a complement to FS in the intraoperative diagnosis of CNS tumors. PMID:21187881
FlavonoidSearch: A system for comprehensive flavonoid annotation by mass spectrometry.
Akimoto, Nayumi; Ara, Takeshi; Nakajima, Daisuke; Suda, Kunihiro; Ikeda, Chiaki; Takahashi, Shingo; Muneto, Reiko; Yamada, Manabu; Suzuki, Hideyuki; Shibata, Daisuke; Sakurai, Nozomu
2017-04-28
Currently, in mass spectrometry-based metabolomics, limited reference mass spectra are available for flavonoid identification. In the present study, a database of probable mass fragments for 6,867 known flavonoids (FsDatabase) was manually constructed based on new structure- and fragmentation-related rules using new heuristics to overcome flavonoid complexity. We developed the FlavonoidSearch system for flavonoid annotation, which consists of the FsDatabase and a computational tool (FsTool) to automatically search the FsDatabase using the mass spectra of metabolite peaks as queries. This system showed the highest identification accuracy for the flavonoid aglycone when compared to existing tools and revealed accurate discrimination between the flavonoid aglycone and other compounds. Sixteen new flavonoids were found from parsley, and the diversity of the flavonoid aglycone among different fruits and vegetables was investigated.
Mahmood, Khalid; Jung, Chol-Hee; Philip, Gayle; Georgeson, Peter; Chung, Jessica; Pope, Bernard J; Park, Daniel J
2017-05-16
Genetic variant effect prediction algorithms are used extensively in clinical genomics and research to determine the likely consequences of amino acid substitutions on protein function. It is vital that we better understand their accuracies and limitations because published performance metrics are confounded by serious problems of circularity and error propagation. Here, we derive three independent, functionally determined human mutation datasets, UniFun, BRCA1-DMS and TP53-TA, and employ them, alongside previously described datasets, to assess the pre-eminent variant effect prediction tools. Apparent accuracies of variant effect prediction tools were influenced significantly by the benchmarking dataset. Benchmarking with the assay-determined datasets UniFun and BRCA1-DMS yielded areas under the receiver operating characteristic curves in the modest ranges of 0.52 to 0.63 and 0.54 to 0.75, respectively, considerably lower than observed for other, potentially more conflicted datasets. These results raise concerns about how such algorithms should be employed, particularly in a clinical setting. Contemporary variant effect prediction tools are unlikely to be as accurate at the general prediction of functional impacts on proteins as previously reported. Use of functional assay-based datasets that avoid prior dependencies promises to be valuable for the ongoing development and accurate benchmarking of such tools.
Yi, Ming; Zhao, Yongmei; Jia, Li; He, Mei; Kebebew, Electron; Stephens, Robert M.
2014-01-01
To apply exome-seq-derived variants in the clinical setting, there is an urgent need to identify the best variant caller(s) from a large collection of available options. We have used an Illumina exome-seq dataset as a benchmark, with two validation scenarios—family pedigree information and SNP array data for the same samples, permitting global high-throughput cross-validation, to evaluate the quality of SNP calls derived from several popular variant discovery tools from both the open-source and commercial communities using a set of designated quality metrics. To the best of our knowledge, this is the first large-scale performance comparison of exome-seq variant discovery tools using high-throughput validation with both Mendelian inheritance checking and SNP array data, which allows us to gain insights into the accuracy of SNP calling through such high-throughput validation in an unprecedented way, whereas the previously reported comparison studies have only assessed concordance of these tools without directly assessing the quality of the derived SNPs. More importantly, the main purpose of our study was to establish a reusable procedure that applies high-throughput validation to compare the quality of SNP discovery tools with a focus on exome-seq, which can be used to compare any forthcoming tool(s) of interest. PMID:24831545
On the evaluation of segmentation editing tools
Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.
2014-01-01
Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063
Predicting risk and outcomes for frail older adults: an umbrella review of frailty screening tools
Apóstolo, João; Cooke, Richard; Bobrowicz-Campos, Elzbieta; Santana, Silvina; Marcucci, Maura; Cano, Antonio; Vollenbroek-Hutten, Miriam; Germini, Federico; Holland, Carol
2017-01-01
EXECUTIVE SUMMARY Background A scoping search identified systematic reviews on diagnostic accuracy and predictive ability of frailty measures in older adults. In most cases, research was confined to specific assessment measures related to a specific clinical model. Objectives To summarize the best available evidence from systematic reviews in relation to reliability, validity, diagnostic accuracy and predictive ability of frailty measures in older adults. Inclusion criteria Population Older adults aged 60 years or older recruited from community, primary care, long-term residential care and hospitals. Index test Available frailty measures in older adults. Reference test Cardiovascular Health Study phenotype model, the Canadian Study of Health and Aging cumulative deficit model, Comprehensive Geriatric Assessment or other reference tests. Diagnosis of interest Frailty defined as an age-related state of decreased physiological reserves characterized by an increased risk of poor clinical outcomes. Types of studies Quantitative systematic reviews. Search strategy A three-step search strategy was utilized to find systematic reviews, available in English, published between January 2001 and October 2015. Methodological quality Assessed by two independent reviewers using the Joanna Briggs Institute critical appraisal checklist for systematic reviews and research synthesis. Data extraction Two independent reviewers extracted data using the standardized data extraction tool designed for umbrella reviews. Data synthesis Data were only presented in a narrative form due to the heterogeneity of included reviews. Results Five reviews with a total of 227,381 participants were included in this umbrella review. Two reviews focused on reliability, validity and diagnostic accuracy; two examined predictive ability for adverse health outcomes; and one investigated validity, diagnostic accuracy and predictive ability. 
In total, 26 questionnaires and brief assessments and eight frailty indicators were analyzed, most of which were applied to community-dwelling older people. The Frailty Index was examined in almost all these dimensions, with the exception of reliability, and its diagnostic and predictive characteristics were shown to be satisfactory. Gait speed showed high sensitivity, but only moderate specificity, and excellent predictive ability for future disability in activities of daily living. The Tilburg Frailty Indicator was shown to be a reliable and valid measure for frailty screening, but its diagnostic accuracy was not evaluated. The Screening Letter, Timed-up-and-go test and PRISMA 7 (Program of Research to Integrate Services for the Maintenance of Autonomy seven-item questionnaire) demonstrated high sensitivity and moderate specificity for identifying frailty. In general, low physical activity, variously measured, was one of the most powerful predictors of future decline in activities of daily living. Conclusion Only a few frailty measures seem to be demonstrably valid, reliable and diagnostically accurate, and have good predictive ability. Among them, the Frailty Index and gait speed emerged as the most useful in routine care and community settings. However, none of the included systematic reviews provided responses that met all of our research questions on their own and there is a need for studies that could fill this gap, covering all these issues within the same study. Nevertheless, it was clear that no suitable tool for assessing frailty appropriately in emergency departments was identified. PMID:28398987
Aeroelastic Optimization Study Based on X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley; Pak, Chan-Gi
2014-01-01
A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.
Fitzpatrick, Megan J; Mathewson, Paul D; Porter, Warren P
2015-01-01
Mechanistic models provide a powerful, minimally invasive tool for gaining a deeper understanding of the ecology of animals across geographic space and time. In this paper, we modified and validated the accuracy of the mechanistic model Niche Mapper for simulating heat exchanges of animals with counter-current heat exchange mechanisms in their legs and animals that wade in water. We then used Niche Mapper to explore the effects of wading and counter-current heat exchange on the energy expenditures of Whooping Cranes, a long-legged wading bird. We validated model accuracy against the energy expenditure of two captive Whooping Cranes measured using the doubly-labeled water method and time energy budgets. Energy expenditure values modeled by Niche Mapper were similar to values measured by the doubly-labeled water method and values estimated from time-energy budgets. Future studies will be able to use Niche Mapper as a non-invasive tool to explore energy-based limits to the fundamental niche of Whooping Cranes and apply this knowledge to management decisions. Basic questions about the importance of counter-current exchange and wading to animal physiological tolerances can also now be explored with the model.
Using the Red/Yellow/Green Discharge Tool to Improve the Timeliness of Hospital Discharges
Mathews, Kusum S.; Corso, Philip; Bacon, Sandra; Jenq, Grace Y.
2015-01-01
Background As part of Yale-New Haven Hospital (Connecticut)’s Safe Patient Flow Initiative, the physician leadership developed the Red/Yellow/Green (RYG) Discharge Tool, an electronic medical record–based prompt to identify likelihood of patients’ next-day discharge: green (very likely), yellow (possibly), and red (unlikely). The tool’s purpose was to enhance communication with nursing/care coordination and trigger earlier discharge steps for patients identified as “green” or “yellow”. Methods Data on discharge assignments, discharge dates/times, and team designation were collected for all adult medicine patients discharged from October – December 2009 (Study Period 1) and October – December 2011 (Study Period 2), between which the tool’s placement changed from the sign-out note to the daily progress note. Results In Study Period 1, 75.9% of the patients had discharge assignments, compared with 90.8% in Period 2 (p < .001). The overall 11 A.M. discharge rate improved from 10.4% to 21.2% from 2007 to 2011. “Green” patients were more likely to be discharged before 11 A.M. than “yellow” or “red” patients (p < .001). Patients with RYG assignments discharged by 11 A.M. had a lower length of stay than those without assignments and did not have an associated increased risk of readmission. Discharge prediction accuracy worsened after the change in placement, decreasing from 75.1% to 59.1% for “green” patients (p < .001), and from 34.5% to 29.2% (p < .001) for “yellow” patients. In both periods, hospitalists were more accurate than housestaff in discharge predictions, suggesting that education and/or experience may contribute to discharge assignment. Conclusions The RYG Discharge Tool helped facilitate earlier discharges, but accuracy depends on placement in daily work flow and experience. PMID:25016672
Virtual chromoendoscopy can be a useful software tool in capsule endoscopy.
Duque, Gabriela; Almeida, Nuno; Figueiredo, Pedro; Monsanto, Pedro; Lopes, Sandra; Freire, Paulo; Ferreira, Manuela; Carvalho, Rita; Gouveia, Hermano; Sofia, Carlos
2012-05-01
Capsule endoscopy (CE) has revolutionized the study of the small bowel. One major drawback of this technique is that we cannot interfere with the image acquisition process. Therefore, the development of new software tools that could modify the images and increase both detection and diagnosis of small-bowel lesions would be very useful. The Flexible Spectral Imaging Color Enhancement (FICE) system, which allows for virtual chromoendoscopy, is one of these software tools. The aim was to evaluate the reproducibility and diagnostic accuracy of the FICE system in CE. This prospective study involved 20 patients. First, four physicians interpreted 150 static FICE images and the overall agreement between them was determined using the Fleiss kappa test. Second, two experienced gastroenterologists, blinded to each other's results, analyzed the complete 20 video streams. One interpreted conventional capsule videos and the other the CE-FICE videos at setting 2. All findings were reported, regardless of their clinical value. Non-concordant findings between both interpretations were analyzed by a consensus panel of four gastroenterologists who reached a final result (positive or negative finding). In the first arm of the study, the overall concordance between the four gastroenterologists was substantial (0.650). In the second arm, the conventional mode identified 75 findings and the CE-FICE mode 95. The CE-FICE mode did not miss any lesions identified by the conventional mode and allowed the identification of a higher number of angiodysplasias (35 vs. 32) and erosions (41 vs. 24). There is reproducibility for the interpretation of CE-FICE images between different observers experienced in conventional CE. The use of virtual chromoendoscopy in CE seems to increase its diagnostic accuracy by highlighting small-bowel erosions and angiodysplasias that weren't identified by the conventional mode.
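Overall agreement among several raters, as measured in the first arm of this study, is commonly quantified with Fleiss' kappa. A minimal sketch of the statistic (generic, not the study's analysis software; the example table is made up):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table counts[i][j]: the number of raters who
    assigned subject i to category j. Every row must sum to the same
    number of raters n."""
    N = len(counts)          # number of subjects
    n = sum(counts[0])       # raters per subject
    k = len(counts[0])       # number of categories
    # Mean per-subject agreement P_bar and category proportions p_j.
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)   # agreement expected by chance
    return (P_bar - P_e) / (1.0 - P_e)

# Four raters classifying five images as positive/negative:
table = [[4, 0], [3, 1], [0, 4], [4, 0], [2, 2]]
kappa = fleiss_kappa(table)
```

Values around 0.61-0.80 are conventionally read as "substantial" agreement, which is the band the study's 0.650 falls into.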
Rajeev, Aysha; Tuinebreijer, Wim; Mohamed, Abdalla; Newby, Mike
2018-01-01
The assessment of a patient with chronic hip pain can be challenging, and the differential diagnosis of intra-articular pathology causing hip pain is diverse. It includes conditions such as osteoarthritis, fracture, avascular necrosis, synovitis, loose bodies, labral tears, articular pathology and femoro-acetabular impingement. Magnetic resonance imaging (MRI) arthrography of the hip is now widely used for the diagnosis of articular pathology of the hip. A retrospective analysis of 113 patients who had an MRI arthrogram and underwent hip arthroscopy was included in the study. The MRI arthrogram was performed using gadolinium injection and reported by a single radiologist. The findings were then compared with those found on arthroscopy. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy and 95% confidence interval were calculated for each pathology. Labral tear: sensitivity 84% (74.3-90.5), specificity 64% (40.7-82.8), PPV 91% (82.1-95.8), NPV 48% (29.5-67.5), accuracy 80%. Delamination: sensitivity 7% (0.8-22.1), specificity 98% (91.6-99.7), PPV 50% (6.8-93.2), NPV 74% (65.1-82.2), accuracy 39%. Chondral changes: sensitivity 25% (13.3-38.9), specificity 83% (71.3-91.1), PPV 52% (30.6-73.2), NPV 59% (48.0-69.2), accuracy 58%. Femoro-acetabular impingement (CAM deformity): sensitivity 34% (19.6-51.4), specificity 83% (72.2-90.4), PPV 50% (29.9-70.1), NPV 71% (60.6-80.5), accuracy 66%. Synovitis: sensitivity 11% (2.3-28.2), specificity 99% (93.6-100), PPV 75% (19.4-99.4), NPV 77% (68.1-84.6), accuracy 77%. We conclude that MRI arthrogram is a useful investigation for detecting labral tears and is also helpful in the diagnosis of femoro-acetabular impingement. However, for the diagnosis of chondral changes, defects and cartilage delamination, its sensitivity and accuracy are low.
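All of the per-pathology figures above derive from 2x2 tables of arthrogram findings against arthroscopy as the reference standard. A minimal Python sketch, using hypothetical counts chosen for illustration (not the paper's raw data) and Wilson-score 95% intervals (the paper does not state which interval method it used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy metrics from a 2x2 table."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical labral-tear table: 78 true positives, 9 false positives,
# 15 false negatives, 16 true negatives.
m = diagnostic_metrics(tp=78, fp=9, fn=15, tn=16)
print(round(m["sensitivity"][0], 2), round(m["specificity"][0], 2))
```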
Iftikhar, Imran H; Alghothani, Lana; Sardi, Alejandro; Berkowitz, David; Musani, Ali I
2017-07-01
Transbronchial lung cryobiopsy is increasingly being used for the assessment of diffuse parenchymal lung diseases. Several studies have shown larger biopsy samples and higher yields compared with conventional transbronchial biopsies. However, the higher risk of bleeding and other complications has raised concerns about widespread use of this modality. Our objective was to study the diagnostic accuracy and safety profile of transbronchial lung cryobiopsy and compare it with video-assisted thoracoscopic surgery (VATS) by reviewing available evidence from the literature. Medline and PubMed were searched from inception until December 2016. Data on diagnostic performance were abstracted by constructing two-by-two contingency tables for each study. Data on a priori selected safety outcomes were collected. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool. Random-effects meta-analyses were performed to obtain summary estimates of the diagnostic accuracy. The pooled diagnostic yield, pooled sensitivity, and pooled specificity of transbronchial lung cryobiopsy were 83.7% (76.9-88.8%), 87% (85-89%), and 57% (40-73%), respectively. The pooled diagnostic yield, pooled sensitivity, and pooled specificity of VATS were 92.7% (87.6-95.8%), 91.0% (89-92%), and 58% (31-81%), respectively. The incidence of grade 2 (moderate to severe) endobronchial bleeding after transbronchial lung cryobiopsy was 4.9% (2.2-10.7%), and that of post-procedural pneumothorax was 9.5% (5.9-14.9%). Although the diagnostic test accuracy measures of transbronchial lung cryobiopsy lag behind those of VATS, with an acceptable safety profile and potential cost savings, the former could be considered as an alternative in the evaluation of patients with diffuse parenchymal lung diseases.
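The pooled sensitivities above come from random-effects meta-analysis over per-study 2x2 tables. A simplified sketch of DerSimonian-Laird pooling of proportions on the logit scale (a common univariate approach; the paper may well have used a different model, e.g. a bivariate one), with made-up study counts:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(y):
    return 1 / (1 + math.exp(-y))

def dl_pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale.
    events[i] = e.g. true positives in study i; totals[i] = diseased patients."""
    ys = [logit(e / n) for e, n in zip(events, totals)]
    # Approximate within-study variance of a logit-transformed proportion.
    vs = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]
    ws = [1 / v for v in vs]
    y_fixed = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    # Cochran's Q and the DL between-study variance estimate tau^2.
    q = sum(w * (y - y_fixed) ** 2 for w, y in zip(ws, ys))
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)
    ws_re = [1 / (v + tau2) for v in vs]
    y_re = sum(w * y for w, y in zip(ws_re, ys)) / sum(ws_re)
    return inv_logit(y_re)

# Made-up example: three cryobiopsy studies correctly identifying 44/50,
# 61/70 and 25/30 diseased patients.
print(round(dl_pooled_proportion([44, 61, 25], [50, 70, 30]), 3))
```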
Hand-Held Electronic Gap-Measuring Tools
NASA Technical Reports Server (NTRS)
Sugg, F. E.; Thompson, F. W.; Aragon, L. A.; Harrington, D. B.
1985-01-01
Repetitive measurements simplified by tool based on operation of linear variable-differential transformer (LVDT). With fingers in open position, gap-measuring tool rests on digital readout instrument. With fingers inserted in gap, separation alters inductance of LVDT in plastic handle. Originally developed for measuring gaps between surface tiles of Space Shuttle orbiter, tool reduces measurement time from 20 minutes per tile to 2 minutes. Also reduces possibility of damage to tiles during measurement. Tool has potential applications in mass production: helps ensure proper gap dimensions in assembly of refrigerator and car doors, and also used to measure dimensions of components and to verify positional accuracy of components during progressive assembly operations.
Small scale sequence automation pays big dividends
NASA Technical Reports Server (NTRS)
Nelson, Bill
1994-01-01
Galileo sequence design and integration are supported by a suite of formal software tools. Sequence review, however, is largely a manual process, with reviewers scanning hundreds of pages of cryptic computer printouts to verify sequence correctness. Beginning in 1990, a series of small, PC-based sequence review tools evolved. Each tool performs a specific task, but all have a common 'look and feel'. The narrow focus of each tool means simpler operation and easier creation, testing, and maintenance. Benefits from these tools are (1) decreased review time by factors of 5 to 20 or more, with a concomitant reduction in staffing, (2) increased review accuracy, and (3) excellent returns on time invested.
Wind Prediction Accuracy for Air Traffic Management Decision Support Tools
NASA Technical Reports Server (NTRS)
Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan
2000-01-01
The performance of Air Traffic Management (ATM) and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also discovered that relatively large errors (e.g., greater than 20 knots), existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes), are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratories and the Forecast Systems Lab of the National Oceanographic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focused on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state-of-the-art systems, and development/validation of system enhancements to improve skill. A 13-month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then-standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MITLL based on the Integrated Terminal Wind System (ITWS).
AW is run at a local facility (Center) level and updates RUC predictions based on an optimal interpolation of the latest ACARS reports since the RUC run. This paper presents an overview of the study's results, including the identification and use of new wind-prediction accuracy metrics that are key to ATM DST performance.
Face and construct validity of a computer-based virtual reality simulator for ERCP.
Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V
2010-02-01
Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.
Mapping invasive aquatic vegetation in the Sacramento-San Joaquin Delta using hyperspectral imagery.
Underwood, E C; Mulitsch, M J; Greenberg, J A; Whiting, M L; Ustin, S L; Kefauver, S C
2006-10-01
The ecological and economic impacts associated with invasive species are of critical concern to land managers. The ability to map the extent and severity of invasions would be a valuable contribution to management decisions relating to control and monitoring efforts. We investigated the use of hyperspectral imagery for mapping invasive aquatic plant species in the Sacramento-San Joaquin Delta in the Central Valley of California at two spatial scales. Sixty-four flightlines of HyMap hyperspectral imagery were acquired over the study region, covering an area of 2,139 km², and field work was conducted to acquire GPS locations of target invasive species. We used spectral mixture analysis to classify two target invasive species: Brazilian waterweed (Egeria densa), a submerged invasive, and water hyacinth (Eichhornia crassipes), a floating emergent invasive. At the relatively fine spatial scale of five sites within the Delta (average size 51 ha), average classification accuracies were 93% for Brazilian waterweed and 73% for water hyacinth. However, at the coarser, Delta-wide scale (177,000 ha) these accuracies were 29% for Brazilian waterweed and 65% for water hyacinth. The difference in accuracy is likely accounted for by the broad range in water turbidity and tide heights encountered across the Delta. These findings illustrate that hyperspectral imagery is a promising tool for discriminating target invasive species within the Sacramento-San Joaquin Delta waterways, although more work is needed to develop classification tools that function under changing environmental conditions.
NASA Astrophysics Data System (ADS)
Rajabzadeh-Oghaz, Hamidreza; Varble, Nicole; Davies, Jason M.; Mowla, Ashkan; Shakir, Hakeem J.; Sonig, Ashish; Shallwani, Hussain; Snyder, Kenneth V.; Levy, Elad I.; Siddiqui, Adnan H.; Meng, Hui
2017-03-01
Neurosurgeons currently base most of their treatment decisions for intracranial aneurysms (IAs) on morphological measurements made manually from 2D angiographic images. These measurements tend to be inaccurate because 2D measurements cannot capture the complex geometry of IAs and because manual measurements vary with the clinician's experience and opinion. Incorrect morphological measurements may lead to inappropriate treatment strategies. In order to improve the accuracy and consistency of morphological analysis of IAs, we have developed an image-based computational tool, AView. In this study, we quantified the accuracy of the computer-assisted adjuncts of AView for aneurysmal morphologic assessment by performing measurements on spheres of known size and on anatomical IA models. AView has an average morphological error of 0.56% in size and 2.1% in volume measurement. We also investigated the clinical utility of this tool on a retrospective clinical dataset, comparing size and neck diameter measurements between 2D manual and 3D computer-assisted measurement. The average error was 22% and 30% in the manual measurement of size and aneurysm neck diameter, respectively. Inaccuracies due to manual measurements could therefore lead to wrong treatment decisions in 44% and inappropriate treatment strategies in 33% of the IAs. Furthermore, computer-assisted analysis of IAs improves the consistency of measurement among clinicians by 62% in size and 82% in neck diameter measurement. We conclude that AView dramatically improves accuracy for morphological analysis. These results illustrate the necessity of a computer-assisted approach for the morphological analysis of IAs.
Using quality assessment tools to critically appraise ageing research: a guide for clinicians.
Harrison, Jennifer Kirsty; Reid, James; Quinn, Terry J; Shenkin, Susan Deborah
2017-05-01
Evidence based medicine tells us that we should not accept published research at face value. Even research from established teams published in the highest impact journals can have methodological flaws, biases and limited generalisability. The critical appraisal of research studies can seem daunting, but tools are available to make the process easier for the non-specialist. Understanding the language and process of quality assessment is essential when considering or conducting research, and is also valuable for all clinicians who use published research to inform their clinical practice. We present a review written specifically for the practising geriatrician. This considers how quality is defined in relation to the methodological conduct and reporting of research. Having established why quality assessment is important, we present and critique tools which are available to standardise quality assessment. We consider five study designs: RCTs, non-randomised studies, observational studies, systematic reviews and diagnostic test accuracy studies. Quality assessment for each of these study designs is illustrated with an example of published cognitive research. The practical applications of the tools are highlighted, with guidance on their strengths and limitations. We signpost educational resources and offer specific advice for use of these tools. We hope that all geriatricians become comfortable with critical appraisal of published research and that use of the tools described in this review - along with awareness of their strengths and limitations - becomes a part of teaching, journal clubs and practice. © The Author 2016. Published by Oxford University Press on behalf of the British Geriatrics Society.
Recommendation in evolving online networks
NASA Astrophysics Data System (ADS)
Hu, Xiao; Zeng, An; Shang, Ming-Sheng
2016-02-01
A recommender system is an effective tool for finding the most relevant information for online users. By analyzing the historical selection records of users, a recommender system predicts the most likely future links in the user-item network and accordingly constructs a personalized recommendation list for each user. So far, the recommendation process has mostly been investigated in static user-item networks. In this paper, we propose a model which allows us to examine the performance of state-of-the-art recommendation algorithms in evolving networks. We find that the recommendation accuracy in general decreases with time if the evolution of the online network fully depends on the recommendation. Interestingly, some randomness in users' choice can significantly improve the long-term accuracy of the recommendation algorithm. When a hybrid recommendation algorithm is applied, we find that the optimal parameter gradually shifts towards the diversity-favoring recommendation algorithm, indicating that recommendation diversity is essential to maintaining high long-term recommendation accuracy. Finally, we confirm our conclusions by studying recommendation on networks with real evolution data.
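The "most likely future links" step can be illustrated with a toy item-based common-neighbour recommender; this is a deliberately minimal stand-in for the state-of-the-art algorithms such a study would benchmark, not the paper's model:

```python
from collections import defaultdict

def recommend(interactions, user, top_n=3):
    """Rank items unseen by `user` by co-occurrence with the items
    the user has already selected. interactions: user -> set of item ids."""
    seen = interactions[user]
    # Co-occurrence counts: how many users selected both items.
    co = defaultdict(int)
    for items in interactions.values():
        for i in items:
            for j in items:
                if i != j:
                    co[(i, j)] += 1
    all_items = {i for items in interactions.values() for i in items}
    scores = defaultdict(int)
    for candidate in all_items - seen:
        for s in seen:
            scores[candidate] += co[(candidate, s)]
    # Highest score first; break ties by item id for determinism.
    ranked = sorted(scores, key=lambda i: (-scores[i], i))
    return ranked[:top_n]

history = {"alice": {1, 2}, "bob": {1, 2, 3}, "carol": {2, 3}}
print(recommend(history, "alice", top_n=1))
```

In the paper's evolving setting, each user would then pick from such a list with probability 1 - epsilon and at random otherwise; it is that epsilon of randomness which the authors find keeps long-term accuracy from decaying.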
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and wavelets in particular provide a promising solution. However, wavelets can be applied in multiple configurations, and variations in configuration impact accuracy, storage cost, and execution time. While the variation of these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
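The accuracy-versus-storage tradeoff such a study measures can be sketched with a toy experiment: a multi-level orthonormal Haar transform (the simplest wavelet; the study evaluates far more elaborate configurations and data), keeping only the largest-magnitude coefficients and measuring the reconstruction error:

```python
import math

SQRT2 = math.sqrt(2)

def haar_forward(signal):
    """Multi-level orthonormal Haar transform; length must be a power of two.
    Returns the final 1-element approximation and per-level detail lists."""
    a, details = list(signal), []
    while len(a) > 1:
        details.append([(a[i] - a[i + 1]) / SQRT2 for i in range(0, len(a), 2)])
        a = [(a[i] + a[i + 1]) / SQRT2 for i in range(0, len(a), 2)]
    return a, details

def haar_inverse(approx, details):
    a = list(approx)
    for det in reversed(details):
        nxt = []
        for s, d in zip(a, det):
            nxt.extend([(s + d) / SQRT2, (s - d) / SQRT2])
        a = nxt
    return a

def compress(details, keep_fraction):
    """Zero all but the largest-magnitude fraction of detail coefficients."""
    flat = sorted((abs(c) for det in details for c in det), reverse=True)
    n_keep = max(1, int(keep_fraction * len(flat)))
    threshold = flat[n_keep - 1]
    return [[c if abs(c) >= threshold else 0.0 for c in det] for det in details]

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, details = haar_forward(signal)
rebuilt = haar_inverse(approx, compress(details, keep_fraction=0.5))
err = math.sqrt(sum((x - y) ** 2 for x, y in zip(signal, rebuilt)))
print(round(err, 3))
```

Because the transform is orthonormal, the L2 reconstruction error equals the L2 norm of the discarded coefficients (Parseval), which makes the storage-accuracy tradeoff easy to reason about.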
Alexander, William; Miller, George; Alexander, Preeya; Henderson, Michael A; Webb, Angela
2018-06-12
Skin cancers are extremely common, and their incidence increases with age. Care for patients with multiple or complicated skin cancers often requires multidisciplinary input involving a general practitioner, dermatologist, plastic surgeon and/or radiation oncologist. Timely, efficient care of these patients relies on precise and effective communication between all parties. Until now, descriptions of the location of lesions on the scalp have been imprecise, which can lead to error, with the incorrect lesion being excised or biopsied. A novel technique for accurately and efficiently describing the location of lesions on the scalp using a coordinate system, the 'scalp coordinate system' (SCS), is described. This method was tested in a pilot study by clinicians typically involved in the care of patients with cutaneous malignancies; a mannequin scalp was used in the study. The SCS significantly improved accuracy in both describing and locating lesions on the scalp. This improved accuracy comes at a minor time cost. The direct and indirect costs arising from poor communication between medical subspecialties (particularly relevant in surgical procedures) are immense. An effective tool used by all involved clinicians is long overdue, particularly in patients with scalps with extensive actinic damage, scarring or innocuous biopsy sites. The SCS provides the opportunity to improve outcomes for both the patient and the healthcare system. © 2018 Royal Australasian College of Surgeons.